[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.

[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need the `ifupdown2` package installed on each node, to be able to
reload the local network configuration without a reboot:

----
apt install ifupdown2
----

Finally, you need to add the following line at the end of
`/etc/network/interfaces`, to have the SDN configuration included:

----
source /etc/network/interfaces.d/*
----
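
For example, assuming the line is not already present, you could append it
with:

----
echo "source /etc/network/interfaces.d/*" >> /etc/network/interfaces
----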

Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones, where a zone is its own virtually
separated network area. A 'VNet' is a virtual network that belongs to a zone.
Depending on which type or plugin the zone uses, it can behave differently and
offer different features, advantages or disadvantages. Normally a 'VNet'
appears as a common Linux bridge with either a VLAN or 'VXLAN' tag, but some
plugins can also use layer 3 routing for control. The 'VNets' are deployed
locally on each node, after the configuration was committed from the
cluster-wide datacenter SDN administration interface.

Main configuration
~~~~~~~~~~~~~~~~~~

The configuration is done at the datacenter (cluster-wide) level. It is saved
in configuration files located in the shared configuration file system at
`/etc/pve/sdn`.
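
For illustration, once some zones, VNets and other options have been
configured, you might see files like the following there (exactly which files
exist depends on your setup):

----
ls /etc/pve/sdn
controllers.cfg  dns.cfg  ipams.cfg  subnets.cfg  vnets.cfg  zones.cfg
----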

On the web interface, the SDN feature has 3 main sections for the
configuration:

* SDN: an overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

And some options:

* Controller: For complex setups, to control layer 3 routing

* Ipams: Allows the use of external tools for IP address management (VM/CT
  IPs)

* Dns: Allows to define a DNS server API for registering VM/CT hostnames and
  IP addresses

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.

[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.
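
You can inspect the generated configuration, and trigger a reload manually
with ifupdown2's `ifreload` command:

----
cat /etc/network/interfaces.d/sdn
ifreload -a
----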

You can monitor the status of the local zones and VNets through the main tree.


[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: layer 2 VXLAN

* Simple: isolated bridge, a simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 border gateway protocol routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users to only a
specific zone and the VNets in that zone.

Common options
~~~~~~~~~~~~~~

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

Ipam:: Optional, an IPAM tool to manage the IPs in this zone.

Dns:: Optional, the DNS API server.

ReverseDns:: Optional, the reverse DNS API server.

Dnszone:: Optional, the DNS domain name. Used to register hostnames like
`<hostname>.<domain>`. The DNS zone needs to already exist on the DNS server.
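
For illustration, a zone definition in `/etc/pve/sdn/zones.cfg` could look
roughly like the following sketch (a hypothetical VLAN zone restricted to two
nodes):

----
vlan: myvlanzone
        bridge vmbr0
        nodes node1,node2
----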

[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used for NAT or routed setups.
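
For example, the configuration generated for a VNet in a simple zone is
essentially just a bridge without any physical ports, roughly like the
following sketch (the VNet name is illustrative):

----
auto myvnet1
iface myvnet1
        bridge-ports none
        bridge-stp off
        bridge-fd 0
----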

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin will reuse an existing local Linux or OVS bridge, and manage VLANs
on it. The benefit of using the SDN module is that you can create different
zones with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge, already configured on each local node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin will establish a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of nodes connected over
the public internet.
This is a layer 2 tunnel only, no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses of all nodes through which you
want to communicate. Can also include external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
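
For illustration, the configuration deployed for a VNet in a VXLAN zone is
conceptually a bridge on top of a VXLAN tunnel interface. A rough sketch, with
illustrative interface names and values, could look like:

----
auto myvnet1
iface myvnet1
        bridge-ports vxlan_myvnet1
        bridge-stp off
        bridge-fd 0
        mtu 1450

auto vxlan_myvnet1
iface vxlan_myvnet1
        vxlan-id 100000
        vxlan-remoteip 192.168.0.2
        vxlan-remoteip 192.168.0.3
        mtu 1450
----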

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use that address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN-id used for routing interconnect between
VNets. It must be different from the VXLAN-ids of the VNets.

controller:: An EVPN-controller needs to be defined first (see controller
plugins section).

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network to the real network. These nodes will announce
a default route in the EVPN network.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

[[pvesdn_config_vnet]]
VNets
-----

A `VNet` is in its basic form just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN id

VLAN Aware:: Allow adding an extra VLAN tag in the virtual machine or
container vNIC configuration, or allow the guest OS to manage the VLAN tag.

[[pvesdn_config_subnet]]
Subnets
~~~~~~~

For each VNet, you can define one or multiple subnets, to define an IP network
(IPv4 or IPv6).

A subnet can be used to:

* restrict the IP addresses you can define on a specific VNet
* assign routes/gateways on a VNet in layer 3 zones
* enable SNAT in layer 3 zones
* auto assign IPs on VMs/CTs through an IPAM plugin, and do DNS registration
  through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
be automatically registered in the IPAM.

Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

Snat:: Optional, enable SNAT for layer 3 zones (simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is done only on the EVPN gateway-nodes.

Dnszoneprefix:: Optional, add a prefix to the domain registration, like
<hostname>.prefix.<domain>

[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----
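
You can check that the frr service is running and enabled with:

----
systemctl status frr
----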

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing, or getting broken by it, by mistake.

peers:: An IP list of all nodes with which you want to communicate for the
EVPN (could also be external nodes or route reflector servers)

[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone.
You can use it to configure frr to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, in order to
do EBGP.

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing, or getting broken by it, by mistake.

peers:: An IP list of peers with which you want to communicate for the
underlay BGP network

ebgp:: If your peers' remote-AS is different, this enables EBGP.

node:: The node of this BGP controller

loopback:: Use a loopback or dummy interface as the source for the EVPN
network (for multipath)

[[pvesdn_config_ipam]]
Ipams
-----

IPAM (IP address management) tools are used to manage/assign the IPs of your
devices on the network. It can, for example, be used to find a free IP address
when you create a VM/CT (not yet implemented).

An IPAM is associated with one or multiple zones, to provide IP addresses for
all subnets defined in those zones.

[[pvesdn_ipam_plugin_pveipam]]
PVEIpam plugin
~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.

[[pvesdn_ipam_plugin_phpipam]]
PHPIpam plugin
~~~~~~~~~~~~~~
https://phpipam.net/

You need to create an application in PHPIpam, and add an API token with admin
permission.

PHPIpam properties are:

Url:: The REST API URL: http://phpipam.domain.com/api/<appname>/

Token:: Your API token

Section:: An integer ID. Sections are a group of subnets in PHPIpam. Default
installations have `sectionid=1` for customers.

[[pvesdn_ipam_plugin_netbox]]
Netbox Ipam plugin
~~~~~~~~~~~~~~~~~~
https://github.com/netbox-community/netbox

You need to create an API token in Netbox:
https://netbox.readthedocs.io/en/stable/api/authentication

Netbox properties are:

Url:: The REST API URL: http://yournetbox.domain.com/api

Token:: Your API token

[[pvesdn_config_dns]]
Dns
---

The DNS plugin is used to define a DNS API server for the registration of your
hostnames and IP addresses. A DNS configuration is associated with one or
multiple zones, to provide DNS registration for all the IPs in subnets defined
in those zones.

[[pvesdn_dns_plugin_powerdns]]
Powerdns plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----
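
You can then verify that the API is reachable, for example with a simple
request (adjust the host, port and key to your setup):

----
curl -H 'X-API-Key: arandomgeneratedstring' \
  http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
----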

Powerdns properties are:

Url:: The REST API URL: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost

key:: The API key

ttl:: The default TTL for records

Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.
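
For example, from 'vm1' you can test reachability of 'vm2' with:

----
ping 10.0.3.101
----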

[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20:

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30:

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are in a different zone with a different
service-vlan.

[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone', using a lower MTU to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM, and note the lower MTU
here:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.

[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone', using the previously created
EVPN-controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
exitnodes: node1,node2
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
id: myvnet1
zone: myevpnzone
tag: 11000
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway:

----
id: 10.0.1.0/24
gateway: 10.0.1.1
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network and a different random MAC address than `myvnet1'.

----
id: myvnet2
zone: myevpnzone
tag: 12000
mac address: 8C:73:B2:7B:F9:61 #random MAC, needs to be different on each VNet
----

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway:

----
id: 10.0.2.0/24
gateway: 10.0.2.1
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR config.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the anycast IP of myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the anycast IP of myvnet2
        mtu 1450
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from a guest on 'node3' (which is not an exit
node), the packet will go to the configured 'myvnet2' gateway, then will be
routed to the exit nodes ('node1' or 'node2') and from there it will leave
those nodes over the default gateway configured on node1 or node2.

NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks, pointing to node1 and node2, on your external gateway,
so that the public network can reply back.
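
For example, on a Linux-based external gateway, such reverse routes could look
like this (assuming the exit node 'node1' is reachable at 192.168.0.1):

----
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1
----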

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.