1 [[chapter_pvesdn]]
2 Software Defined Network
3 ========================
4 ifndef::manvolnum[]
5 :pve-toplevel:
6 endif::manvolnum[]
7
The **S**oftware **D**efined **N**etwork (SDN) feature allows you to create
virtual networks (VNets) at the datacenter level.
10
WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.
14
15
16 [[pvesdn_installation]]
17 Installation
18 ------------
19
To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:
22
23 ----
24 apt install libpve-network-perl
25 ----
26
You also need the `ifupdown2` package installed on each node, to be able to
reload the local network configuration without a reboot:
29
30 ----
31 apt install ifupdown2
32 ----
33
Finally, you need to add the following line at the end of
`/etc/network/interfaces`, to have the SDN configuration included:

----
source /etc/network/interfaces.d/*
----
39
40
41 Basic Overview
42 --------------
43
The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on which type or plugin the zone uses, it can behave differently and offer
different features, advantages or disadvantages.
Normally a 'VNet' appears as a common Linux bridge with either a VLAN or
'VXLAN' tag, however, some can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration has
been committed from the cluster-wide datacenter SDN administration interface.
55
56
57 Main configuration
58 ~~~~~~~~~~~~~~~~~~
59
The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system at
`/etc/pve/sdn`.
63
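The configuration there is split into several plain-text files, one per
configuration type. A minimal sketch of what the directory may contain once
zones, VNets, subnets, a controller, an IPAM and a DNS server have been
configured (the exact file names are an assumption and depend on which
features you actually use):

----
# ls /etc/pve/sdn
controllers.cfg  dns.cfg  ipams.cfg  subnets.cfg  vnets.cfg  zones.cfg
----
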
On the web interface, the SDN feature has 3 main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition, the following options are available:

* Controller: For complex setups, to control layer 3 routing

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest
  IPs)

* DNS: Define a DNS server API for registering virtual guests' hostnames and
  IP addresses
82
[[pvesdn_config_main_sdn]]
85 SDN
86 ~~~
87
This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.
93
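The same apply/reload can also be triggered from the command line through the
cluster API, using the `pvesh` tool. A sketch, assuming the apply action is
exposed as `PUT /cluster/sdn`:

----
# push the pending SDN configuration and reload it on all cluster nodes
pvesh set /cluster/sdn
----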
94
95 [[pvesdn_local_deployment_monitoring]]
96 Local Deployment Monitoring
97 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
98
After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn` and reloaded with ifupdown2.
102
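To inspect what was generated on a given node, you can simply look at that
file; reloading the local network configuration by hand is also possible with
the standard ifupdown2 reload command:

----
# show the SDN-generated interface definitions on this node
cat /etc/network/interfaces.d/sdn

# reload the local network configuration (ifupdown2)
ifreload -a
----
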
You can monitor the status of local zones and VNets through the main tree.
104
105
106 [[pvesdn_config_zone]]
107 Zones
108 -----
109
110 A zone will define a virtually separated network.
111
112 It can use different technologies for separation:
113
* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated bridge. A simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing
123
124 You can restrict a zone to specific nodes.
125
It is also possible to add permissions to a zone, in order to restrict users
to a specific zone and only the VNets in that zone.
128
129 Common options
130 ~~~~~~~~~~~~~~
131
The following options are available for all zone types (see the configuration
sketch after this list):

nodes:: The nodes on which the VNets configured for this zone should be
deployed and usable; the zone is restricted to these nodes.

ipam:: Optional. Use an IPAM tool to manage the IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.
145
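As a rough illustration of how these options end up on disk, a hypothetical
simple zone entry in `/etc/pve/sdn/zones.cfg` could look like the following.
The key names and layout are an assumption here; check the file on your
installation for the exact format:

----
simple: testzone
        nodes node1,node2
        ipam pve
        dnszone mydomain.com
----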
146
147 [[pvesdn_zone_plugin_simple]]
148 Simple Zones
149 ~~~~~~~~~~~~
150
This is the simplest plugin. It will create an isolated VNet bridge.
This bridge is not linked to a physical interface, and VM traffic is only
local to the node(s).
It can also be used in NAT or routed setups.
155
156 [[pvesdn_zone_plugin_vlan]]
157 VLAN Zones
158 ~~~~~~~~~~
159
This plugin reuses an existing local Linux or OVS bridge and manages the VLANs
on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separate zones.
164
165 Specific `VLAN` configuration options:
166
167 bridge:: Reuse this local bridge or OVS switch, already
168 configured on *each* local node.
169
170 [[pvesdn_zone_plugin_qinq]]
171 QinQ Zones
172 ~~~~~~~~~~
173
QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.
176
177 NOTE: Your physical network switches must support stacked VLANs!
178
179 Specific QinQ configuration options:
180
181 bridge:: A local VLAN-aware bridge already configured on each local node
182
183 service vlan:: The main VLAN tag of this zone
184
mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.
188
189 [[pvesdn_zone_plugin_vxlan]]
190 VXLAN Zones
191 ~~~~~~~~~~~
192
The VXLAN plugin establishes a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of public internet network
nodes.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IPs from all nodes through which you want to
communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
209
210 [[pvesdn_zone_plugin_evpn]]
211 EVPN Zones
212 ~~~~~~~~~~
213
214 This is the most complex of all supported plugins.
215
BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use that address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN-ID used for routing interconnect between
VNets; it must be different from the VXLAN-IDs of the VNets.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network to the real network. These nodes will announce
a default route in the EVPN network.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.
238
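For illustration only, an EVPN zone entry in `/etc/pve/sdn/zones.cfg` might
look roughly like the following sketch. The exact key names are an assumption;
the values mirror the EVPN setup example later in this chapter:

----
evpn: myevpnzone
        controller myevpnctl
        vrf-vxlan 10000
        exitnodes node1,node2
        mtu 1450
----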
239
240 [[pvesdn_config_vnet]]
241 VNets
242 -----
243
244 A `VNet` is in its basic form just a Linux bridge that will be deployed locally
245 on the node and used for Virtual Machine communication.
246
247 VNet properties are:
248
ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN id

VLAN Aware:: Enables adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN tag.
259
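As a sketch, assuming the on-disk format follows the same section style as the
other SDN configuration files, a VNet entry in `/etc/pve/sdn/vnets.cfg` could
look like this (names and keys are illustrative assumptions):

----
vnet: myvnet1
        zone myvlanzone
        tag 10
        alias first-test-vnet
----
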
[[pvesdn_config_subnet]]
262 Sub-Nets
263 ~~~~~~~~
264
A sub-network (subnet or sub-net) allows you to define a specific IP network
(IPv4 or IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* restrict the IP addresses you can define on a specific VNet
* assign routes/gateways on a VNet in layer 3 zones
* enable SNAT on a VNet in layer 3 zones
* auto assign IPs to virtual guests (VM or CT) through an IPAM plugin
* DNS registration through DNS plugins
275
If an IPAM server is associated with the subnet's zone, the subnet prefix will
automatically be registered in the IPAM.
278
279
280 Subnet properties are:
281
ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (simple/EVPN plugins), for this
subnet. The subnet's source IP will be NATted to the server's outgoing
interface/IP. On EVPN zones, this is only done on the EVPN gateway nodes.

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
`<hostname>.prefix.<domain>`.
292
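A hypothetical subnet entry in `/etc/pve/sdn/subnets.cfg`, assuming the ID is
stored as `<zone>-<network>-<mask>` (this naming, as well as the key names,
are assumptions; verify against your own installation):

----
subnet: myzone-10.0.1.0-24
        vnet myvnet1
        gateway 10.0.1.1
        snat 1
        dnszoneprefix mycompany
----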
293
294 [[pvesdn_config_controllers]]
295 Controllers
296 -----------
297
298 Some zone types need an external controller to manage the VNet control-plane.
299 Currently this is only required for the `bgp-evpn` zone plugin.
300
301 [[pvesdn_controller_plugin_evpn]]
302 EVPN Controller
303 ~~~~~~~~~~~~~~~
304
305 For `BGP-EVPN`, we need a controller to manage the control plane.
306 The currently supported software controller is the "frr" router.
307 You may need to install it on each node where you want to deploy EVPN zones.
308
309 ----
310 apt install frr frr-pythontools
311 ----
312
313 Configuration options:
314
asn:: A unique BGP ASN number. It is highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing, or being broken by it, by mistake.

peers:: A list of the IPs of all nodes that should communicate over the EVPN
(these can also be external nodes or route reflector servers).
321
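On disk, this corresponds to an entry in `/etc/pve/sdn/controllers.cfg`. A
sketch using the same values as the EVPN setup example later in this chapter
(the exact key names are an assumption):

----
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
----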
322
323 [[pvesdn_controller_plugin_BGP]]
324 BGP Controller
325 ~~~~~~~~~~~~~~~
326
The BGP controller is not used directly by a zone.
You can use it to configure FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, thus doing
EBGP.
331
332 Configuration options:
333
asn:: A unique BGP ASN number. It is highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could end up breaking global routing, or being broken by it, by
mistake.

peers:: A list of the IPs of the peers you want to communicate with, for the
underlying BGP network.

ebgp:: If your peer's remote AS is different, this enables EBGP.

node:: The node of this BGP controller.

loopback:: Use a loopback or dummy interface as the source for the EVPN
network (for multipath).
347
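A per-node BGP controller entry in `/etc/pve/sdn/controllers.cfg` might look
roughly like the following sketch (names, keys and addresses are purely
illustrative assumptions):

----
bgp: bgpnode1
        node node1
        asn 65001
        peers 172.16.0.254
        ebgp 1
----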
348
349 [[pvesdn_config_ipam]]
350 IPAMs
351 -----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of your devices on the network. It can be used, for example, to find a free IP
address when you create a VM or CT (not yet implemented).

An IPAM is associated with one or more zones, to provide IP addresses for all
the subnets defined in those zones.
356
357
358 [[pvesdn_ipam_plugin_pveipam]]
359 {pve} IPAM plugin
360 ~~~~~~~~~~~~~~~~~
361
This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.
364
365 [[pvesdn_ipam_plugin_phpipam]]
366 phpIPAM plugin
367 ~~~~~~~~~~~~~~
368 https://phpipam.net/
369
You need to create an application in phpIPAM and add an API token with admin
permission.
372
373 phpIPAM properties are:
374
375 url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`
376 token:: An API access token
section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.
379
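For illustration, a phpIPAM entry in `/etc/pve/sdn/ipams.cfg` could then look
like this (hypothetical values; the key names mirror the properties above but
are an assumption):

----
phpipam: myphpipam
        url https://phpipam.domain.com/api/myapp
        token 0123456789abcdef
        section 1
----
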
380 [[pvesdn_ipam_plugin_netbox]]
381 Netbox IPAM plugin
382 ~~~~~~~~~~~~~~~~~~
383
384 NetBox is an IP address management (IPAM) and data center infrastructure
385 management (DCIM) tool, see the source code repository for details:
386 https://github.com/netbox-community/netbox
387
You need to create an API token in NetBox:
https://netbox.readthedocs.io/en/stable/api/authentication
390
391 NetBox properties are:
392
393 url:: The REST API endpoint: `http://yournetbox.domain.com/api`
394 token:: An API access token
395
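Analogous to the phpIPAM sketch above, a NetBox entry in
`/etc/pve/sdn/ipams.cfg` might look like the following (hypothetical values,
key names assumed):

----
netbox: mynetbox
        url https://yournetbox.domain.com/api
        token 0123456789abcdef
----
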
396 [[pvesdn_config_dns]]
397 DNS
398 ---
399
400 The DNS plugin in {pve} SDN is used to define a DNS API server for registration
401 of your hostname and IP-address. A DNS configuration is associated with one or
402 more zones, to provide DNS registration for all the sub-net IPs configured for
403 a zone.
404
405 [[pvesdn_dns_plugin_powerdns]]
406 PowerDNS plugin
407 ~~~~~~~~~~~~~~~
408 https://doc.powerdns.com/authoritative/http-api/index.html
409
410 You need to enable the webserver and the API in your PowerDNS config:
411
412 ----
413 api=yes
414 api-key=arandomgeneratedstring
415 webserver=yes
416 webserver-port=8081
417 ----
418
PowerDNS properties are:
420
421 url:: The REST API endpoint: http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
422 key:: An API access key
423 ttl:: The default TTL for records
424
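Putting it together, a PowerDNS entry in `/etc/pve/sdn/dns.cfg` could look
roughly like the following sketch (the key names are assumed to mirror the
properties above):

----
powerdns: mypowerdns
        url http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
        key arandomgeneratedstring
        ttl 3600
----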
425
426 Examples
427 --------
428
429 [[pvesdn_setup_example_vlan]]
430 VLAN Setup Example
431 ~~~~~~~~~~~~~~~~~~
432
433 TIP: While we show plain configuration content here, almost everything should
434 be configurable using the web-interface only.
435
436 Node1: /etc/network/interfaces
437
438 ----
439 auto vmbr0
440 iface vmbr0 inet manual
441 bridge-ports eno1
442 bridge-stp off
443 bridge-fd 0
444 bridge-vlan-aware yes
445 bridge-vids 2-4094
446
447 #management ip on vlan100
448 auto vmbr0.100
449 iface vmbr0.100 inet static
450 address 192.168.0.1/24
451
452 source /etc/network/interfaces.d/*
453 ----
454
455 Node2: /etc/network/interfaces
456
457 ----
458 auto vmbr0
459 iface vmbr0 inet manual
460 bridge-ports eno1
461 bridge-stp off
462 bridge-fd 0
463 bridge-vlan-aware yes
464 bridge-vids 2-4094
465
466 #management ip on vlan100
467 auto vmbr0.100
468 iface vmbr0.100 inet static
469 address 192.168.0.2/24
470
471 source /etc/network/interfaces.d/*
472 ----
473
474 Create a VLAN zone named `myvlanzone':
475
476 ----
477 id: myvlanzone
478 bridge: vmbr0
479 ----
480
Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.
483
484 ----
485 id: myvnet1
486 zone: myvlanzone
487 tag: 10
488 ----
489
Apply the configuration through the main SDN panel, to create VNets locally on
each node.
492
493 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
494
495 Use the following network configuration for this VM:
496
497 ----
498 auto eth0
499 iface eth0 inet static
500 address 10.0.3.100/24
501 ----
502
503 Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
504 `myvnet1' as vm1.
505
506 Use the following network configuration for this VM:
507
508 ----
509 auto eth0
510 iface eth0 inet static
511 address 10.0.3.101/24
512 ----
513
514 Then, you should be able to ping between both VMs over that network.
515
516
517 [[pvesdn_setup_example_qinq]]
518 QinQ Setup Example
519 ~~~~~~~~~~~~~~~~~~
520
521 TIP: While we show plain configuration content here, almost everything should
522 be configurable using the web-interface only.
523
524 Node1: /etc/network/interfaces
525
526 ----
527 auto vmbr0
528 iface vmbr0 inet manual
529 bridge-ports eno1
530 bridge-stp off
531 bridge-fd 0
532 bridge-vlan-aware yes
533 bridge-vids 2-4094
534
535 #management ip on vlan100
536 auto vmbr0.100
537 iface vmbr0.100 inet static
538 address 192.168.0.1/24
539
540 source /etc/network/interfaces.d/*
541 ----
542
543 Node2: /etc/network/interfaces
544
545 ----
546 auto vmbr0
547 iface vmbr0 inet manual
548 bridge-ports eno1
549 bridge-stp off
550 bridge-fd 0
551 bridge-vlan-aware yes
552 bridge-vids 2-4094
553
554 #management ip on vlan100
555 auto vmbr0.100
556 iface vmbr0.100 inet static
557 address 192.168.0.2/24
558
559 source /etc/network/interfaces.d/*
560 ----
561
Create a QinQ zone named `qinqzone1' with service VLAN 20
563
564 ----
565 id: qinqzone1
566 bridge: vmbr0
567 service vlan: 20
568 ----
569
570 Create another QinQ zone named `qinqzone2' with service VLAN 30
571
572 ----
573 id: qinqzone2
574 bridge: vmbr0
575 service vlan: 30
576 ----
577
578 Create a VNet named `myvnet1' with customer vlan-id 100 on the previously
579 created `qinqzone1' zone.
580
581 ----
582 id: myvnet1
583 zone: qinqzone1
584 tag: 100
585 ----
586
Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.
589
590 ----
591 id: myvnet2
592 zone: qinqzone2
593 tag: 100
594 ----
595
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
598
599 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
600
601 Use the following network configuration for this VM:
602
603 ----
604 auto eth0
605 iface eth0 inet static
606 address 10.0.3.100/24
607 ----
608
609 Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
610 `myvnet1' as vm1.
611
612 Use the following network configuration for this VM:
613
614 ----
615 auto eth0
616 iface eth0 inet static
617 address 10.0.3.101/24
618 ----
619
620 Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
621 `myvnet2'.
622
623 Use the following network configuration for this VM:
624
625 ----
626 auto eth0
627 iface eth0 inet static
628 address 10.0.3.102/24
629 ----
630
631 Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
632 `myvnet2' as vm3.
633
634 Use the following network configuration for this VM:
635
636 ----
637 auto eth0
638 iface eth0 inet static
639 address 10.0.3.103/24
640 ----
641
Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping
'vm3' or 'vm4', as they are on a different zone with a different service-VLAN.
645
646
647 [[pvesdn_setup_example_vxlan]]
648 VXLAN Setup Example
649 ~~~~~~~~~~~~~~~~~~~
650
651 TIP: While we show plain configuration content here, almost everything should
652 be configurable using the web-interface only.
653
654 node1: /etc/network/interfaces
655
656 ----
657 auto vmbr0
658 iface vmbr0 inet static
659 address 192.168.0.1/24
660 gateway 192.168.0.254
661 bridge-ports eno1
662 bridge-stp off
663 bridge-fd 0
664 mtu 1500
665
666 source /etc/network/interfaces.d/*
667 ----
668
669 node2: /etc/network/interfaces
670
671 ----
672 auto vmbr0
673 iface vmbr0 inet static
674 address 192.168.0.2/24
675 gateway 192.168.0.254
676 bridge-ports eno1
677 bridge-stp off
678 bridge-fd 0
679 mtu 1500
680
681 source /etc/network/interfaces.d/*
682 ----
683
684 node3: /etc/network/interfaces
685
686 ----
687 auto vmbr0
688 iface vmbr0 inet static
689 address 192.168.0.3/24
690 gateway 192.168.0.254
691 bridge-ports eno1
692 bridge-stp off
693 bridge-fd 0
694 mtu 1500
695
696 source /etc/network/interfaces.d/*
697 ----
698
Create a VXLAN zone named `myvxlanzone'. Use a lower MTU, to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.
702
703 ----
704 id: myvxlanzone
705 peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
706 mtu: 1450
707 ----
708
709 Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
710 previously.
711
712 ----
713 id: myvnet1
714 zone: myvxlanzone
715 tag: 100000
716 ----
717
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.
720
721 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
722
723 Use the following network configuration for this VM, note the lower MTU here.
724
725 ----
726 auto eth0
727 iface eth0 inet static
728 address 10.0.3.100/24
729 mtu 1450
730 ----
731
732 Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
733 `myvnet1' as vm1.
734
735 Use the following network configuration for this VM:
736
737 ----
738 auto eth0
739 iface eth0 inet static
740 address 10.0.3.101/24
741 mtu 1450
742 ----
743
Then, you should be able to ping between 'vm1' and 'vm2'.
745
746
747 [[pvesdn_setup_example_evpn]]
748 EVPN Setup Example
749 ~~~~~~~~~~~~~~~~~~
750
751 node1: /etc/network/interfaces
752
753 ----
754 auto vmbr0
755 iface vmbr0 inet static
756 address 192.168.0.1/24
757 gateway 192.168.0.254
758 bridge-ports eno1
759 bridge-stp off
760 bridge-fd 0
761 mtu 1500
762
763 source /etc/network/interfaces.d/*
764 ----
765
766 node2: /etc/network/interfaces
767
768 ----
769 auto vmbr0
770 iface vmbr0 inet static
771 address 192.168.0.2/24
772 gateway 192.168.0.254
773 bridge-ports eno1
774 bridge-stp off
775 bridge-fd 0
776 mtu 1500
777
778 source /etc/network/interfaces.d/*
779 ----
780
781 node3: /etc/network/interfaces
782
783 ----
784 auto vmbr0
785 iface vmbr0 inet static
786 address 192.168.0.3/24
787 gateway 192.168.0.254
788 bridge-ports eno1
789 bridge-stp off
790 bridge-fd 0
791 mtu 1500
792
793 source /etc/network/interfaces.d/*
794 ----
795
Create an EVPN controller, using a private ASN number and the above node
addresses as peers.
798
799 ----
800 id: myevpnctl
801 asn: 65000
802 peers: 192.168.0.1,192.168.0.2,192.168.0.3
803 ----
804
Create an EVPN zone named `myevpnzone', using the previously created
EVPN controller. Define 'node1' and 'node2' as exit nodes.
807
808
809 ----
810 id: myevpnzone
811 vrf vxlan tag: 10000
812 controller: myevpnctl
813 mtu: 1450
814 exitnodes: node1,node2
815 ----
816
Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

818 ----
819 id: myvnet1
820 zone: myevpnzone
821 tag: 11000
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
823 ----
824
Create a subnet '10.0.1.0/24' with '10.0.1.1' as gateway on `myvnet1'.

826 ----
827 id: 10.0.1.0/24
828 gateway: 10.0.1.1
829 ----
830
831 Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
832 different IPv4 CIDR network and a different random MAC address than `myvnet1'.
833
834 ----
835 id: myvnet2
836 zone: myevpnzone
837 tag: 12000
mac address: 8C:73:B2:7B:F9:61 #random MAC, must be different for each VNet
839 ----
840
Create a second subnet '10.0.2.0/24' with '10.0.2.1' as gateway on `myvnet2'.

842 ----
843 id: 10.0.2.0/24
844 gateway: 10.0.2.1
845 ----
846
847
Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR config.
850
851
852 Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.
853
854 Use the following network configuration for this VM:
855
856 ----
857 auto eth0
858 iface eth0 inet static
859 address 10.0.1.100/24
860 gateway 10.0.1.1 #this is the ip of the vnet1
861 mtu 1450
862 ----
863
864 Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
865 `myvnet2'.
866
867 Use the following network configuration for this VM:
868
869 ----
870 auto eth0
871 iface eth0 inet static
872 address 10.0.2.100/24
873 gateway 10.0.2.1 #this is the ip of the vnet2
874 mtu 1450
875 ----
876
877
878 Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
879
If you ping an external IP from 'vm2' on the non-gateway node ('node3'), the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
exit nodes ('node1' or 'node2'), and from there it will leave those nodes over
the default gateway configured on node1 or node2.
884
NOTE: Of course, you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks, via node1 and node2, on your external gateway, so that
the public network can reply back.
888
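For example, assuming the external gateway is a Linux router, the reverse
routes could be added like this (addresses taken from this example, with
'node1' at 192.168.0.1 acting as the next hop):

----
# on the external gateway: route the EVPN subnets via an exit node
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1
# equivalent routes via the second exit node (192.168.0.2) can be added too
----
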
If you have configured an external BGP router, the BGP-EVPN routes
('10.0.1.0/24' and '10.0.2.0/24' in this example) will be announced
dynamically.