[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows you to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.


[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need the `ifupdown2` package installed on each node, to be able to
reload the local network configuration without a reboot:

----
apt install ifupdown2
----

Finally, you need to add the following line at the end of
`/etc/network/interfaces`, so that the SDN configuration gets included:

----
source /etc/network/interfaces.d/*
----


Basic Overview
--------------

The {pve} SDN allows for separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones, where a zone is its own virtually
separated network area. A 'VNet' is a virtual network that belongs to a zone.
Depending on the type or plugin the zone uses, it can behave differently and
offer different features, advantages or disadvantages. Normally a 'VNet'
appears as a common Linux bridge with either a VLAN or 'VXLAN' tag, although
some can also use layer 3 routing for control. 'VNets' are deployed locally on
each node, after the configuration has been committed from the cluster-wide
datacenter SDN administration interface.


Main configuration
~~~~~~~~~~~~~~~~~~

The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system at
`/etc/pve/sdn`.

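For example, on a cluster where zones, VNets and related options have been
configured, that directory typically contains one plain-text file per
configuration type. A sketch of how this may look (which files actually exist
depends on the features you use):

----
# ls /etc/pve/sdn
controllers.cfg  dns.cfg  ipams.cfg  subnets.cfg  vnets.cfg  zones.cfg
----
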
On the web interface, the SDN feature has 3 main sections for configuration:

* SDN: An overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: Create virtual network bridges and manage subnets

In addition to this, the following options are available:

* Controller: For complex setups, to control layer 3 routing

* Subnets: Used to define IP networks on VNets

* IPAM: Enables the use of external tools for IP address management (guest
  IPs)

* DNS: Define a DNS server API for registering virtual guests' hostnames and
  IP addresses

[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on different nodes.

The 'Apply' button is used to push and reload the local configuration on all
cluster nodes.


[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
~~~~~~~~~~~~~~~~~~~~~~~~~~~

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You can monitor the status of local zones and VNets through the main tree.

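To inspect what was generated on a specific node, you can look at that file
directly. The following is only an illustrative sketch for a VLAN-based VNet;
the exact content depends on the zone plugin and your configuration:

----
# cat /etc/network/interfaces.d/sdn
auto myvnet1
iface myvnet1
        bridge-ports vmbr0.10
        bridge-stp off
        bridge-fd 0
----
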
[[pvesdn_config_zone]]
Zones
-----

A zone defines a virtually separated network.

Zones can use different technologies for separation:

* VLAN: Virtual LANs are the classic method of subdividing a LAN

* QinQ: Stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: Layer 2 VXLAN

* Simple: Isolated bridge, a simple layer 3 routing bridge (NAT)

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, in order to restrict users to
only being able to use a specific zone and the VNets in that zone.

Common options
~~~~~~~~~~~~~~

The following options are available for all zone types:

nodes:: The nodes on which the VNets configured for this zone should be
deployed and available.

ipam:: Optional. If you want to use an IPAM tool to manage IPs in this zone.

dns:: Optional. DNS API server.

reversedns:: Optional. Reverse DNS API server.

dnszone:: Optional. DNS domain name. Used to register hostnames, such as
`<hostname>.<domain>`. The DNS zone must already exist on the DNS server.

[[pvesdn_zone_plugin_simple]]
Simple Zones
~~~~~~~~~~~~

This is the simplest plugin. It will create an isolated VNet bridge. This
bridge is not linked to a physical interface, and VM traffic is only local to
the node(s). It can also be used in NAT or routed setups.

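As a sketch, a simple zone entry in `/etc/pve/sdn/zones.cfg` could look like
the following (format assumed from the section-config style used by {pve};
`nodes` is the common zone option described above):

----
simple: simplezone1
        nodes node1,node2
----
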
[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This plugin reuses an existing local Linux or OVS bridge, and manages the
VLANs on it. The benefit of using the SDN module is that you can create
different zones with specific VNet VLAN tags, and restrict virtual machines to
separated zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already configured on *each*
local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone (the
so-called 'service-vlan'), and the second VLAN tag is defined for the VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

service vlan protocol:: Allows you to define an 802.1q (default) or 802.1ad
service VLAN type.

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, you must reduce the MTU to `1496` if your physical
interface MTU is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (called an overlay) on top of an
existing network (called an underlay). It encapsulates layer 2 Ethernet frames
within layer 4 UDP datagrams, using `4789` as the default destination port.
You can, for example, create a private IPv4 VXLAN network on top of public
internet network nodes. Because this is a layer 2 tunnel only, no routing
between different VNets is possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IP addresses from each node through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

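As a sketch, a VXLAN zone entry in `/etc/pve/sdn/zones.cfg` could look like
this (field names assumed; the values match the VXLAN setup example further
below):

----
vxlan: myvxlanzone
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        mtu 1450
----
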
[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows you to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, meaning a virtual guest can use this address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN tag:: This is a VXLAN ID used for routing interconnect between
VNets. It must be different than the VXLAN IDs of the VNets.

controller:: An EVPN controller must be defined first (see the controller
plugins section).

VNet MAC address:: A unique, anycast MAC address for all VNets in this zone.
Will be auto-generated if not defined.

Exit Nodes:: This is used if you want to define some {pve} nodes as exit
gateways from the EVPN network, through the real network. The configured nodes
will announce a default route in the EVPN network.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.


[[pvesdn_config_vnet]]
VNets
-----

A `VNet`, in its basic form, is just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

ID:: An 8 character ID to name and identify a VNet

Alias:: Optional longer name, if the ID isn't enough

Zone:: The associated zone for this VNet

Tag:: The unique VLAN or VXLAN ID

VLAN Aware:: Enables adding an extra VLAN tag in the virtual machine or
container's vNIC configuration, or allows the guest OS to manage the VLAN tag.

[[pvesdn_config_subnet]]
Subnets
~~~~~~~

A subnetwork (subnet) allows you to define a specific IP network
(IPv4 or IPv6). For each VNet, you can define one or more subnets.

A subnet can be used to:

* Restrict the IP addresses you can define on a specific VNet
* Assign routes/gateways on a VNet in layer 3 zones
* Enable SNAT on a VNet in layer 3 zones
* Auto assign IPs to virtual guests (VM or CT) through an IPAM plugin
* Support DNS registration through DNS plugins

If an IPAM server is associated with the subnet's zone, the subnet prefix will
be automatically registered in the IPAM.

Subnet properties are:

ID:: A CIDR network address, for example 10.0.0.0/8

Gateway:: The IP address of the network's default gateway. On layer 3 zones
(Simple/EVPN plugins), it will be deployed on the VNet.

SNAT:: Optional. Enable SNAT for layer 3 zones (Simple/EVPN plugins) for this
subnet. The subnet's source IP will be NATed to the server's outgoing
interface/IP. On EVPN zones, this is only done on EVPN gateway-nodes (see the
illustration after this list).

Dnszoneprefix:: Optional. Add a prefix to the domain registration, like
<hostname>.prefix.<domain>.

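Conceptually, the SNAT option behaves like a masquerading rule on the node's
outgoing interface. The following is an illustration only ({pve} manages the
actual rule for you, this is not something you configure by hand), shown here
for a hypothetical subnet 10.0.1.0/24 leaving via vmbr0:

----
iptables -t nat -A POSTROUTING -s '10.0.1.0/24' -o vmbr0 -j MASQUERADE
----
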
[[pvesdn_config_controllers]]
Controllers
-----------

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane. The
currently supported software controller is the "frr" router. You need to
install it on each node where you want to deploy EVPN zones:

----
apt install frr frr-pythontools
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could
accidentally break, or be broken by, global routing.

peers:: An IP list of all nodes that you want to communicate with for the EVPN
(these could also be external nodes or route reflector servers)

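When the configuration is applied, the SDN module generates a matching FRR
configuration on the nodes. The following is only a rough, illustrative sketch
of the kind of BGP EVPN setup involved (ASN and peer addresses taken from the
EVPN example below, not the literal output {pve} generates):

----
router bgp 65000
 bgp router-id 192.168.0.1
 no bgp default ipv4-unicast
 neighbor VTEP peer-group
 neighbor VTEP remote-as 65000
 neighbor 192.168.0.2 peer-group VTEP
 neighbor 192.168.0.3 peer-group VTEP
 !
 address-family l2vpn evpn
  neighbor VTEP activate
  advertise-all-vni
 exit-address-family
----
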
[[pvesdn_controller_plugin_BGP]]
BGP Controller
~~~~~~~~~~~~~~

The BGP controller is not used directly by a zone. You can use it to configure
FRR to manage BGP peers.

For BGP-EVPN, it can be used to define a different ASN per node, in order to
do EBGP.

Configuration options:

node:: The node of this BGP controller

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number from the range (64512 - 65534) or (4200000000 - 4294967294), as
otherwise you could accidentally break, or be broken by, global routing.

peers:: An IP list of peers that you want to communicate with for the
underlying BGP network.

ebgp:: If your peer's remote AS is different, this enables EBGP.

loopback:: Use a loopback or dummy interface as the source of the EVPN network
(for multipath).

ebgp-multihop:: Increase the number of hops to reach peers, in case they are
not directly connected or they use loopback.

[[pvesdn_config_ipam]]
IPAMs
-----

IPAM (IP Address Management) tools are used to manage/assign the IP addresses
of guests on the network. It can be used, for example, to find free IP
addresses when you create a VM or CT (not yet implemented).

An IPAM can be associated with one or more zones, to provide IP addresses for
all subnets defined in those zones.

[[pvesdn_ipam_plugin_pveipam]]
{pve} IPAM plugin
~~~~~~~~~~~~~~~~~

This is the default internal IPAM for your {pve} cluster, if you don't have
external IPAM software.

[[pvesdn_ipam_plugin_phpipam]]
phpIPAM plugin
~~~~~~~~~~~~~~
https://phpipam.net/

You need to create an application in phpIPAM and add an API token with admin
permission.

The phpIPAM configuration properties are:

url:: The REST-API endpoint: `http://phpipam.domain.com/api/<appname>/`

token:: An API access token

section:: An integer ID. Sections are a group of subnets in phpIPAM. Default
installations use `sectionid=1` for customers.

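As a sketch, a phpIPAM entry in `/etc/pve/sdn/ipams.cfg` could look like the
following (field names assumed from the section-config style used by {pve},
values are placeholders):

----
phpipam: phpipam1
        url http://phpipam.domain.com/api/<appname>/
        token <your-api-token>
        section 1
----
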
[[pvesdn_ipam_plugin_netbox]]
NetBox IPAM plugin
~~~~~~~~~~~~~~~~~~

NetBox is an IP address management (IPAM) and datacenter infrastructure
management (DCIM) tool. See the source code repository for details:
https://github.com/netbox-community/netbox

You need to create an API token in NetBox
(https://netbox.readthedocs.io/en/stable/api/authentication).

The NetBox configuration properties are:

url:: The REST API endpoint: `http://yournetbox.domain.com/api`

token:: An API access token

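To check that the token works, you can query the NetBox REST API directly, for
example (hypothetical host and token, using NetBox's standard token
authentication header):

----
curl -H "Authorization: Token <your-api-token>" \
        http://yournetbox.domain.com/api/ipam/prefixes/
----
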
[[pvesdn_config_dns]]
DNS
---

The DNS plugin in {pve} SDN is used to define a DNS API server for the
registration of your hostnames and IP addresses. A DNS configuration is
associated with one or more zones, to provide DNS registration for all the
subnet IPs configured for a zone.

[[pvesdn_dns_plugin_powerdns]]
PowerDNS plugin
~~~~~~~~~~~~~~~
https://doc.powerdns.com/authoritative/http-api/index.html

You need to enable the web server and the API in your PowerDNS config:

----
api=yes
api-key=arandomgeneratedstring
webserver=yes
webserver-port=8081
----

The PowerDNS configuration properties are:

url:: The REST API endpoint: `http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost`

key:: An API access key

ttl:: The default TTL for records

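With the web server and API enabled, you can verify access with a plain HTTP
request, for example (using the key and endpoint from the configuration
above):

----
curl -H 'X-API-Key: arandomgeneratedstring' \
        http://yourpowerdnserver.domain.com:8081/api/v1/servers/localhost
----
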
Examples
--------

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.

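For example, on vm1 (using the addresses configured above):

----
ping 10.0.3.101
----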

[[pvesdn_setup_example_qinq]]
QinQ Setup Example
~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN ID 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN ID 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping the
VMs 'vm3' or 'vm4', as they are in a different zone with a different
service-vlan.

[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
~~~~~~~~~~~~~~~~~~~

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone', using a lower MTU, to ensure the
extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs
from the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU).

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.

[[pvesdn_setup_example_evpn]]
EVPN Setup Example
~~~~~~~~~~~~~~~~~~

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the above node
addresses as peers.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
----

Create an EVPN zone named `myevpnzone', using the previously created
EVPN-controller. Define 'node1' and 'node2' as exit nodes.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
vnet mac address: 32:F4:05:FE:6C:0A
exitnodes: node1,node2
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone'.

----
id: myvnet1
zone: myevpnzone
tag: 11000
----

Create a subnet 10.0.1.0/24 with 10.0.1.1 as gateway on `myvnet1'.

----
subnet: 10.0.1.0/24
gateway: 10.0.1.1
----

Create a second VNet named `myvnet2' using the same EVPN zone `myevpnzone'.

----
id: myvnet2
zone: myevpnzone
tag: 12000
----

Create a different subnet 10.0.2.0/24 with 10.0.2.1 as gateway on `myvnet2'.

----
subnet: 10.0.2.0/24
gateway: 10.0.2.1
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR config.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1   #this is the gateway IP of myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1   #this is the gateway IP of myvnet2
        mtu 1450
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
exit nodes ('node1' or 'node2'), and from there it will leave those nodes over
the default gateway configured on node1 or node2.

NOTE: Of course, you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that
the public network can reply back.

If you have configured an external BGP router, the BGP-EVPN routes
(10.0.1.0/24 and 10.0.2.0/24 in this example) will be announced dynamically.
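
To inspect the EVPN routes FRR has learned on a node, you can, for example,
query FRR through its `vtysh` shell:

----
vtysh -c "show bgp l2vpn evpn"
----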