[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. The
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.


[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need the `ifupdown2` package installed on each node, to manage the
local configuration reloading without a reboot:

----
apt install ifupdown2
----

Basic Overview
--------------

The {pve} SDN allows separation and fine-grained control of virtual guest
networks, using flexible software-controlled configurations.

Separation is organized in zones, where a zone is its own virtually separated
network area. A 'VNet' is a virtual network that is connected to a zone.
Depending on which type or plugin the zone uses, it can behave differently and
offer different features, advantages or disadvantages.
Normally a 'VNet' shows up as a common Linux bridge with either a VLAN or
'VXLAN' tag, but some plugins can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration was
committed from the cluster-wide datacenter SDN administration interface.


Main configuration
------------------

The configuration is done at the datacenter (cluster-wide) level. It will be
saved in configuration files located in the shared configuration file system
`/etc/pve/sdn`.

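For illustration, a minimal sketch of what that directory may contain once
zones, VNets and a controller have been configured (the file names are an
assumption and may differ between versions):

----
ls /etc/pve/sdn
controllers.cfg  vnets.cfg  zones.cfg
----
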
On the web interface, the SDN feature has 4 main sections for the configuration:

* SDN: an overview of the SDN state

* Zones: create and manage the virtually separated network zones

* VNets: the per-node building block that provides a zone's network to VMs

* Controller: for complex setups, to control layer 3 routing


[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on the different nodes.

There is an 'Apply' button, to push and reload the local configuration on all
cluster nodes.


[[pvesdn_config_zone]]
Zones
~~~~~

A zone defines a virtually separated network.

It can use different technologies for the separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: layer 2 VXLAN

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users to using
only a specific zone and only the VNets in that zone.

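To illustrate where these definitions end up, here is a minimal sketch of a
VLAN zone restricted to two nodes, assuming it is stored in
`/etc/pve/sdn/zones.cfg` (the file name and exact key names may differ between
versions):

----
vlan: myvlanzone
        bridge vmbr0
        nodes node1,node2
----
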
[[pvesdn_config_vnet]]
VNets
~~~~~

A `VNet` is, in its basic form, just a Linux bridge that will be deployed
locally on the node and used for virtual machine communication.

VNet properties are:

* ID: an 8-character ID to name and identify a VNet

* Alias: an optional longer name, if the ID isn't enough

* Zone: the associated zone for this VNet

* Tag: the unique VLAN or VXLAN ID

* VLAN Aware: allows adding an extra VLAN tag in the virtual machine or
container vNIC configuration, or allows the guest OS to manage the VLAN tag
itself.

* IPv4: an anycast IPv4 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.

* IPv6: an anycast IPv6 address. It will be configured on the underlying bridge
on each node that is part of the zone. It's only useful for `bgp-evpn` routing.

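As a sketch only, a VNet definition with the properties above could look like
the following, assuming it is stored in `/etc/pve/sdn/vnets.cfg` (file name and
key names may differ between versions):

----
vnet: myvnet1
        zone myvlanzone
        tag 10
        alias testvnet
----
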
[[pvesdn_config_controllers]]
Controllers
~~~~~~~~~~~

Some zone types need an external controller to manage the VNet control plane.
Currently this is only required for the `bgp-evpn` zone plugin.


[[pvesdn_zone_plugins]]
Zones Plugins
-------------

Common options
~~~~~~~~~~~~~~

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This is the simplest plugin. It will reuse an existing local Linux or OVS
bridge, and manage the VLANs on it.
The benefit of using the SDN module is that you can create different zones with
specific VNet VLAN tags, and restrict virtual machines to separated zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already
configured on *each* local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone
(the so-called 'service-vlan'), and the second VLAN tag is defined for the
VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local, VLAN-aware bridge that is already configured on each local
node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ VLANs.
For example, you must reduce the MTU to `1496` if your physical interface MTU
is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin establishes a tunnel (named the overlay) on top of an existing
network (named the underlay). It encapsulates layer 2 Ethernet frames within
layer 4 UDP datagrams, using `4789` as the default destination port. You can,
for example, create a private IPv4 VXLAN network on top of public internet
network nodes.
This is a layer 2 tunnel only, so no routing between different VNets is
possible.

Each VNet will use a specific VXLAN ID from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of the IP addresses of all nodes through which you
want to communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all the supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNets of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, so a virtual guest can use that address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN ID used for the routing interconnect between
VNets; it must be different from the VXLAN IDs of the VNets.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.


[[pvesdn_controller_plugins]]
Controllers Plugins
-------------------

For complex zones requiring a control plane.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones.

----
apt install frr
----

Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 - 65534, 4200000000 - 4294967294), as otherwise you could end up
breaking global routing, or being broken by it, by mistake.

peers:: An IP list of all nodes through which you want to communicate (can also
include external nodes or route reflector servers)

Additionally, if you want to route traffic from an SDN BGP-EVPN network to the
external world:

gateway-nodes:: The {pve} nodes from which the BGP-EVPN traffic will exit to
the external network, through the nodes' default gateway

gateway-external-peers:: Use this if you don't want the gateway nodes to use
their default gateway, but, for example, to send traffic to external BGP
routers, which then handle the (reverse) routing dynamically. For example:
`192.168.0.253,192.168.0.254'

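As a sketch only, an EVPN controller definition with the options above could
look like the following, assuming it is stored in
`/etc/pve/sdn/controllers.cfg` (file name and key names may differ between
versions):

----
evpn: myevpnctl
        asn 65000
        peers 192.168.0.1,192.168.0.2,192.168.0.3
        gateway-nodes node1,node2
----
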

[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
---------------------------

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn`, and reloaded with ifupdown2.

You need to add
----
source /etc/network/interfaces.d/*
----
at the end of `/etc/network/interfaces` to have the SDN configuration included.

You can monitor the status of the local zones and VNets through the main tree.

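To see what was generated on a node, or to trigger a reload of the local
network configuration by hand, you can, for example, inspect the generated
file and use the ifupdown2 reload command (the generated content depends on
your zones and VNets):

----
cat /etc/network/interfaces.d/sdn
ifreload -a
----
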

[[pvesdn_setup_example_vlan]]
VLAN Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.

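For example, you can verify this from within 'vm1' with a quick ping to the
address assigned to 'vm2' above:

----
ping -c 3 10.0.3.101
----
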

[[pvesdn_setup_example_qinq]]
QinQ Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', and also
between 'vm3' and 'vm4'. However, none of the VMs 'vm1' or 'vm2' can ping 'vm3'
or 'vm4', as they are in a different zone with a different service-vlan.


[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
-------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone'. Use the lower MTU to ensure that the
extra 50 bytes of the VXLAN header can fit. Add all previously configured IPs
of the nodes as the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM (note the lower MTU here):

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.

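To also verify that the lower MTU works end to end, you can, for example, send
non-fragmentable ICMP packets from 'vm1' that just fit into the 1450 byte MTU.
With a 20 byte IPv4 header and an 8 byte ICMP header, that leaves a 1422 byte
payload:

----
ping -c 3 -M do -s 1422 10.0.3.101
----
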

[[pvesdn_setup_example_evpn]]
EVPN Setup Example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the node addresses
from above as peers. Define 'node1' and 'node2' as gateway nodes.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an EVPN zone named `myevpnzone' using the previously created
EVPN-controller.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an IPv4
CIDR network and a random MAC address.

----
id: myvnet1
zone: myevpnzone
tag: 11000
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network and a different random MAC address than `myvnet1'.

----
id: myvnet2
zone: myevpnzone
tag: 12000
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random MAC, needs to be different on each VNet
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR configuration.


Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the IP of the VNet myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the IP of the VNet myvnet2
        mtu 1450
----


Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from 'vm2' on the non-gateway node 'node3', the
packet will go to the configured 'myvnet2' gateway, then will be routed to the
gateway nodes ('node1' or 'node2'), and from there it will leave those nodes
over the default gateway configured on node1 or node2.

NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that the
public network can reply back.

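As an illustration only, assuming the external gateway is itself a Linux
machine and '192.168.0.1' and '192.168.0.2' are the addresses of the gateway
nodes, such reverse routes could look like this:

----
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.2
----
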
If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.