[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The **S**oftware **D**efined **N**etwork (SDN) feature allows one to create
virtual networks (VNets) at the datacenter level.

WARNING: SDN is currently an **experimental feature** in {pve}. This
documentation for it is also still under development. Ask on our
xref:getting_help[mailing lists or in the forum] for questions and feedback.


[[pvesdn_installation]]
Installation
------------

To enable the experimental SDN integration, you need to install the
`libpve-network-perl` package:

----
apt install libpve-network-perl
----

You also need to have the `ifupdown2` package installed on each node to manage
local configuration reloading without a reboot:

----
apt install ifupdown2
----
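
If you want to verify that ifupdown2 can reload the local network
configuration (a quick sanity check, not a required setup step), you can run:

----
# reload the local /etc/network/interfaces* configuration without a reboot
ifreload -a
----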

Basic Overview
--------------

The {pve} SDN allows separation and fine-grained control of virtual guest
networks, using flexible, software-controlled configurations.

Separation is managed through zones; a zone is its own virtually separated
network area. A 'VNet' is a virtual network that belongs to a zone. Depending
on which type or plugin the zone uses, it can behave differently and offer
different features, advantages or disadvantages.
Normally a 'VNet' shows up as a common Linux bridge with either a VLAN or
'VXLAN' tag, but some can also use layer 3 routing for control.
The 'VNets' are deployed locally on each node, after the configuration has
been committed from the cluster-wide datacenter SDN administration interface.


Main configuration
------------------

The configuration is done at the datacenter (cluster-wide) level and is saved
in configuration files located in the shared configuration file system:
`/etc/pve/sdn`
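
The individual SDN objects are stored as plain-text files below that
directory. As a rough illustration (the exact file names depend on the
installed version and on which object types you have created), a setup with a
zone, a VNet and a controller could look like this:

----
# ls /etc/pve/sdn
controllers.cfg  vnets.cfg  zones.cfg
----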

On the web interface, the SDN feature has 4 main sections for the configuration:

* SDN: an overview of the SDN state

* Zones: Create and manage the virtually separated network zones

* VNets: The per-node building block to provide a zone's network to VMs

* Controller: For complex setups, to control layer 3 routing


[[pvesdn_config_main_sdn]]
SDN
~~~

This is the main status panel. Here you can see the deployment status of zones
on different nodes.

There is an 'Apply' button, to push and reload the local configuration on all
cluster nodes.

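The same apply operation is also exposed through the API. Assuming your {pve}
version already provides the SDN endpoints, something like the following
should trigger the cluster-wide apply and reload from the command line (shown
as a sketch, not an official workflow):

----
# apply the pending SDN configuration and reload it on all nodes
pvesh set /cluster/sdn
----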

[[pvesdn_config_zone]]
Zones
~~~~~

A zone defines a virtually separated network.

It can use different technologies for separation:

* VLAN: Virtual LANs are the classic method to sub-divide a LAN

* QinQ: stacked VLAN (formally known as `IEEE 802.1ad`)

* VXLAN: (layer 2 VXLAN)

* bgp-evpn: VXLAN using layer 3 Border Gateway Protocol (BGP) routing

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict a user to only
use a specific zone and only the VNets in that zone.

[[pvesdn_config_vnet]]
VNets
~~~~~

A `VNet` is in its basic form just a Linux bridge that will be deployed locally
on the node and used for virtual machine communication.

VNet properties are:

* ID: an 8 character ID to name and identify a VNet

* Alias: Optional longer name, if the ID isn't enough

* Zone: The associated zone for this VNet

* Tag: The unique VLAN or VXLAN id

* IPv4: an anycast IPv4 address; it will be configured on the underlying bridge
on each node that is part of the zone. Only useful for `bgp-evpn` routing.

* IPv6: an anycast IPv6 address; it will be configured on the underlying bridge
on each node that is part of the zone. Only useful for `bgp-evpn` routing.


[[pvesdn_config_controllers]]
Controllers
~~~~~~~~~~~

Some zone types need an external controller to manage the VNet control-plane.
Currently this is only required for the `bgp-evpn` zone plugin.


[[pvesdn_zone_plugins]]
Zones Plugins
-------------

Common options
~~~~~~~~~~~~~~

nodes:: Deploy and allow to use the VNets configured for this zone only on
these nodes.

[[pvesdn_zone_plugin_vlan]]
VLAN Zones
~~~~~~~~~~

This is the simplest plugin. It will reuse an existing local Linux or OVS
bridge and manage the VLANs on it.
The benefit of using the SDN module is that you can create different zones
with specific VNet VLAN tags, and restrict virtual machines to separated
zones.

Specific `VLAN` configuration options:

bridge:: Reuse this local bridge or OVS switch, already
configured on *each* local node.

[[pvesdn_zone_plugin_qinq]]
QinQ Zones
~~~~~~~~~~

QinQ is stacked VLAN. The first VLAN tag is defined for the zone
(the so-called 'service-vlan'), and the second VLAN tag is defined for the
VNets.

NOTE: Your physical network switches must support stacked VLANs!

Specific QinQ configuration options:

bridge:: A local VLAN-aware bridge already configured on each local node

service vlan:: The main VLAN tag of this zone

mtu:: Due to the double stacking of tags, you need 4 more bytes for QinQ
VLANs. For example, reduce the MTU to `1496` if your physical interface's MTU
is `1500`.

[[pvesdn_zone_plugin_vxlan]]
VXLAN Zones
~~~~~~~~~~~

The VXLAN plugin will establish a tunnel (named overlay) on top of an existing
network (named underlay). It encapsulates layer 2 Ethernet frames within layer
4 UDP datagrams, using `4789` as the default destination port. You can, for
example, create a private IPv4 VXLAN network on top of public internet network
nodes.
This is a layer 2 tunnel only; no routing between different VNets is possible.

Each VNet will use a specific VXLAN id from the range (1 - 16777215).

Specific VXLAN configuration options:

peers address list:: A list of IPs from all nodes through which you want to
communicate. Can also be external nodes.

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.

[[pvesdn_zone_plugin_evpn]]
EVPN Zones
~~~~~~~~~~

This is the most complex of all supported plugins.

BGP-EVPN allows one to create a routable layer 3 network. The VNet of EVPN can
have an anycast IP address and/or MAC address. The bridge IP is the same on
each node, so a virtual guest can use that address as its gateway.

Routing can work across VNets from different zones through a VRF (Virtual
Routing and Forwarding) interface.

Specific EVPN configuration options:

VRF VXLAN Tag:: This is a VXLAN id used for the routing interconnect between
VNets; it must be different from the VXLAN ids of the VNets.

controller:: An EVPN controller needs to be defined first (see the controller
plugins section).

mtu:: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes
lower than that of the outgoing physical interface.


[[pvesdn_controller_plugins]]
Controllers Plugins
-------------------

For complex zones requiring a control plane.

[[pvesdn_controller_plugin_evpn]]
EVPN Controller
~~~~~~~~~~~~~~~

For `BGP-EVPN`, we need a controller to manage the control plane.
The currently supported software controller is the "frr" router.
You may need to install it on each node where you want to deploy EVPN zones:

----
apt install frr
----

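After installing the package, you can quickly verify that the frr service is
present and running on the node, for example:

----
systemctl status frr
----
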
Configuration options:

asn:: A unique BGP ASN number. It's highly recommended to use a private ASN
number (64512 – 65534, 4200000000 – 4294967294), as otherwise you could end up
breaking, or being broken by, global routing by mistake.

peers:: A list of the IPs of all nodes through which you want to communicate
(can also include external nodes or route reflector servers).

Additionally, if you want to route traffic from an SDN BGP-EVPN network to the
external world:

gateway-nodes:: The {pve} nodes from which the BGP-EVPN traffic will exit to
the external network, through the node's default gateway.

gateway-external-peers:: If you don't want the gateway nodes to use their
default gateway, but, for example, send traffic to external BGP routers, which
then handle the (reverse) routing dynamically, you can set those routers here.
For example `192.168.0.253,192.168.0.254'.


[[pvesdn_local_deployment_monitoring]]
Local Deployment Monitoring
---------------------------

After applying the configuration through the main SDN web-interface panel,
the local network configuration is generated locally on each node in
`/etc/network/interfaces.d/sdn` and reloaded with ifupdown2.
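
To inspect what was actually generated on a particular node, you can look at
the generated file directly, for example:

----
cat /etc/network/interfaces.d/sdn
----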

You can monitor the status of the local zones and VNets through the main tree.


[[pvesdn_setup_example_vlan]]
VLAN Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone named `myvlanzone':

----
id: myvlanzone
bridge: vmbr0
----

Create a VNet named `myvnet1' with `vlan-id` `10' and the previously created
`myvlanzone' as its zone.

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration through the main SDN panel, to create VNets locally on
each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between both VMs over that network.
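
For example, from 'vm1' (a quick manual check; the address is the one assigned
to 'vm2' above):

----
ping 10.0.3.101
----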


[[pvesdn_setup_example_qinq]]
QinQ Setup Example
------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

Node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

Node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone named `qinqzone1' with service VLAN 20:

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create another QinQ zone named `qinqzone2' with service VLAN 30:

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a VNet named `myvnet1' with customer VLAN-id 100 on the previously
created `qinqzone1' zone.

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a second VNet named `myvnet2' with customer VLAN-id 100 on the
previously created `qinqzone2' zone.

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a third Virtual Machine (vm3) on node1, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create another Virtual Machine (vm4) on node2, with a vNIC on the same VNet
`myvnet2' as vm3.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between the VMs 'vm1' and 'vm2', as well as
between 'vm3' and 'vm4'. However, neither of the VMs 'vm1' or 'vm2' can ping
the VMs 'vm3' or 'vm4', as they are in a different zone with a different
service VLAN.
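
For example, from 'vm1' (addresses as assigned above; a quick manual check):

----
ping 10.0.3.101   # vm2, same zone - should work
ping 10.0.3.102   # vm3, different zone - should fail
----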


[[pvesdn_setup_example_vxlan]]
VXLAN Setup Example
-------------------

TIP: While we show plain configuration content here, almost everything should
be configurable using the web-interface only.

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone named `myvxlanzone'; use a lower MTU, to ensure the extra
50 bytes of the VXLAN header can fit. Add all previously configured IPs from
the nodes to the peer address list.

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a VNet named `myvnet1' using the VXLAN zone `myvxlanzone' created
previously.

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node.

Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM; note the lower MTU here.

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a second Virtual Machine (vm2) on node3, with a vNIC on the same VNet
`myvnet1' as vm1.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between 'vm1' and 'vm2'.
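
To also verify that the lower MTU is honored end-to-end, you can send a
maximum-sized, non-fragmenting ping from 'vm1' (1422 bytes of ICMP payload
plus 28 bytes of IPv4/ICMP headers equals the 1450 byte MTU; a quick manual
check, not a required step):

----
ping -M do -s 1422 10.0.3.101
----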


[[pvesdn_setup_example_evpn]]
EVPN Setup Example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an EVPN controller, using a private ASN number and the node addresses
from above as peers. Define 'node1' and 'node2' as gateway nodes.

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an EVPN zone named `myevpnzone' using the previously created
EVPN controller.

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
----

Create the first VNet named `myvnet1' using the EVPN zone `myevpnzone', an
IPv4 CIDR network and a random MAC address.

----
id: myvnet1
zone: myevpnzone
tag: 11000
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create the second VNet named `myvnet2' using the same EVPN zone `myevpnzone', a
different IPv4 CIDR network and a different random MAC address than `myvnet1'.

----
id: myvnet2
zone: myevpnzone
tag: 12000
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random MAC, needs to be different on each VNet
----

Apply the configuration on the main SDN web-interface panel to create VNets
locally on each node and generate the FRR config.


Create a Debian-based Virtual Machine (vm1) on node1, with a vNIC on `myvnet1'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the anycast IP of myvnet1
        mtu 1450
----

Create a second Virtual Machine (vm2) on node2, with a vNIC on the other VNet
`myvnet2'.

Use the following network configuration for this VM:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the anycast IP of myvnet2
        mtu 1450
----


Then, you should be able to ping vm2 from vm1, and vm1 from vm2.
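
Assuming the frr controller was set up as above, you can also inspect the
EVPN routes it learned directly on one of the nodes (a quick check, not a
required step):

----
vtysh -c "show bgp l2vpn evpn"
----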

If you ping an external IP from 'vm2' on the non-gateway 'node3', the packet
will go to the configured 'myvnet2' gateway, then be routed to the gateway
nodes ('node1' or 'node2'), and from there it will leave those nodes over the
default gateway configured on node1 or node2.

NOTE: Of course you need to add reverse routes for the '10.0.1.0/24' and
'10.0.2.0/24' networks to node1 and node2 on your external gateway, so that
the public network can reply back.
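
As a sketch, assuming the external gateway is a Linux router on the example
network, such reverse routes could look like this:

----
# on the external gateway: reach the EVPN subnets via a gateway node (node1)
ip route add 10.0.1.0/24 via 192.168.0.1
ip route add 10.0.2.0/24 via 192.168.0.1
----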

If you have configured an external BGP router, the BGP-EVPN routes (10.0.1.0/24
and 10.0.2.0/24 in this example) will be announced dynamically.