[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The SDN feature allows you to create virtual networks (vnets)
at the datacenter level.

To enable the SDN feature, you need to install the "libpve-network-perl"
package:

----
apt install libpve-network-perl
----

A vnet is a bridge with a vlan or vxlan tag.

The vnets are deployed locally on each node after the configuration
is committed at the datacenter level.

You need the "ifupdown2" package installed on each node to manage
reloading of the local configuration:

----
apt install ifupdown2
----
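ifupdown2 can reload the network configuration without a reboot; pressing "Apply" in the SDN panel triggers such a reload automatically. As an illustration, a manual reload after editing the configuration by hand would be:

----
# reload all interfaces whose configuration has changed (ifupdown2)
ifreload -a
----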

Main configuration
------------------

The configuration is done at the datacenter level.

The SDN feature has 4 main sections for its configuration:

* SDN

* Zones

* Vnets

* Controllers


SDN
~~~

[thumbnail="screenshot/gui-sdn-status.png"]

This is the main panel, where you can see the deployment status of the zones
on the different nodes.

There is an "Apply" button to push and reload the local configuration on all
nodes.


Zones
~~~~~

[thumbnail="screenshot/gui-sdn-zone.png"]

A zone defines the kind of virtual network you want to create.

It can be:

* vlan

* QinQ (stacked vlan)

* vxlan (layer 2 vxlan)

* bgp-evpn (vxlan with layer 3 routing)

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users
to only use a specific zone and the vnets in that zone.

Vnets
~~~~~

[thumbnail="screenshot/gui-sdn-vnet-evpn.png"]

A vnet is a bridge that will be deployed locally on each node,
for VM communication (like a classic vmbrX bridge).

Vnet properties are:

* ID: an 8 character ID

* Alias: an optional longer name

* Zone: the associated zone of the vnet

* Tag: the unique vlan or vxlan id

* ipv4: an anycast ipv4 address (the same bridge ip deployed on each node), for bgp-evpn routing only

* ipv6: an anycast ipv6 address (the same bridge ip deployed on each node), for bgp-evpn routing only

Controllers
~~~~~~~~~~~

[thumbnail="screenshot/gui-sdn-controller.png"]

Some zone plugins (currently bgp-evpn only)
need an external controller to manage the vnet control plane.


Zones Plugins
-------------

Common zone options:

* nodes: deploy the zone's vnets only on these nodes

Vlan
~~~~

[thumbnail="screenshot/gui-sdn-zone-vlan.png"]

This is the simplest plugin; it reuses an existing local bridge or OVS switch,
and manages the vlans on it.
The benefit of using the SDN module is that you can create different zones with
specific vnet vlan tags, and restrict your customers to their own zones.

Specific vlan configuration options:

* bridge: a local vlan-aware bridge or OVS switch, already configured on each node

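As a rough illustration (the exact output depends on the SDN version), a vlan vnet with tag 10 on bridge vmbr0 might be rendered in /etc/network/interfaces.d/sdn as something like:

----
auto myvnet1
iface myvnet1
        bridge_ports vmbr0.10
        bridge_stp off
        bridge_fd 0
----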
QinQ
~~~~

[thumbnail="screenshot/gui-sdn-zone-qinq.png"]

QinQ is stacked vlan. The first vlan tag is defined on the zone
(service-vlan), and the second vlan tag is defined on the vnets.

Your physical network switches need to support stacked vlans!

Specific qinq configuration options:

* bridge: a local vlan-aware bridge, already configured on each node
* service vlan: the main vlan tag of this zone
* mtu: you need 4 extra bytes for the double vlan tag.
  For example, you can reduce the mtu to 1496 if your physical interface mtu is 1500.

Vxlan
~~~~~

[thumbnail="screenshot/gui-sdn-zone-vxlan.png"]

The vxlan plugin establishes vxlan tunnels (overlay) on top of an existing
network (underlay).
For example, you can create a private ipv4 vxlan network on top of nodes
connected through the public internet.
This is a layer 2 tunnel only; no routing between different vnets is possible.

Each vnet will have a specific vxlan id (1 - 16777215).

Specific vxlan configuration options:

* peers address list: an ip list of all nodes you want to communicate with (can also include external nodes)

* mtu: because vxlan encapsulation uses 50 bytes, the mtu needs to be 50 bytes
  lower than that of the outgoing physical interface.

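Once a vxlan zone is deployed, the tunnel interfaces can be inspected with standard iproute2 tools, for example:

----
# list vxlan interfaces with their vxlan id and local/remote details
ip -d link show type vxlan

# show forwarding entries pointing at remote vteps
bridge fdb show
----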
evpn
~~~~

[thumbnail="screenshot/gui-sdn-zone-evpn.png"]

This is the most complex plugin.

BGP-evpn allows you to create a routable layer 3 network.
An evpn vnet can have an anycast ip address and mac address.
The bridge ip is the same on each node, so VMs can use it
as their gateway.
Routing works only across the vnets of a specific zone, through a vrf.

Specific evpn configuration options:

* vrf vxlan tag: a vxlan-id used for the routing interconnect between vnets;
  it must be different from the vxlan-ids of the vnets

* controller: an evpn controller needs to be defined first (see the controller
  plugins section)

* mtu: because vxlan encapsulation uses 50 bytes, the mtu needs to be 50 bytes
  lower than that of the outgoing physical interface.


Controllers Plugins
-------------------

evpn
~~~~

[thumbnail="screenshot/gui-sdn-controller-evpn.png"]

For bgp-evpn, we need a controller to manage the control plane.
The software controller is the "frr" router.
You need to install it on each node where you want to deploy the evpn zone:

----
apt install frr
----

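After applying the configuration, the bgp-evpn sessions managed by frr can be checked with vtysh, for example:

----
# summary of the bgp evpn peerings
vtysh -c "show bgp l2vpn evpn summary"

# evpn routes exchanged between the nodes
vtysh -c "show bgp l2vpn evpn route"
----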
Configuration options:

* asn: a unique bgp asn number.
  It's recommended to use a private asn number (64512 - 65534, 4200000000 - 4294967294)

* peers: an ip list of all nodes you want to communicate with (can also include external nodes or route reflector servers)

If you want to route traffic from the sdn bgp-evpn network to the external world:

* gateway-nodes: the proxmox nodes from which the bgp-evpn traffic will exit to
  the external network, through each node's default gateway

If you don't want the gateway nodes to use their default gateway, but, for
example, to send traffic to external bgp routers:

* gateway-external-peers: 192.168.0.253,192.168.0.254

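To give an idea of the result, a generated frr bgp-evpn configuration for asn 65000 with a single peer could look roughly like this (a simplified sketch, not the exact file produced by the controller):

----
router bgp 65000
 bgp router-id 192.168.0.1
 neighbor 192.168.0.2 remote-as 65000
 !
 address-family l2vpn evpn
  neighbor 192.168.0.2 activate
  advertise-all-vni
 exit-address-family
----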

Local deployment Monitoring
---------------------------

[thumbnail="screenshot/gui-sdn-local-status.png"]

After applying the configuration in the main SDN section,
the local configuration is generated on each node
in /etc/network/interfaces.d/sdn, and reloaded.

You can monitor the status of the local zones and vnets through the main tree.

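To check what was generated on a particular node, you can inspect the file directly and ask ifupdown2 to compare the running state against the configuration:

----
# show the locally generated sdn configuration
cat /etc/network/interfaces.d/sdn

# verify that the running interfaces match the configuration (ifupdown2)
ifquery -a -c
----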


Vlan setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a vlan zone:

----
id: myvlanzone
bridge: vmbr0
----

Create a vnet named myvnet1 with vlan-id 10:

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration in the main SDN section, to create the vnets locally
on each node.

Create a vm1, with 1 nic on myvnet1, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a vm2, with 1 nic on myvnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then, you should be able to ping between vm1 and vm2.

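On the hosts themselves, you can verify that the vnet bridge exists and that the vlan tagging is in place, for example:

----
# the vnet should appear as a bridge on each node
ip link show myvnet1

# show the vlans configured on the vlan-aware bridge ports
bridge vlan show
----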

QinQ setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a qinq zone named qinqzone1 with service vlan 20:

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create a qinq zone named qinqzone2 with service vlan 30:

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a vnet named myvnet1 with customer vlan-id 100 on qinqzone1:

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a vnet named myvnet2 with customer vlan-id 100 on qinqzone2:

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration in the main SDN section, to create the vnets locally
on each node.

Create a vm1, with 1 nic on myvnet1, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a vm2, with 1 nic on myvnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a vm3, with 1 nic on myvnet2, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create a vm4, with 1 nic on myvnet2, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then, you should be able to ping between vm1 and vm2, and between vm3 and vm4.

But neither vm1 nor vm2 can ping vm3 or vm4, as they are in a different zone
with a different service vlan.


Vxlan setup example
-------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a vxlan zone:

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create a first vnet:

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration in the main SDN section, to create the vnets locally
on each node.

Create a vm1, with 1 nic on myvnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a vm2, with 1 nic on myvnet1, on node3:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then, you should be able to ping between vm1 and vm2.

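To verify that full-sized packets fit through the tunnel, you can send a non-fragmentable ping of exactly the vnet mtu from vm1 (1422 bytes of icmp payload + 28 bytes of ip/icmp headers = 1450):

----
# from vm1: full-mtu ping across the vxlan tunnel
ping -M do -s 1422 10.0.3.101
----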


EVPN setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an evpn controller:

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an evpn zone:

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
----

Create a first vnet:

----
id: myvnet1
zone: myevpnzone
tag: 11000
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated mac address
----

Create a second vnet:

----
id: myvnet2
zone: myevpnzone
tag: 12000
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random mac, must be different on each vnet
----

Apply the configuration in the main SDN section, to create the vnets locally
on each node and generate the frr configuration.

Create a vm1, with 1 nic on myvnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1 #this is the anycast ip of myvnet1
        mtu 1450
----

Create a vm2, with 1 nic on myvnet2, on node3:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1 #this is the anycast ip of myvnet2
        mtu 1450
----

Then, you should be able to ping vm2 from vm1, and vm1 from vm2.

From vm2 on node3, if you ping an external ip, the packet will go
to the myvnet2 gateway, then be routed to the gateway nodes (node1 or node2),
and from there be routed through node1's or node2's default gateway.

Of course, you need to add reverse routes for 10.0.1.0/24 and 10.0.2.0/24
via node1 or node2 on your external gateway.

If you have configured an external bgp router, the bgp-evpn routes (10.0.1.0/24
and 10.0.2.0/24) will be announced dynamically.
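On the nodes, the evpn state can be inspected through frr and the kernel (the vrf and interface names are generated by the SDN module and may differ):

----
# evpn mac/ip and prefix advertisements seen by frr
vtysh -c "show bgp l2vpn evpn route"

# vxlan network identifiers known to frr
vtysh -c "show evpn vni"

# kernel vrfs created for the evpn zones
ip vrf show
----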