[[chapter_pvesdn]]
Software Defined Network
========================
ifndef::manvolnum[]
:pve-toplevel:
endif::manvolnum[]

The SDN feature allows you to create virtual networks (vnets)
at the datacenter level.

To enable the SDN feature, you need to install the "libpve-network-perl" package:

----
apt install libpve-network-perl
----

A vnet is a bridge with a VLAN or VXLAN tag.

The vnets are deployed locally on each node after the configuration
is committed at the datacenter level.

You need the "ifupdown2" package installed on each node to manage local
configuration reloading:

----
apt install ifupdown2
----
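ifupdown2 is what makes the reload step possible: when an SDN change is applied, the generated interface definitions are reloaded rather than networking being fully restarted. As a sketch, the same reload can be triggered manually on a node (run as root; this is an ifupdown2 command and is not available with classic ifupdown):

```shell
# Reload all interfaces declared in /etc/network/interfaces and the
# files it sources, applying only the configuration deltas
ifreload -a
```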

Main configuration
------------------

The configuration is done at the datacenter level.

The SDN feature has 4 main sections for the configuration:

* SDN

* Zones

* Vnets

* Controllers


SDN
~~~

[thumbnail="screenshot/gui-sdn-status.png"]

This is the main panel, where you can see the deployment of zones on the different nodes.

There is an "Apply" button to push and reload the local configuration on the different nodes.


Zones
~~~~~

[thumbnail="screenshot/gui-sdn-zone.png"]

A zone defines the kind of virtual network you want to create.

It can be:

* vlan

* QinQ (stacked VLAN)

* vxlan (layer 2 VXLAN)

* bgp-evpn (VXLAN with layer 3 routing)

You can restrict a zone to specific nodes.

It's also possible to add permissions on a zone, to restrict users
to only a specific zone and the vnets in this zone.

Vnets
~~~~~

[thumbnail="screenshot/gui-sdn-vnet-evpn.png"]

A vnet is a bridge that will be deployed locally on the node,
for VM communication (like a classic vmbrX).

Vnet properties are:

* ID: an 8 character ID

* Alias: Optional longer name

* Zone: The associated zone of the vnet

* Tag: The unique VLAN or VXLAN id

* ipv4: an anycast IPv4 address (the same bridge IP deployed on each node), for bgp-evpn routing only

* ipv6: an anycast IPv6 address (the same bridge IP deployed on each node), for bgp-evpn routing only


Controllers
~~~~~~~~~~~

[thumbnail="screenshot/gui-sdn-controller.png"]

Some zone plugins (currently bgp-evpn only)
need an external controller to manage the vnets' control plane.


Zones Plugins
-------------

Common zone options:

* nodes: restrict deployment of the vnets to these nodes only

Vlan
~~~~

[thumbnail="screenshot/gui-sdn-zone-vlan.png"]

This is the simplest plugin; it reuses an existing local bridge or OVS switch,
and manages VLANs on it.
The benefit of using the SDN module is that you can create different zones with specific
vnet VLAN tags, and restrict your customers to their zones.

Specific vlan configuration options:

* bridge: a local VLAN-aware bridge or OVS switch, already configured on each local node

QinQ
~~~~

[thumbnail="screenshot/gui-sdn-zone-qinq.png"]

QinQ is stacked VLAN.
The first VLAN tag is defined on the zone (service-vlan), and
the second VLAN tag is defined on the vnets.

Your physical network switches need to support stacked VLANs!

Specific qinq configuration options:

* bridge: a local VLAN-aware bridge, already configured on each local node
* service vlan: The main VLAN tag of this zone
* mtu: You need 4 more bytes for the double VLAN tag.
You can reduce the MTU to 1496 if your physical interface MTU is 1500.

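The MTU reduction is simple arithmetic: each additional 802.1Q tag costs 4 bytes of header, so on a 1500 byte physical interface the usable MTU inside the zone shrinks by 4. A quick sanity check:

```shell
# QinQ stacks one extra 802.1Q tag (4 bytes) on top of the service VLAN
phys_mtu=1500
qinq_overhead=4
echo $((phys_mtu - qinq_overhead))   # prints 1496
```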
Vxlan
~~~~~

[thumbnail="screenshot/gui-sdn-zone-vxlan.png"]

The vxlan plugin establishes a VXLAN tunnel (overlay) on top of an existing network (underlay).
For example, you can create a private IPv4 VXLAN network on top of public internet network nodes.
This is a layer 2 tunnel only; no routing between different vnets is possible.

Each vnet will have a specific VXLAN id (1 - 16777215).

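The upper bound of that range comes from the VXLAN Network Identifier (VNI) being a 24-bit field in the VXLAN header; a quick check of the arithmetic:

```shell
# The VNI is a 24-bit field, so the highest usable id is 2^24 - 1
echo $(( (1 << 24) - 1 ))   # prints 16777215
```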
Specific vxlan configuration options:

* peers address list: an IP list of all nodes you want to communicate with (these can also be external nodes)

* mtu: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower
than the outgoing physical interface.

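The 50 bytes break down as the outer headers wrapped around each tunnelled frame: outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8). A sketch of the arithmetic for the usual 1500 byte underlay:

```shell
# VXLAN-over-IPv4 encapsulation overhead per frame:
#   outer Ethernet header: 14 bytes
#   outer IPv4 header:     20 bytes
#   UDP header:             8 bytes
#   VXLAN header:           8 bytes
echo $((14 + 20 + 8 + 8))   # prints 50
echo $((1500 - 50))         # prints 1450, a safe vnet MTU on a 1500 MTU underlay
```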
evpn
~~~~

[thumbnail="screenshot/gui-sdn-zone-evpn.png"]

This is the most complex plugin.

BGP-evpn allows you to create a routable layer 3 network.
Each vnet of an evpn zone can have an anycast IP address and MAC address.
The bridge IP is the same on each node, so VMs can use it
as their gateway.
Routing only works across the vnets of a specific zone, through a VRF.

Specific evpn configuration options:

* vrf vxlan tag: This is a vxlan-id used for the routing interconnect between the vnets;
it must be different from the vxlan-ids of the vnets.

* controller: An evpn controller needs to be defined first (see the controller plugins section).

* mtu: Because VXLAN encapsulation uses 50 bytes, the MTU needs to be 50 bytes lower
than the outgoing physical interface.


Controllers Plugins
-------------------

evpn
~~~~

[thumbnail="screenshot/gui-sdn-controller-evpn.png"]

For bgp-evpn, we need a controller to manage the control plane.
The software controller is the "frr" router.
You need to install it on each node where you want to deploy the evpn zone:

----
apt install frr
----

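Once frr is running on a node, its BGP state can be inspected from the frr shell. As a sketch (run as root on a node, assuming frr is installed and the SDN configuration has been applied):

```shell
# Show the BGP sessions frr has established with the configured peers
vtysh -c "show bgp summary"
```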
Configuration options:

* asn: A unique BGP ASN number.
It's recommended to use a private ASN number (64512 - 65534, 4200000000 - 4294967294).

* peers: An IP list of all nodes you want to communicate with (these can also be external nodes or route reflector servers).

If you want to route traffic from the SDN bgp-evpn network to the external world:

* gateway-nodes: The Proxmox nodes from which the bgp-evpn traffic will exit to the external world, through the nodes' default gateway.

If you don't want the gateway nodes to use their default gateway, but instead, for example, to send traffic to external BGP routers:

* gateway-external-peers: 192.168.0.253,192.168.0.254


Local Deployment Monitoring
---------------------------

[thumbnail="screenshot/gui-sdn-local-status.png"]

After applying the configuration in the main SDN section,
the local configuration is generated locally on each node,
in /etc/network/interfaces.d/sdn, and reloaded.

You can monitor the status of the local zones and vnets through the main tree.

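To see exactly what was generated on a node, the file can be inspected directly, and ifupdown2 can compare the running state against the configuration. A sketch (run as root on a node):

```shell
# The SDN-generated interface definitions, sourced from /etc/network/interfaces
cat /etc/network/interfaces.d/sdn

# With ifupdown2, check the running interfaces against their configuration
ifquery --check -a
```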


Vlan setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a VLAN zone:

----
id: myvlanzone
bridge: vmbr0
----

Create a vnet named 'myvnet1' with vlan-id 10:

----
id: myvnet1
zone: myvlanzone
tag: 10
----

Apply the configuration in the main SDN section, to create the vnets locally on each node,
and generate the frr config.


Create a vm1, with 1 NIC on vnet1, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a vm2, with 1 NIC on vnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Then you should be able to ping between vm1 and vm2.



QinQ setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.1/24

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

#management ip on vlan100
auto vmbr0.100
iface vmbr0.100 inet static
        address 192.168.0.2/24

source /etc/network/interfaces.d/*
----

Create a QinQ zone 'qinqzone1' with service VLAN 20:

----
id: qinqzone1
bridge: vmbr0
service vlan: 20
----

Create a QinQ zone 'qinqzone2' with service VLAN 30:

----
id: qinqzone2
bridge: vmbr0
service vlan: 30
----

Create a vnet1 with customer vlan-id 100 on qinqzone1:

----
id: myvnet1
zone: qinqzone1
tag: 100
----

Create a vnet2 with customer vlan-id 100 on qinqzone2:

----
id: myvnet2
zone: qinqzone2
tag: 100
----

Apply the configuration in the main SDN section, to create the vnets locally on each node,
and generate the frr config.


Create a vm1, with 1 NIC on vnet1, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
----

Create a vm2, with 1 NIC on vnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
----

Create a vm3, with 1 NIC on vnet2, on node1:

----
auto eth0
iface eth0 inet static
        address 10.0.3.102/24
----

Create a vm4, with 1 NIC on vnet2, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.103/24
----

Then you should be able to ping between vm1 and vm2,
and vm3 and vm4 can also ping each other.

But vm1 and vm2 can't ping vm3 and vm4,
as they are in a different zone, with a different service VLAN.


Vxlan setup example
-------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create a VXLAN zone:

----
id: myvxlanzone
peers address list: 192.168.0.1,192.168.0.2,192.168.0.3
mtu: 1450
----

Create the first vnet:

----
id: myvnet1
zone: myvxlanzone
tag: 100000
----

Apply the configuration in the main SDN section, to create the vnets locally on each node,
and generate the frr config.


Create a vm1, with 1 NIC on vnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.3.100/24
        mtu 1450
----

Create a vm2, with 1 NIC on vnet1, on node3:

----
auto eth0
iface eth0 inet static
        address 10.0.3.101/24
        mtu 1450
----

Then you should be able to ping between vm1 and vm2.

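To verify the overlay from a node's point of view, the kernel's VXLAN interfaces and the MAC addresses learned through the tunnels can be inspected with iproute2. A sketch (run as root; the interface names depend on the generated configuration):

```shell
# Show the VXLAN interfaces the SDN layer created, with their VNI and remotes
ip -d link show type vxlan

# Show the bridge forwarding entries, including those learned over the overlay
bridge fdb show
```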


EVPN setup example
------------------

node1: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.1/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node2: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.2/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

node3: /etc/network/interfaces

----
auto vmbr0
iface vmbr0 inet static
        address 192.168.0.3/24
        gateway 192.168.0.254
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        mtu 1500

source /etc/network/interfaces.d/*
----

Create an evpn controller:

----
id: myevpnctl
asn: 65000
peers: 192.168.0.1,192.168.0.2,192.168.0.3
gateway nodes: node1,node2
----

Create an evpn zone:

----
id: myevpnzone
vrf vxlan tag: 10000
controller: myevpnctl
mtu: 1450
----

Create the first vnet:

----
id: myvnet1
zone: myevpnzone
tag: 11000
ipv4: 10.0.1.1/24
mac address: 8C:73:B2:7B:F9:60 #randomly generated MAC address
----

Create the second vnet:

----
id: myvnet2
zone: myevpnzone
tag: 12000
ipv4: 10.0.2.1/24
mac address: 8C:73:B2:7B:F9:61 #random MAC, it needs to be different on each vnet
----

Apply the configuration in the main SDN section, to create the vnets locally on each node,
and generate the frr config.



Create a vm1, with 1 NIC on vnet1, on node2:

----
auto eth0
iface eth0 inet static
        address 10.0.1.100/24
        gateway 10.0.1.1   #this is the anycast ip of vnet1
        mtu 1450
----

Create a vm2, with 1 NIC on vnet2, on node3:

----
auto eth0
iface eth0 inet static
        address 10.0.2.100/24
        gateway 10.0.2.1   #this is the anycast ip of vnet2
        mtu 1450
----


Then you should be able to ping vm2 from vm1, and vm1 from vm2.

If you ping an external IP from vm2 on node3, the packet will go
to the vnet2 gateway, then will be routed to the gateway nodes (node1 or node2),
and from there it will be routed to the node1 or node2 default gateway.

Of course, you need to add reverse routes for the 10.0.1.0/24 and 10.0.2.0/24 networks to node1 and node2 on your external gateway.

If you have configured an external BGP router, the bgp-evpn routes (10.0.1.0/24 and 10.0.2.0/24)
will be announced dynamically.
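To check what the control plane is actually advertising, the frr shell on a node can list the EVPN routes, and iproute2 can show the per-zone VRF devices. A sketch (run as root; device names depend on your zone configuration):

```shell
# EVPN routes exchanged over BGP (MAC/IP advertisements and IP prefixes)
vtysh -c "show bgp l2vpn evpn"

# The VRF devices created for the evpn zones
ip -br link show type vrf
```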