<?xml version="1.0" encoding="utf-8"?>
<manpage program="ovn-architecture" section="7" title="OVN Architecture">
  <h1>Name</h1>
  <p>ovn-architecture -- Open Virtual Network architecture</p>

  <h1>Description</h1>

  <p>
    OVN, the Open Virtual Network, is a system to support virtual network
    abstraction.  OVN complements the existing capabilities of OVS to add
    native support for virtual network abstractions, such as virtual L2 and
    L3 overlays and security groups.  Services such as DHCP are also
    desirable features.  Just like OVS, OVN's design goal is to have a
    production-quality implementation that can operate at significant scale.
  </p>

  <p>
    An OVN deployment consists of several components:
  </p>

  <ul>
    <li>
      <p>
        A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
        OVN's ultimate client (via its users and administrators).  OVN
        integration requires installing a CMS-specific plugin and
        related software (see below).  OVN initially targets OpenStack
        as its CMS.
      </p>

      <p>
        We generally speak of ``the'' CMS, but one can imagine scenarios in
        which multiple CMSes manage different parts of an OVN deployment.
      </p>
    </li>

    <li>
      An OVN Database physical or virtual node (or, eventually, cluster)
      installed in a central location.
    </li>

    <li>
      One or more (usually many) <dfn>hypervisors</dfn>.  Hypervisors must
      run Open vSwitch and implement the interface described in
      <code>IntegrationGuide.md</code> in the OVS source tree.  Any
      hypervisor platform supported by Open vSwitch is acceptable.
    </li>

    <li>
      <p>
        Zero or more <dfn>gateways</dfn>.  A gateway extends a tunnel-based
        logical network into a physical network by bidirectionally forwarding
        packets between tunnels and a physical Ethernet port.  This allows
        non-virtualized machines to participate in logical networks.  A
        gateway may be a physical host, a virtual machine, or an ASIC-based
        hardware switch that supports the <code>vtep</code>(5) schema.
        (Support for the latter will come later in the OVN implementation.)
      </p>

      <p>
        Hypervisors and gateways are together called <dfn>transport
        nodes</dfn> or <dfn>chassis</dfn>.
      </p>
    </li>
  </ul>

  <p>
    The diagram below shows how the major components of OVN and related
    software interact.  Starting at the top of the diagram, we have:
  </p>

  <ul>
    <li>
      The Cloud Management System, as defined above.
    </li>

    <li>
      <p>
        The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
        interfaces to OVN.  In OpenStack, this is a Neutron plugin.
        The plugin's main purpose is to translate the CMS's notion of logical
        network configuration, stored in the CMS's configuration database in
        a CMS-specific format, into an intermediate representation understood
        by OVN.
      </p>

      <p>
        This component is necessarily CMS-specific, so a new plugin needs to
        be developed for each CMS that is integrated with OVN.  All of the
        components below this one in the diagram are CMS-independent.
      </p>
    </li>

    <li>
      <p>
        The <dfn>OVN Northbound Database</dfn> receives the intermediate
        representation of logical network configuration passed down by the
        OVN/CMS Plugin.  The database schema is meant to be ``impedance
        matched'' with the concepts used in a CMS, so that it directly
        supports notions of logical switches, routers, ACLs, and so on.  See
        <code>ovn-nb</code>(5) for details.
      </p>

      <p>
        The OVN Northbound Database has only two clients: the OVN/CMS Plugin
        above it and <code>ovn-northd</code> below it.
      </p>
    </li>

    <li>
      <code>ovn-northd</code>(8) connects to the OVN Northbound Database
      above it and the OVN Southbound Database below it.  It translates the
      logical network configuration, expressed in conventional network
      concepts in the OVN Northbound Database, into logical
      datapath flows in the OVN Southbound Database below it.
    </li>

    <li>
      <p>
        The <dfn>OVN Southbound Database</dfn> is the center of the system.
        Its clients are <code>ovn-northd</code>(8) above it and
        <code>ovn-controller</code>(8) on every transport node below it.
      </p>

      <p>
        The OVN Southbound Database contains three kinds of data:
        <dfn>Physical Network</dfn> (PN) tables that specify how to reach
        hypervisor and other nodes, <dfn>Logical Network</dfn> (LN) tables
        that describe the logical network in terms of ``logical datapath
        flows,'' and <dfn>Binding</dfn> tables that link logical network
        components' locations to the physical network.  The hypervisors
        populate the PN and Binding tables, whereas
        <code>ovn-northd</code>(8) populates the LN tables.
      </p>

      <p>
        OVN Southbound Database performance must scale with the number of
        transport nodes.  This will likely require some work on
        <code>ovsdb-server</code>(1) as we encounter bottlenecks.
        Clustering for availability may be needed.
      </p>
    </li>
  </ul>
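
  <p>
    One way to see this division of labor in a running deployment is to
    watch the Southbound database react as the CMS makes Northbound changes.
    The sketch below is illustrative only; the database socket path is a
    deployment-specific assumption, not something OVN fixes:
  </p>

  <pre fixed="yes">
# Dump the logical configuration handed down by the CMS plugin.
ovsdb-client dump unix:/var/run/openvswitch/db.sock OVN_Northbound

# Watch ovn-northd translate it into logical datapath flows.
ovsdb-client monitor unix:/var/run/openvswitch/db.sock \
    OVN_Southbound Pipeline
  </pre>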

  <p>
    The remaining components are replicated onto each hypervisor:
  </p>

  <ul>
    <li>
      <code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
      software gateway.  Northbound, it connects to the OVN Southbound
      Database to learn about OVN configuration and status and to
      populate the PN table and the <code>chassis</code> column in the
      <code>Bindings</code> table with the hypervisor's status.
      Southbound, it connects to <code>ovs-vswitchd</code>(8) as an
      OpenFlow controller, for control over network traffic, and to the
      local <code>ovsdb-server</code>(1) to allow it to monitor and
      control Open vSwitch configuration.  (A sketch of this configuration
      follows this list.)
    </li>

    <li>
      <code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
      conventional components of Open vSwitch.
    </li>
  </ul>
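
  <p>
    Before <code>ovn-controller</code> can do its job, the deployment must
    tell it where the OVN Southbound Database lives and how to encapsulate
    tunneled traffic.  A minimal sketch, assuming a version of
    <code>ovn-controller</code> that reads the <code>ovn-remote</code>,
    <code>ovn-encap-type</code>, and <code>ovn-encap-ip</code> keys from
    the local Open vSwitch database (the address and IP below are examples;
    check <code>ovn-controller</code>(8) for your version):
  </p>

  <pre fixed="yes">
ovs-vsctl set Open_vSwitch . \
    external-ids:ovn-remote="tcp:192.168.0.10:6640" \
    external-ids:ovn-encap-type=geneve \
    external-ids:ovn-encap-ip=192.168.0.11
  </pre>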

  <pre fixed="yes">
                                  CMS
                                   |
                                   |
                       +-----------|-----------+
                       |           |           |
                       |     OVN/CMS Plugin    |
                       |           |           |
                       |           |           |
                       |   OVN Northbound DB   |
                       |           |           |
                       |           |           |
                       |        ovn-northd     |
                       |           |           |
                       +-----------|-----------+
                                   |
                                   |
                         +-------------------+
                         | OVN Southbound DB |
                         +-------------------+
                                   |
                                   |
                +------------------+------------------+
                |                  |                  |
 HV 1           |                  |    HV n          |
+---------------|---------------+  .  +---------------|---------------+
|               |               |  .  |               |               |
|        ovn-controller         |  .  |        ovn-controller         |
|         |            |        |  .  |         |            |        |
|         |            |        |     |         |            |        |
|  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
|                               |     |                               |
+-------------------------------+     +-------------------------------+
  </pre>

  <h2>Chassis Setup</h2>

  <p>
    Each chassis in an OVN deployment must be configured with an Open
    vSwitch bridge dedicated for OVN's use, called the <dfn>integration
    bridge</dfn>.  System startup scripts create this bridge prior to
    starting <code>ovn-controller</code>.  The ports on the integration
    bridge include:
  </p>

  <ul>
    <li>
      On any chassis, tunnel ports that OVN uses to maintain logical network
      connectivity.  <code>ovn-controller</code> adds, updates, and removes
      these tunnel ports (see the example listing after this list).
    </li>

    <li>
      On a hypervisor, any VIFs that are to be attached to logical networks.
      The hypervisor itself, or the integration between Open vSwitch and the
      hypervisor (described in <code>IntegrationGuide.md</code>), takes care
      of this.  (This is not part of OVN or new to OVN; this is pre-existing
      integration work that has already been done on hypervisors that
      support OVS.)
    </li>

    <li>
      On a gateway, the physical port used for logical network connectivity.
      System startup scripts add this port to the bridge prior to starting
      <code>ovn-controller</code>.  This can be a patch port to another
      bridge, instead of a physical port, in more sophisticated setups.
    </li>
  </ul>
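
  <p>
    Putting this together, the ports on a hypervisor's integration bridge
    might look as follows.  This is an illustrative sketch only: the tunnel
    port and VIF names are invented for the example, not names that OVN
    guarantees:
  </p>

  <pre fixed="yes">
$ ovs-vsctl list-ports br-int
ovn-hv2-0            # tunnel to another chassis, added by ovn-controller
ovn-hv3-0            # tunnel to another chassis, added by ovn-controller
vif57b2-1f           # a VIF attached by the hypervisor integration
  </pre>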

  <p>
    Other ports should not be attached to the integration bridge.  In
    particular, physical ports attached to the underlay network (as opposed
    to gateway ports, which are physical ports attached to logical networks)
    must not be attached to the integration bridge.  Underlay physical ports
    should instead be attached to a separate Open vSwitch bridge (in fact,
    they need not be attached to any bridge at all).
  </p>

  <p>
    The integration bridge must be configured with failure mode ``secure''
    to avoid switching packets between isolated logical networks before
    <code>ovn-controller</code> starts up.  See <code>Controller Failure
    Settings</code> in <code>ovs-vsctl</code>(8) for more information.
  </p>

  <p>
    The customary name for the integration bridge is <code>br-int</code>,
    but another name may be used.
  </p>
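
  <p>
    For example, a startup script might create the integration bridge this
    way.  This is a minimal sketch; the real script belongs to the
    deployment, not to OVN itself:
  </p>

  <pre fixed="yes">
ovs-vsctl --may-exist add-br br-int
ovs-vsctl set-fail-mode br-int secure
  </pre>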

  <h2>Logical Networks</h2>

  <p>
    A <dfn>logical network</dfn> implements the same concepts as a physical
    network, but it is insulated from the physical network by tunnels or
    other encapsulations.  This allows logical networks to have separate IP
    and other address spaces that overlap, without conflicting, with those
    used for physical networks.  Logical network topologies can be arranged
    without regard for the topologies of the physical networks on which
    they run.
  </p>

  <p>
    Logical network concepts in OVN include:
  </p>

  <ul>
    <li>
      <dfn>Logical switches</dfn>, the logical version of Ethernet switches.
    </li>

    <li>
      <dfn>Logical routers</dfn>, the logical version of IP routers.
      Logical switches and routers can be connected into sophisticated
      topologies.
    </li>

    <li>
      <dfn>Logical datapaths</dfn>, the logical version of an OpenFlow
      switch.  Logical switches and routers are both implemented as logical
      datapaths.
    </li>
  </ul>
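
  <p>
    As a concrete example, a CMS plugin (or an administrator experimenting
    by hand) could create a logical switch in the OVN Northbound database.
    A minimal sketch, assuming an <code>ovn-nbctl</code> utility with an
    <code>lswitch-add</code> command as in early OVN; the switch name is
    arbitrary:
  </p>

  <pre fixed="yes">
ovn-nbctl lswitch-add sw0
  </pre>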

  <h2>Life Cycle of a VIF</h2>

  <p>
    Tables and their schemas presented in isolation are difficult to
    understand.  Here's an example.
  </p>

  <p>
    A VIF on a hypervisor is a virtual network interface attached either
    to a VM or to a container running directly on that hypervisor (this is
    different from the interface of a container running inside a VM).
  </p>

  <p>
    The steps in this example refer often to details of the OVN Southbound
    and OVN Northbound database schemas.  Please see <code>ovn-sb</code>(5)
    and <code>ovn-nb</code>(5), respectively, for the full story on these
    databases.
  </p>

  <ol>
    <li>
      A VIF's life cycle begins when a CMS administrator creates a new VIF
      using the CMS user interface or API and adds it to a switch (one
      implemented by OVN as a logical switch).  The CMS updates its own
      configuration.  This includes associating a unique, persistent
      identifier <var>vif-id</var> and Ethernet address <var>mac</var> with
      the VIF.
    </li>

    <li>
      The CMS plugin updates the OVN Northbound database to include the new
      VIF, by adding a row to the <code>Logical_Port</code> table.  In the
      new row, <code>name</code> is <var>vif-id</var>, <code>mac</code> is
      <var>mac</var>, <code>switch</code> points to the OVN logical switch's
      Logical_Switch record, and other columns are initialized
      appropriately.  (A command-line sketch of this step and the later
      plugging step appears after this list.)
    </li>

    <li>
      <code>ovn-northd</code> receives the OVN Northbound database update.
      In turn, it makes the corresponding updates to the OVN Southbound
      database, by adding rows to the OVN Southbound database
      <code>Pipeline</code> table to reflect the new port, e.g. adding a
      flow to recognize that packets destined to the new port's MAC
      address should be delivered to it, and updating the flow that
      delivers broadcast and multicast packets to include the new port.
      It also creates a record in the <code>Bindings</code> table and
      populates all its columns except the column that identifies the
      <code>chassis</code>.
    </li>

    <li>
      On every hypervisor, <code>ovn-controller</code> receives the
      <code>Pipeline</code> table updates that <code>ovn-northd</code> made
      in the previous step.  As long as the VM that owns the VIF is powered
      off, <code>ovn-controller</code> cannot do much; it cannot, for
      example, arrange to send packets to or receive packets from the VIF,
      because the VIF does not actually exist anywhere.
    </li>

    <li>
      Eventually, a user powers on the VM that owns the VIF.  On the
      hypervisor where the VM is powered on, the integration between the
      hypervisor and Open vSwitch (described in
      <code>IntegrationGuide.md</code>) adds the VIF to the OVN integration
      bridge and stores <var>vif-id</var> in
      <code>external-ids</code>:<code>iface-id</code> to indicate that the
      interface is an instantiation of the new VIF.  (None of this code is
      new in OVN; this is pre-existing integration work that has already
      been done on hypervisors that support OVS.)
    </li>

    <li>
      On the hypervisor where the VM is powered on,
      <code>ovn-controller</code> notices
      <code>external-ids</code>:<code>iface-id</code> in the new Interface.
      In response, it updates the local hypervisor's OpenFlow tables so that
      packets to and from the VIF are properly handled.  Afterward, in the
      OVN Southbound DB, it updates the <code>Bindings</code> table's
      <code>chassis</code> column for the row that links the logical port
      from <code>external-ids</code>:<code>iface-id</code> to the
      hypervisor.
    </li>

    <li>
      Some CMS systems, including OpenStack, fully start a VM only when its
      networking is ready.  To support this, <code>ovn-northd</code> notices
      the <code>chassis</code> column updated for the row in the
      <code>Bindings</code> table and pushes this upward by updating the
      <ref column="up" table="Logical_Port" db="OVN_NB"/> column in the OVN
      Northbound database's <ref table="Logical_Port" db="OVN_NB"/> table to
      indicate that the VIF is now up.  The CMS, if it uses this feature,
      can then react by allowing the VM's execution to proceed.
    </li>

    <li>
      On every hypervisor but the one where the VIF resides,
      <code>ovn-controller</code> notices the completely populated row in
      the <code>Bindings</code> table.  This provides
      <code>ovn-controller</code> the physical location of the logical
      port, so each instance updates the OpenFlow tables of its switch
      (based on logical datapath flows in the OVN DB <code>Pipeline</code>
      table) so that packets to and from the VIF can be properly handled
      via tunnels.
    </li>

    <li>
      Eventually, a user powers off the VM that owns the VIF.  On the
      hypervisor where the VM was powered off, the VIF is deleted from the
      OVN integration bridge.
    </li>

    <li>
      On the hypervisor where the VM was powered off,
      <code>ovn-controller</code> notices that the VIF was deleted.  In
      response, it removes the <code>chassis</code> column content in the
      <code>Bindings</code> table for the logical port.
    </li>

    <li>
      On every hypervisor, <code>ovn-controller</code> notices the empty
      <code>chassis</code> column in the <code>Bindings</code> table's row
      for the logical port.  This means that <code>ovn-controller</code> no
      longer knows the physical location of the logical port, so each
      instance updates its OpenFlow tables to reflect that.
    </li>

    <li>
      Eventually, when the VIF (or its entire VM) is no longer needed by
      anyone, an administrator deletes the VIF using the CMS user interface
      or API.  The CMS updates its own configuration.
    </li>

    <li>
      The CMS plugin removes the VIF from the OVN Northbound database,
      by deleting its row in the <code>Logical_Port</code> table.
    </li>

    <li>
      <code>ovn-northd</code> receives the OVN Northbound update and in
      turn updates the OVN Southbound database accordingly, by removing or
      updating the rows from the OVN Southbound database
      <code>Pipeline</code> table and <code>Bindings</code> table that
      were related to the now-destroyed VIF.
    </li>

    <li>
      On every hypervisor, <code>ovn-controller</code> receives the
      <code>Pipeline</code> table updates that <code>ovn-northd</code> made
      in the previous step.  <code>ovn-controller</code> updates OpenFlow
      tables to reflect the update, although there may not be much to do,
      since the VIF had already become unreachable when it was removed from
      the <code>Bindings</code> table in a previous step.
    </li>
  </ol>
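
  <p>
    The first half of this life cycle can be sketched at the command line.
    This is illustrative only: it assumes an early <code>ovn-nbctl</code>
    with <code>lport-add</code>, <code>lport-set-macs</code>, and
    <code>lport-get-up</code> commands standing in for the CMS plugin, and
    the switch, port, and interface names are invented for the example:
  </p>

  <pre fixed="yes">
# Steps 1-2: the CMS plugin adds the VIF to a logical switch.
ovn-nbctl lport-add sw0 vif-id-1
ovn-nbctl lport-set-macs vif-id-1 00:11:22:33:44:55

# Step 5: the hypervisor integration plugs the VIF into br-int.
ovs-vsctl add-port br-int tap0 \
    -- set Interface tap0 external-ids:iface-id=vif-id-1

# Step 7: the CMS can poll for the port coming up.
ovn-nbctl lport-get-up vif-id-1
  </pre>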

  <h2>Life Cycle of a Container Interface Inside a VM</h2>

  <p>
    OVN provides virtual network abstractions by converting information
    written in the OVN_NB database to OpenFlow flows in each hypervisor.
    Secure virtual networking for multiple tenants can only be provided if
    <code>ovn-controller</code> is the only entity that can modify flows in
    Open vSwitch.  When the Open vSwitch integration bridge resides in the
    hypervisor, it is a fair assumption to make that tenant workloads
    running inside VMs cannot make any changes to Open vSwitch flows.
  </p>

  <p>
    If the infrastructure provider trusts the applications inside the
    containers not to break out and modify the Open vSwitch flows, then
    containers can be run directly on hypervisors.  This is also the case
    when containers are run inside VMs and the Open vSwitch integration
    bridge with flows added by <code>ovn-controller</code> resides in the
    same VM.  For both of the above cases, the workflow is the same as
    explained with an example in the previous section ("Life Cycle of a
    VIF").
  </p>

  <p>
    This section talks about the life cycle of a container interface (CIF)
    when containers are created in VMs and the Open vSwitch integration
    bridge resides inside the hypervisor.  In this case, even if a container
    application breaks out, other tenants are not affected because the
    containers running inside the VMs cannot modify the flows in the
    Open vSwitch integration bridge.
  </p>

  <p>
    When multiple containers are created inside a VM, there are multiple
    CIFs associated with them.  The network traffic associated with these
    CIFs needs to reach the Open vSwitch integration bridge running in the
    hypervisor for OVN to support virtual network abstractions.  OVN should
    also be able to distinguish network traffic coming from different CIFs.
    There are two ways to distinguish the network traffic of CIFs.
  </p>

  <p>
    One way is to provide one VIF for every CIF (1:1 model).  This means
    that there could be a lot of network devices in the hypervisor.  This
    would slow down OVS because of all the additional CPU cycles needed for
    the management of all the VIFs.  It would also mean that the entity
    creating the containers in a VM should also be able to create the
    corresponding VIFs in the hypervisor.
  </p>

  <p>
    The second way is to provide a single VIF for all the CIFs (1:many
    model).  OVN could then distinguish network traffic coming from
    different CIFs via a tag written in every packet.  OVN uses this
    mechanism, with VLAN as the tagging mechanism.
  </p>
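
  <p>
    For example, inside the VM the agent that spawns containers might give
    each container a VLAN subinterface of the VM's single VIF.  A minimal
    sketch using Linux <code>iproute2</code>, where the interface name and
    VLAN tag are arbitrary choices for the example:
  </p>

  <pre fixed="yes">
# Tag one container's traffic with VLAN 42 on the VM's eth0.
ip link add link eth0 name eth0.42 type vlan id 42
ip link set eth0.42 up
  </pre>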

  <ol>
    <li>
      A CIF's life cycle begins when a container is spawned inside a VM by
      either the same CMS that created the VM, a tenant that owns that VM,
      or even a container orchestration system that is different from the
      CMS that initially created the VM.  Whoever the entity is, it will
      need to know the <var>vif-id</var> associated with the network
      interface of the VM through which the container interface's network
      traffic is expected to go.  The entity that creates the container
      interface will also need to choose an unused VLAN inside that VM.
    </li>

    <li>
      The container spawning entity (either directly or through the CMS
      that manages the underlying infrastructure) updates the OVN
      Northbound database to include the new CIF, by adding a row to the
      <code>Logical_Port</code> table.  In the new row, <code>name</code>
      is any unique identifier, <code>parent_name</code> is the
      <var>vif-id</var> of the VM through which the CIF's network traffic
      is expected to go, and <code>tag</code> is the VLAN tag that
      identifies the network traffic of that CIF.  (A command-line sketch
      of this step appears after this list.)
    </li>

    <li>
      <code>ovn-northd</code> receives the OVN Northbound database update.
      In turn, it makes the corresponding updates to the OVN Southbound
      database, by adding rows to the OVN Southbound database's
      <code>Pipeline</code> table to reflect the new port and also by
      creating a new row in the <code>Bindings</code> table and
      populating all its columns except the column that identifies the
      <code>chassis</code>.
    </li>

    <li>
      On every hypervisor, <code>ovn-controller</code> subscribes to the
      changes in the <code>Bindings</code> table.  When a new row created
      by <code>ovn-northd</code> includes a value in the
      <code>parent_port</code> column of the <code>Bindings</code> table,
      the <code>ovn-controller</code> in the hypervisor whose OVN
      integration bridge has that same value in <var>vif-id</var> in
      <code>external-ids</code>:<code>iface-id</code>
      updates the local hypervisor's OpenFlow tables so that packets to and
      from the VIF with the particular VLAN <code>tag</code> are properly
      handled.  Afterward it updates the <code>chassis</code> column of
      the <code>Bindings</code> table row to reflect the physical location.
    </li>

    <li>
      One can only start the application inside the container after the
      underlying network is ready.  To support this,
      <code>ovn-northd</code> notices the updated <code>chassis</code>
      column in the <code>Bindings</code> table and updates the
      <ref column="up" table="Logical_Port" db="OVN_NB"/> column in the OVN
      Northbound database's <ref table="Logical_Port" db="OVN_NB"/> table
      to indicate that the CIF is now up.  The entity responsible for
      starting the container application queries this value and starts the
      application.
    </li>

    <li>
      Eventually the entity that created and started the container stops
      it.  The entity, through the CMS (or directly), deletes its row in
      the <code>Logical_Port</code> table.
    </li>

    <li>
      <code>ovn-northd</code> receives the OVN Northbound update and in
      turn updates the OVN Southbound database accordingly, by removing or
      updating the rows from the OVN Southbound database
      <code>Pipeline</code> table that were related to the now-destroyed
      CIF.  It also deletes the row in the <code>Bindings</code> table
      for that CIF.
    </li>

    <li>
      On every hypervisor, <code>ovn-controller</code> receives the
      <code>Pipeline</code> table updates that <code>ovn-northd</code>
      made in the previous step.  <code>ovn-controller</code> updates
      OpenFlow tables to reflect the update.
    </li>
  </ol>
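
  <p>
    As with the VIF example, the Northbound half of this can be sketched at
    the command line.  This assumes an early <code>ovn-nbctl</code> whose
    <code>lport-add</code> accepts optional parent and tag arguments; the
    names and the VLAN tag are invented for the example:
  </p>

  <pre fixed="yes">
# Add a CIF whose traffic arrives through VIF vif-id-1, tagged VLAN 42.
ovn-nbctl lport-add sw0 cif-id-1 vif-id-1 42
ovn-nbctl lport-set-macs cif-id-1 00:11:22:33:44:66
  </pre>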
</manpage>