1 <?xml version="1.0" encoding="utf-8"?>
2 <manpage program="ovn-architecture" section="7" title="OVN Architecture">
3 <h1>Name</h1>
4 <p>ovn-architecture -- Open Virtual Network architecture</p>
5
6 <h1>Description</h1>
7
8 <p>
9 OVN, the Open Virtual Network, is a system to support virtual network
10 abstraction. OVN complements the existing capabilities of OVS to add
11 native support for virtual network abstractions, such as virtual L2 and L3
12 overlays and security groups. Services such as DHCP are also desirable
13 features. Just like OVS, OVN's design goal is to have a production-quality
14 implementation that can operate at significant scale.
15 </p>
16
17 <p>
18 An OVN deployment consists of several components:
19 </p>
20
21 <ul>
22 <li>
23 <p>
24 A <dfn>Cloud Management System</dfn> (<dfn>CMS</dfn>), which is
25 OVN's ultimate client (via its users and administrators). OVN
26 integration requires installing a CMS-specific plugin and
27 related software (see below). OVN initially targets OpenStack
as its CMS.
29 </p>
30
31 <p>
32 We generally speak of ``the'' CMS, but one can imagine scenarios in
33 which multiple CMSes manage different parts of an OVN deployment.
34 </p>
35 </li>
36
37 <li>
38 An OVN Database physical or virtual node (or, eventually, cluster)
39 installed in a central location.
40 </li>
41
42 <li>
43 One or more (usually many) <dfn>hypervisors</dfn>. Hypervisors must run
44 Open vSwitch and implement the interface described in
45 <code>IntegrationGuide.md</code> in the OVS source tree. Any hypervisor
46 platform supported by Open vSwitch is acceptable.
47 </li>
48
49 <li>
50 <p>
51 Zero or more <dfn>gateways</dfn>. A gateway extends a tunnel-based
52 logical network into a physical network by bidirectionally forwarding
53 packets between tunnels and a physical Ethernet port. This allows
54 non-virtualized machines to participate in logical networks. A gateway
55 may be a physical host, a virtual machine, or an ASIC-based hardware
56 switch that supports the <code>vtep</code>(5) schema. (Support for the
57 latter will come later in OVN implementation.)
58 </p>
59
60 <p>
Hypervisors and gateways are together called <dfn>transport nodes</dfn>
or <dfn>chassis</dfn>.
63 </p>
64 </li>
65 </ul>
66
67 <p>
68 The diagram below shows how the major components of OVN and related
69 software interact. Starting at the top of the diagram, we have:
70 </p>
71
72 <ul>
73 <li>
74 The Cloud Management System, as defined above.
75 </li>
76
77 <li>
78 <p>
79 The <dfn>OVN/CMS Plugin</dfn> is the component of the CMS that
80 interfaces to OVN. In OpenStack, this is a Neutron plugin.
81 The plugin's main purpose is to translate the CMS's notion of logical
82 network configuration, stored in the CMS's configuration database in a
83 CMS-specific format, into an intermediate representation understood by
84 OVN.
85 </p>
86
87 <p>
88 This component is necessarily CMS-specific, so a new plugin needs to be
89 developed for each CMS that is integrated with OVN. All of the
90 components below this one in the diagram are CMS-independent.
91 </p>
92 </li>
93
94 <li>
95 <p>
96 The <dfn>OVN Northbound Database</dfn> receives the intermediate
97 representation of logical network configuration passed down by the
98 OVN/CMS Plugin. The database schema is meant to be ``impedance
99 matched'' with the concepts used in a CMS, so that it directly supports
100 notions of logical switches, routers, ACLs, and so on. See
101 <code>ovn-nb</code>(5) for details.
102 </p>
103
104 <p>
105 The OVN Northbound Database has only two clients: the OVN/CMS Plugin
106 above it and <code>ovn-northd</code> below it.
107 </p>
108 </li>
109
110 <li>
111 <code>ovn-northd</code>(8) connects to the OVN Northbound Database
112 above it and the OVN Southbound Database below it. It translates the
113 logical network configuration in terms of conventional network
114 concepts, taken from the OVN Northbound Database, into logical
115 datapath flows in the OVN Southbound Database below it.
116 </li>
117
118 <li>
119 <p>
120 The <dfn>OVN Southbound Database</dfn> is the center of the system.
121 Its clients are <code>ovn-northd</code>(8) above it and
122 <code>ovn-controller</code>(8) on every transport node below it.
123 </p>
124
125 <p>
126 The OVN Southbound Database contains three kinds of data: <dfn>Physical
127 Network</dfn> (PN) tables that specify how to reach hypervisor and
128 other nodes, <dfn>Logical Network</dfn> (LN) tables that describe the
129 logical network in terms of ``logical datapath flows,'' and
130 <dfn>Binding</dfn> tables that link logical network components'
131 locations to the physical network. The hypervisors populate the PN and
132 Port_Binding tables, whereas <code>ovn-northd</code>(8) populates the
133 LN tables.
134 </p>
135
136 <p>
137 OVN Southbound Database performance must scale with the number of
138 transport nodes. This will likely require some work on
139 <code>ovsdb-server</code>(1) as we encounter bottlenecks.
140 Clustering for availability may be needed.
141 </p>
142 </li>
143 </ul>
144
145 <p>
146 The remaining components are replicated onto each hypervisor:
147 </p>
148
149 <ul>
150 <li>
151 <code>ovn-controller</code>(8) is OVN's agent on each hypervisor and
152 software gateway. Northbound, it connects to the OVN Southbound
153 Database to learn about OVN configuration and status and to
populate the PN table and the <code>chassis</code> column in the
<code>Binding</code> table with the hypervisor's status.
156 Southbound, it connects to <code>ovs-vswitchd</code>(8) as an
157 OpenFlow controller, for control over network traffic, and to the
158 local <code>ovsdb-server</code>(1) to allow it to monitor and
159 control Open vSwitch configuration.
160 </li>
161
162 <li>
163 <code>ovs-vswitchd</code>(8) and <code>ovsdb-server</code>(1) are
164 conventional components of Open vSwitch.
165 </li>
166 </ul>
167
168 <pre fixed="yes">
                                  CMS
                                   |
                                   |
                       +-----------|-----------+
                       |           |           |
                       |     OVN/CMS Plugin    |
                       |           |           |
                       |           |           |
                       |   OVN Northbound DB   |
                       |           |           |
                       |           |           |
                       |       ovn-northd      |
                       |           |           |
                       +-----------|-----------+
                                   |
                                   |
                         +-------------------+
                         | OVN Southbound DB |
                         +-------------------+
                                   |
                                   |
                +------------------+------------------+
                |                  |                  |
HV 1            |                  |            HV n  |
+---------------|---------------+  .  +---------------|---------------+
|               |               |  .  |               |               |
|       ovn-controller          |  .  |       ovn-controller          |
|        |             |        |  .  |        |             |        |
|        |             |        |     |        |             |        |
|  ovs-vswitchd   ovsdb-server  |     |  ovs-vswitchd   ovsdb-server  |
|                               |     |                               |
+-------------------------------+     +-------------------------------+
201 </pre>
202
203 <h2>Chassis Setup</h2>
204
205 <p>
206 Each chassis in an OVN deployment must be configured with an Open vSwitch
207 bridge dedicated for OVN's use, called the <dfn>integration bridge</dfn>.
208 System startup scripts create this bridge prior to starting
209 <code>ovn-controller</code>. The ports on the integration bridge include:
210 </p>
211
212 <ul>
213 <li>
214 On any chassis, tunnel ports that OVN uses to maintain logical network
215 connectivity. <code>ovn-controller</code> adds, updates, and removes
216 these tunnel ports.
217 </li>
218
219 <li>
220 On a hypervisor, any VIFs that are to be attached to logical networks.
221 The hypervisor itself, or the integration between Open vSwitch and the
222 hypervisor (described in <code>IntegrationGuide.md</code>) takes care of
223 this. (This is not part of OVN or new to OVN; this is pre-existing
224 integration work that has already been done on hypervisors that support
225 OVS.)
226 </li>
227
228 <li>
229 On a gateway, the physical port used for logical network connectivity.
230 System startup scripts add this port to the bridge prior to starting
231 <code>ovn-controller</code>. This can be a patch port to another bridge,
232 instead of a physical port, in more sophisticated setups.
233 </li>
234 </ul>
235
236 <p>
237 Other ports should not be attached to the integration bridge. In
238 particular, physical ports attached to the underlay network (as opposed to
239 gateway ports, which are physical ports attached to logical networks) must
240 not be attached to the integration bridge. Underlay physical ports should
241 instead be attached to a separate Open vSwitch bridge (they need not be
242 attached to any bridge at all, in fact).
243 </p>
244
245 <p>
246 The integration bridge should be configured as described below.
247 The effect of each of these settings is documented in
248 <code>ovs-vswitchd.conf.db</code>(5):
249 </p>
250
251 <dl>
252 <dt><code>fail-mode=secure</code></dt>
253 <dd>
254 Avoids switching packets between isolated logical networks before
255 <code>ovn-controller</code> starts up. See <code>Controller Failure
256 Settings</code> in <code>ovs-vsctl</code>(8) for more information.
257 </dd>
258
259 <dt><code>other-config:disable-in-band=true</code></dt>
260 <dd>
261 Suppresses in-band control flows for the integration bridge. It would be
262 unusual for such flows to show up anyway, because OVN uses a local
263 controller (over a Unix domain socket) instead of a remote controller.
264 It's possible, however, for some other bridge in the same system to have
265 an in-band remote controller, and in that case this suppresses the flows
266 that in-band control would ordinarily set up. See <code>In-Band
267 Control</code> in <code>DESIGN.md</code> for more information.
268 </dd>
269 </dl>
270
271 <p>
272 The customary name for the integration bridge is <code>br-int</code>, but
273 another name may be used.
274 </p>
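<p>
As an illustration (not part of OVN itself), the following minimal Python
sketch creates the integration bridge with exactly the settings described
above, assuming that Open vSwitch is installed and that the customary name
<code>br-int</code> is used:
</p>

<pre fixed="yes">
import subprocess

# Create the integration bridge (idempotently) and apply the settings
# documented above: fail-mode=secure and other-config:disable-in-band=true.
def create_integration_bridge(name="br-int"):
    subprocess.check_call([
        "ovs-vsctl", "--may-exist", "add-br", name,
        "--", "set", "Bridge", name, "fail-mode=secure",
        "--", "set", "Bridge", name, "other-config:disable-in-band=true",
    ])

if __name__ == "__main__":
    create_integration_bridge()
</pre>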
275
276 <h2>Logical Networks</h2>
277
278 <p>
A <dfn>logical network</dfn> implements the same concepts as a physical
network, but it is insulated from the physical network by tunnels or
other encapsulations. This allows logical networks to have separate IP and
282 other address spaces that overlap, without conflicting, with those used for
283 physical networks. Logical network topologies can be arranged without
284 regard for the topologies of the physical networks on which they run.
285 </p>
286
287 <p>
288 Logical network concepts in OVN include:
289 </p>
290
291 <ul>
292 <li>
293 <dfn>Logical switches</dfn>, the logical version of Ethernet switches.
294 </li>
295
296 <li>
297 <dfn>Logical routers</dfn>, the logical version of IP routers. Logical
298 switches and routers can be connected into sophisticated topologies.
299 </li>
300
301 <li>
<dfn>Logical datapaths</dfn>, the logical version of OpenFlow
switches. Logical switches and routers are both implemented as logical
304 datapaths.
305 </li>
306 </ul>
307
308 <h2>Life Cycle of a VIF</h2>
309
310 <p>
311 Tables and their schemas presented in isolation are difficult to
312 understand. Here's an example.
313 </p>
314
315 <p>
A VIF on a hypervisor is a virtual network interface attached either
to a VM or to a container running directly on that hypervisor (this is
different from the interface of a container running inside a VM).
319 </p>
320
321 <p>
322 The steps in this example refer often to details of the OVN and OVN
323 Northbound database schemas. Please see <code>ovn-sb</code>(5) and
324 <code>ovn-nb</code>(5), respectively, for the full story on these
325 databases.
326 </p>
327
328 <ol>
329 <li>
330 A VIF's life cycle begins when a CMS administrator creates a new VIF
331 using the CMS user interface or API and adds it to a switch (one
332 implemented by OVN as a logical switch). The CMS updates its own
configuration. This includes associating a unique, persistent identifier
<var>vif-id</var> and an Ethernet address <var>mac</var> with the VIF.
335 </li>
336
337 <li>
338 The CMS plugin updates the OVN Northbound database to include the new
339 VIF, by adding a row to the <code>Logical_Port</code> table. In the new
340 row, <code>name</code> is <var>vif-id</var>, <code>mac</code> is
341 <var>mac</var>, <code>switch</code> points to the OVN logical switch's
<code>Logical_Switch</code> record, and other columns are initialized appropriately.
343 </li>
344
345 <li>
346 <code>ovn-northd</code> receives the OVN Northbound database update. In
347 turn, it makes the corresponding updates to the OVN Southbound database,
348 by adding rows to the OVN Southbound database <code>Logical_Flow</code>
table to reflect the new port, e.g., adding a flow to recognize that packets
destined to the new port's MAC address should be delivered to it, and
updating the flow that delivers broadcast and multicast packets to include
352 the new port. It also creates a record in the <code>Binding</code> table
353 and populates all its columns except the column that identifies the
354 <code>chassis</code>.
355 </li>
356
357 <li>
358 On every hypervisor, <code>ovn-controller</code> receives the
359 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
360 in the previous step. As long as the VM that owns the VIF is powered
361 off, <code>ovn-controller</code> cannot do much; it cannot, for example,
362 arrange to send packets to or receive packets from the VIF, because the
363 VIF does not actually exist anywhere.
364 </li>
365
366 <li>
367 Eventually, a user powers on the VM that owns the VIF. On the hypervisor
368 where the VM is powered on, the integration between the hypervisor and
369 Open vSwitch (described in <code>IntegrationGuide.md</code>) adds the VIF
370 to the OVN integration bridge and stores <var>vif-id</var> in
<code>external-ids</code>:<code>iface-id</code> to indicate that the
interface is an instantiation of the new VIF; a sketch of this step
appears after this list. (None of this code is new
373 in OVN; this is pre-existing integration work that has already been done
374 on hypervisors that support OVS.)
375 </li>
376
377 <li>
378 On the hypervisor where the VM is powered on, <code>ovn-controller</code>
379 notices <code>external-ids</code>:<code>iface-id</code> in the new
380 Interface. In response, it updates the local hypervisor's OpenFlow
381 tables so that packets to and from the VIF are properly handled.
382 Afterward, in the OVN Southbound DB, it updates the
383 <code>Binding</code> table's <code>chassis</code> column for the
384 row that links the logical port from
385 <code>external-ids</code>:<code>iface-id</code> to the hypervisor.
386 </li>
387
388 <li>
Some CMSes, including OpenStack, fully start a VM only when its
networking is ready. To support this, <code>ovn-northd</code> notices
the <code>chassis</code> column updated for the row in the
<code>Binding</code> table and pushes this upward by updating the
<ref column="up" table="Logical_Port" db="OVN_NB"/> column in the OVN
Northbound database's <ref table="Logical_Port" db="OVN_NB"/> table to
indicate that the VIF is now up. The CMS, if it uses this feature, can
then react by allowing the VM's execution to proceed.
398 </li>
399
400 <li>
401 On every hypervisor but the one where the VIF resides,
402 <code>ovn-controller</code> notices the completely populated row in the
<code>Binding</code> table. This provides <code>ovn-controller</code>
with the physical location of the logical port, so each instance updates
the OpenFlow tables of its switch (based on logical datapath flows in the
OVN Southbound DB <code>Logical_Flow</code> table) so that packets to and
from the VIF can be properly handled via tunnels.
408 </li>
409
410 <li>
411 Eventually, a user powers off the VM that owns the VIF. On the
412 hypervisor where the VM was powered off, the VIF is deleted from the OVN
413 integration bridge.
414 </li>
415
416 <li>
417 On the hypervisor where the VM was powered off,
418 <code>ovn-controller</code> notices that the VIF was deleted. In
response, it clears the <code>chassis</code> column in the
<code>Binding</code> table row for the logical port.
421 </li>
422
423 <li>
424 On every hypervisor, <code>ovn-controller</code> notices the empty
<code>chassis</code> column in the <code>Binding</code> table's row
426 for the logical port. This means that <code>ovn-controller</code> no
427 longer knows the physical location of the logical port, so each instance
428 updates its OpenFlow table to reflect that.
429 </li>
430
431 <li>
432 Eventually, when the VIF (or its entire VM) is no longer needed by
433 anyone, an administrator deletes the VIF using the CMS user interface or
434 API. The CMS updates its own configuration.
435 </li>
436
437 <li>
438 The CMS plugin removes the VIF from the OVN Northbound database,
439 by deleting its row in the <code>Logical_Port</code> table.
440 </li>
441
442 <li>
443 <code>ovn-northd</code> receives the OVN Northbound update and in turn
444 updates the OVN Southbound database accordingly, by removing or updating
445 the rows from the OVN Southbound database <code>Logical_Flow</code> table
446 and <code>Binding</code> table that were related to the now-destroyed
447 VIF.
448 </li>
449
450 <li>
451 On every hypervisor, <code>ovn-controller</code> receives the
452 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
453 in the previous step. <code>ovn-controller</code> updates OpenFlow
454 tables to reflect the update, although there may not be much to do, since
455 the VIF had already become unreachable when it was removed from the
456 <code>Binding</code> table in a previous step.
457 </li>
458 </ol>
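<p>
As an illustration of the VIF attachment step above, in which the hypervisor
integration stores <var>vif-id</var> in
<code>external-ids</code>:<code>iface-id</code>, the following Python sketch
(not part of OVN; real integrations do this in their own way) adds a VIF to
the integration bridge and labels it so that <code>ovn-controller</code> can
bind it. The device name and <var>vif-id</var> are made-up placeholders:
</p>

<pre fixed="yes">
import subprocess

# Placeholder values: the CMS chooses the vif-id and the hypervisor
# integration chooses the local device name.
VIF_ID = "vif-0d74e9f8"
DEVICE = "tap-vif-0d74e9f8"

# Add the VIF to br-int and record the OVN logical port name in
# external-ids:iface-id, which ovn-controller watches.
subprocess.check_call([
    "ovs-vsctl", "add-port", "br-int", DEVICE,
    "--", "set", "Interface", DEVICE,
    "external-ids:iface-id=" + VIF_ID,
])
</pre>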
459
460 <h2>Life Cycle of a container interface inside a VM</h2>
461
462 <p>
OVN provides virtual network abstractions by converting information
written in the OVN_NB database to OpenFlow flows on each hypervisor. Secure
virtual networking for multiple tenants can only be provided if
<code>ovn-controller</code> is the only entity that can modify flows in Open
vSwitch. When the Open vSwitch integration bridge resides in the hypervisor,
it is a fair assumption that tenant workloads running inside VMs cannot
make any changes to Open vSwitch flows.
470 </p>
471
472 <p>
473 If the infrastructure provider trusts the applications inside the
containers not to break out and modify the Open vSwitch flows, then
containers can be run directly on hypervisors. This is also the case when
containers are run inside the VMs and the Open vSwitch integration bridge
with flows added by <code>ovn-controller</code> resides in the same VM. For
both of the above cases, the workflow is the same as explained with an example
479 in the previous section ("Life Cycle of a VIF").
480 </p>
481
482 <p>
483 This section talks about the life cycle of a container interface (CIF)
484 when containers are created in the VMs and the Open vSwitch integration
485 bridge resides inside the hypervisor. In this case, even if a container
486 application breaks out, other tenants are not affected because the
487 containers running inside the VMs cannot modify the flows in the
488 Open vSwitch integration bridge.
489 </p>
490
491 <p>
492 When multiple containers are created inside a VM, there are multiple
CIFs associated with them. The network traffic associated with these
CIFs needs to reach the Open vSwitch integration bridge running in the
hypervisor for OVN to support virtual network abstractions. OVN should
also be able to distinguish network traffic coming from different CIFs.
There are two ways to distinguish the network traffic of CIFs.
498 </p>
499
500 <p>
501 One way is to provide one VIF for every CIF (1:1 model). This means that
502 there could be a lot of network devices in the hypervisor. This would slow
503 down OVS because of all the additional CPU cycles needed for the management
504 of all the VIFs. It would also mean that the entity creating the
505 containers in a VM should also be able to create the corresponding VIFs in
506 the hypervisor.
507 </p>
508
509 <p>
510 The second way is to provide a single VIF for all the CIFs (1:many model).
511 OVN could then distinguish network traffic coming from different CIFs via
512 a tag written in every packet. OVN uses this mechanism and uses VLAN as
513 the tagging mechanism.
514 </p>
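<p>
As an illustration of the 1:many model (not part of OVN itself), the
following Python sketch shows how traffic arriving on one VIF can be
attributed to a particular CIF by the VLAN tag carried in each packet; the
names and tags are made-up placeholders:
</p>

<pre fixed="yes">
# Hypothetical mapping maintained by whoever assigns the per-container tags.
cifs_by_vif_and_tag = {
    ("vif-0d74e9f8", 42): "cif-web-1",
    ("vif-0d74e9f8", 43): "cif-db-1",
}

def cif_for_packet(vif_id, vlan_tag):
    # A packet is attributed to a CIF by the (VIF, VLAN tag) pair.
    return cifs_by_vif_and_tag.get((vif_id, vlan_tag))

assert cif_for_packet("vif-0d74e9f8", 43) == "cif-db-1"
</pre>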
515
516 <ol>
517 <li>
A CIF's life cycle begins when a container is spawned inside a VM by
either the same CMS that created the VM, a tenant that owns that VM,
or even a container orchestration system that is different from the CMS
that initially created the VM. Whoever the entity is, it will need to
know the <var>vif-id</var> that is associated with the network interface
of the VM through which the container interface's network traffic is
expected to go. The entity that creates the container interface
will also need to choose an unused VLAN inside that VM.
526 </li>
527
528 <li>
529 The container spawning entity (either directly or through the CMS that
530 manages the underlying infrastructure) updates the OVN Northbound
531 database to include the new CIF, by adding a row to the
532 <code>Logical_Port</code> table. In the new row, <code>name</code> is
any unique identifier, <code>parent_name</code> is the <var>vif-id</var>
of the VM through which the CIF's network traffic is expected to go,
and <code>tag</code> is the VLAN tag that identifies the
network traffic of that CIF. (A sketch of such a row appears after this list.)
537 </li>
538
539 <li>
540 <code>ovn-northd</code> receives the OVN Northbound database update. In
541 turn, it makes the corresponding updates to the OVN Southbound database,
542 by adding rows to the OVN Southbound database's <code>Logical_Flow</code>
543 table to reflect the new port and also by creating a new row in the
544 <code>Binding</code> table and populating all its columns except the
545 column that identifies the <code>chassis</code>.
546 </li>
547
548 <li>
549 On every hypervisor, <code>ovn-controller</code> subscribes to the
550 changes in the <code>Binding</code> table. When a new row is created
551 by <code>ovn-northd</code> that includes a value in
<code>parent_port</code> column of the <code>Binding</code> table, the
<code>ovn-controller</code> in the hypervisor whose OVN integration bridge
has an interface with that same <var>vif-id</var> in
<code>external-ids</code>:<code>iface-id</code>
updates the local hypervisor's OpenFlow tables so that packets to and
from the VIF with the particular VLAN <code>tag</code> are properly
handled. Afterward, it updates the <code>chassis</code> column of
the <code>Binding</code> row to reflect the physical location.
560 </li>
561
562 <li>
563 One can only start the application inside the container after the
564 underlying network is ready. To support this, <code>ovn-northd</code>
565 notices the updated <code>chassis</code> column in <code>Binding</code>
566 table and updates the <ref column="up" table="Logical_Port"
567 db="OVN_NB"/> column in the OVN Northbound database's
568 <ref table="Logical_Port" db="OVN_NB"/> table to indicate that the
CIF is now up. The entity responsible for starting the container
application queries this value and starts the application.
571 </li>
572
573 <li>
Eventually, the entity that created and started the container stops it.
The entity, through the CMS (or directly), deletes its row in the
576 <code>Logical_Port</code> table.
577 </li>
578
579 <li>
580 <code>ovn-northd</code> receives the OVN Northbound update and in turn
581 updates the OVN Southbound database accordingly, by removing or updating
582 the rows from the OVN Southbound database <code>Logical_Flow</code> table
583 that were related to the now-destroyed CIF. It also deletes the row in
584 the <code>Binding</code> table for that CIF.
585 </li>
586
587 <li>
588 On every hypervisor, <code>ovn-controller</code> receives the
589 <code>Logical_Flow</code> table updates that <code>ovn-northd</code> made
590 in the previous step. <code>ovn-controller</code> updates OpenFlow
591 tables to reflect the update.
592 </li>
593 </ol>
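<p>
The following Python sketch (illustrative only; the column names follow the
description in this section and the values are placeholders) shows how a
container spawning entity might pick an unused VLAN tag and assemble the
<code>Logical_Port</code> row for a CIF, as referenced above:
</p>

<pre fixed="yes">
def make_cif_logical_port(name, parent_vif_id, used_tags):
    # Pick an unused VLAN tag inside the VM, then build the row described
    # above: a unique name, the parent VIF, and the chosen tag.
    tag = next(t for t in range(1, 4096) if t not in used_tags)
    return {"name": name, "parent_name": parent_vif_id, "tag": tag}

# Example: a second container interface behind the VIF "vif-0d74e9f8".
row = make_cif_logical_port("cif-db-1", "vif-0d74e9f8", used_tags={42})
assert row == {"name": "cif-db-1", "parent_name": "vif-0d74e9f8", "tag": 1}
</pre>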
594
595 <h2>Life Cycle of a Packet</h2>
596
597 <p>
598 This section describes how a packet travels from one virtual machine or
599 container to another through OVN. This description focuses on the physical
600 treatment of a packet; for a description of the logical life cycle of a
601 packet, please refer to the <code>Logical_Flow</code> table in
602 <code>ovn-sb</code>(5).
603 </p>
604
605 <p>
606 This section mentions several data and metadata fields, for clarity
607 summarized here:
608 </p>
609
610 <dl>
611 <dt>tunnel key</dt>
612 <dd>
613 When OVN encapsulates a packet in Geneve or another tunnel, it attaches
614 extra data to it to allow the receiving OVN instance to process it
615 correctly. This takes different forms depending on the particular
616 encapsulation, but in each case we refer to it here as the ``tunnel
617 key.'' See <code>Tunnel Encapsulations</code>, below, for details.
618 </dd>
619
620 <dt>logical datapath field</dt>
621 <dd>
622 A field that denotes the logical datapath through which a packet is being
623 processed. OVN uses the field that OpenFlow 1.1+ simply (and
624 confusingly) calls ``metadata'' to store the logical datapath. (This
625 field is passed across tunnels as part of the tunnel key.)
626 </dd>
627
628 <dt>logical input port field</dt>
629 <dd>
630 A field that denotes the logical port from which the packet entered the
631 logical datapath. OVN stores this in a Nicira extension register. (This
632 field is passed across tunnels as part of the tunnel key.)
633 </dd>
634
635 <dt>logical output port field</dt>
636 <dd>
637 A field that denotes the logical port from which the packet will leave
638 the logical datapath. This is initialized to 0 at the beginning of the
639 logical ingress pipeline. OVN stores this in a Nicira extension
640 register. (This field is passed across tunnels as part of the tunnel
641 key.)
642 </dd>
643
644 <dt>VLAN ID</dt>
645 <dd>
646 The VLAN ID is used as an interface between OVN and containers nested
647 inside a VM (see <code>Life Cycle of a container interface inside a
648 VM</code>, above, for more information).
649 </dd>
650 </dl>
651
652 <p>
653 Initially, a VM or container on the ingress hypervisor sends a packet on a
654 port attached to the OVN integration bridge. Then:
655 </p>
656
657 <ol>
658 <li>
659 <p>
660 OpenFlow table 0 performs physical-to-logical translation. It matches
661 the packet's ingress port. Its actions annotate the packet with
662 logical metadata, by setting the logical datapath field to identify the
663 logical datapath that the packet is traversing and the logical input
664 port field to identify the ingress port. Then it resubmits to table 16
665 to enter the logical ingress pipeline.
666 </p>
667
668 <p>
669 Packets that originate from a container nested within a VM are treated
670 in a slightly different way. The originating container can be
671 distinguished based on the VIF-specific VLAN ID, so the
672 physical-to-logical translation flows additionally match on VLAN ID and
673 the actions strip the VLAN header. Following this step, OVN treats
674 packets from containers just like any other packets.
675 </p>
676
677 <p>
678 Table 0 also processes packets that arrive from other chassis. It
679 distinguishes them from other packets by ingress port, which is a
680 tunnel. As with packets just entering the OVN pipeline, the actions
681 annotate these packets with logical datapath and logical ingress port
682 metadata. In addition, the actions set the logical output port field,
683 which is available because in OVN tunneling occurs after the logical
684 output port is known. These three pieces of information are obtained
685 from the tunnel encapsulation metadata (see <code>Tunnel
686 Encapsulations</code> for encoding details). Then the actions resubmit
687 to table 33 to enter the logical egress pipeline.
688 </p>
689 </li>
690
691 <li>
692 <p>
693 OpenFlow tables 16 through 31 execute the logical ingress pipeline from
694 the <code>Logical_Flow</code> table in the OVN Southbound database.
695 These tables are expressed entirely in terms of logical concepts like
696 logical ports and logical datapaths. A big part of
697 <code>ovn-controller</code>'s job is to translate them into equivalent
OpenFlow (in particular, it translates the table numbers:
<code>Logical_Flow</code> tables 0 through 15 become OpenFlow tables 16
through 31; a sketch of this mapping appears after this list). For a
given packet, the logical ingress pipeline
701 eventually executes zero or more <code>output</code> actions:
702 </p>
703
704 <ul>
705 <li>
706 If the pipeline executes no <code>output</code> actions at all, the
707 packet is effectively dropped.
708 </li>
709
710 <li>
711 Most commonly, the pipeline executes one <code>output</code> action,
712 which <code>ovn-controller</code> implements by resubmitting the
713 packet to table 32.
714 </li>
715
716 <li>
717 If the pipeline can execute more than one <code>output</code> action,
718 then each one is separately resubmitted to table 32. This can be
719 used to send multiple copies of the packet to multiple ports. (If
720 the packet was not modified between the <code>output</code> actions,
721 and some of the copies are destined to the same hypervisor, then
722 using a logical multicast output port would save bandwidth between
723 hypervisors.)
724 </li>
725 </ul>
726 </li>
727
728 <li>
729 <p>
730 OpenFlow tables 32 through 47 implement the <code>output</code> action
731 in the logical ingress pipeline. Specifically, table 32 handles
732 packets to remote hypervisors, table 33 handles packets to the local
733 hypervisor, and table 34 discards packets whose logical ingress and
734 egress port are the same.
735 </p>
736
737 <p>
738 Each flow in table 32 matches on a logical output port for unicast or
739 multicast logical ports that include a logical port on a remote
740 hypervisor. Each flow's actions implement sending a packet to the port
741 it matches. For unicast logical output ports on remote hypervisors,
742 the actions set the tunnel key to the correct value, then send the
743 packet on the tunnel port to the correct hypervisor. (When the remote
744 hypervisor receives the packet, table 0 there will recognize it as a
745 tunneled packet and pass it along to table 33.) For multicast logical
746 output ports, the actions send one copy of the packet to each remote
747 hypervisor, in the same way as for unicast destinations. If a
748 multicast group includes a logical port or ports on the local
749 hypervisor, then its actions also resubmit to table 33. Table 32 also
750 includes a fallback flow that resubmits to table 33 if there is no
751 other match.
752 </p>
753
754 <p>
755 Flows in table 33 resemble those in table 32 but for logical ports that
756 reside locally rather than remotely. For unicast logical output ports
757 on the local hypervisor, the actions just resubmit to table 34. For
758 multicast output ports that include one or more logical ports on the
759 local hypervisor, for each such logical port <var>P</var>, the actions
760 change the logical output port to <var>P</var>, then resubmit to table
761 34.
762 </p>
763
764 <p>
765 Table 34 matches and drops packets for which the logical input and
766 output ports are the same. It resubmits other packets to table 48.
767 </p>
768 </li>
769
770 <li>
771 <p>
772 OpenFlow tables 48 through 63 execute the logical egress pipeline from
773 the <code>Logical_Flow</code> table in the OVN Southbound database.
774 The egress pipeline can perform a final stage of validation before
775 packet delivery. Eventually, it may execute an <code>output</code>
776 action, which <code>ovn-controller</code> implements by resubmitting to
777 table 64. A packet for which the pipeline never executes
778 <code>output</code> is effectively dropped (although it may have been
779 transmitted through a tunnel across a physical network).
780 </p>
781
782 <p>
783 The egress pipeline cannot change the logical output port or cause
784 further tunneling.
785 </p>
786 </li>
787
788 <li>
789 <p>
790 OpenFlow table 64 performs logical-to-physical translation, the
791 opposite of table 0. It matches the packet's logical egress port. Its
792 actions output the packet to the port attached to the OVN integration
793 bridge that represents that logical port. If the logical egress port
794 is a container nested with a VM, then before sending the packet the
795 actions push on a VLAN header with an appropriate VLAN ID.
796 </p>
797 </li>
798 </ol>
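<p>
To make the table layout above concrete, the following Python sketch
(illustrative only; this is not how <code>ovn-controller</code> is
implemented) captures the table-number translation and the fixed tables
described in this section:
</p>

<pre fixed="yes">
# Fixed OpenFlow tables described in this section.
PHYS_TO_LOG = 0      # physical-to-logical translation
OUTPUT_REMOTE = 32   # "output" to logical ports on remote hypervisors
OUTPUT_LOCAL = 33    # "output" to logical ports on the local hypervisor
DROP_LOOPBACK = 34   # drop packets whose ingress and egress ports match
LOG_TO_PHYS = 64     # logical-to-physical translation

def openflow_table(pipeline, logical_table):
    # Logical_Flow tables 0 through 15 become OpenFlow tables 16 through 31
    # for the ingress pipeline and 48 through 63 for the egress pipeline.
    assert pipeline in ("ingress", "egress") and logical_table in range(16)
    return (16 if pipeline == "ingress" else 48) + logical_table

assert openflow_table("ingress", 0) == 16
assert openflow_table("egress", 15) == 63
</pre>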
799
800 <h1>Design Decisions</h1>
801
802 <h2>Tunnel Encapsulations</h2>
803
804 <p>
805 OVN annotates logical network packets that it sends from one hypervisor to
806 another with the following three pieces of metadata, which are encoded in
807 an encapsulation-specific fashion:
808 </p>
809
810 <ul>
811 <li>
812 24-bit logical datapath identifier, from the <code>tunnel_key</code>
813 column in the OVN Southbound <code>Datapath_Binding</code> table.
814 </li>
815
816 <li>
817 15-bit logical ingress port identifier. ID 0 is reserved for internal
818 use within OVN. IDs 1 through 32767, inclusive, may be assigned to
819 logical ports (see the <code>tunnel_key</code> column in the OVN
820 Southbound <code>Port_Binding</code> table).
821 </li>
822
823 <li>
824 16-bit logical egress port identifier. IDs 0 through 32767 have the same
825 meaning as for logical ingress ports. IDs 32768 through 65535,
826 inclusive, may be assigned to logical multicast groups (see the
827 <code>tunnel_key</code> column in the OVN Southbound
828 <code>Multicast_Group</code> table).
829 </li>
830 </ul>
831
832 <p>
833 For hypervisor-to-hypervisor traffic, OVN supports only Geneve and STT
834 encapsulations, for the following reasons:
835 </p>
836
837 <ul>
838 <li>
839 Only STT and Geneve support the large amounts of metadata (over 32 bits
840 per packet) that OVN uses (as described above).
841 </li>
842
843 <li>
STT and Geneve use randomized UDP or TCP source ports, allowing
efficient distribution among multiple paths in environments that use ECMP
846 in their underlay.
847 </li>
848
849 <li>
850 NICs are available to offload STT and Geneve encapsulation and
851 decapsulation.
852 </li>
853 </ul>
854
855 <p>
856 Due to its flexibility, the preferred encapsulation between hypervisors is
857 Geneve. For Geneve encapsulation, OVN transmits the logical datapath
858 identifier in the Geneve VNI.
859
860 <!-- Keep the following in sync with ovn/controller/physical.h. -->
861 OVN transmits the logical ingress and logical egress ports in a TLV with
862 class 0xffff, type 0, and a 32-bit value encoded as follows, from MSB to
863 LSB:
864 </p>
865
866 <diagram>
867 <header name="">
868 <bits name="rsv" above="1" below="0" width=".25"/>
869 <bits name="ingress port" above="15" width=".75"/>
870 <bits name="egress port" above="16" width=".75"/>
871 </header>
872 </diagram>
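<p>
The following Python sketch (illustrative only) packs the logical metadata
into this Geneve form: the datapath identifier becomes the 24-bit VNI, and
the ingress and egress ports form the 32-bit option value shown above, MSB
to LSB:
</p>

<pre fixed="yes">
def encode_geneve_metadata(datapath, ingress_port, egress_port):
    # 24-bit VNI carries the logical datapath identifier.
    assert datapath in range(2**24)
    # 32-bit option value (class 0xffff, type 0): 1 reserved bit,
    # 15-bit ingress port, 16-bit egress port, MSB to LSB.
    assert ingress_port in range(2**15) and egress_port in range(2**16)
    bits = "0" + format(ingress_port, "015b") + format(egress_port, "016b")
    return datapath, int(bits, 2)

# Example: datapath 7, ingress port 5, egress port 9.
assert encode_geneve_metadata(7, 5, 9) == (7, 5 * 2**16 + 9)
</pre>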
873
874 <p>
875 Environments whose NICs lack Geneve offload may prefer STT encapsulation
876 for performance reasons. For STT encapsulation, OVN encodes all three
877 pieces of logical metadata in the STT 64-bit tunnel ID as follows, from MSB
878 to LSB:
879 </p>
880
881 <diagram>
882 <header name="">
883 <bits name="reserved" above="9" below="0" width=".5"/>
884 <bits name="ingress port" above="15" width=".75"/>
885 <bits name="egress port" above="16" width=".75"/>
886 <bits name="datapath" above="24" width="1.25"/>
887 </header>
888 </diagram>
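<p>
Similarly, this sketch (illustrative only) packs and unpacks the 64-bit STT
tunnel ID laid out above:
</p>

<pre fixed="yes">
def encode_stt_key(datapath, ingress_port, egress_port):
    # 9 reserved bits, then the 15-bit ingress port, 16-bit egress port,
    # and 24-bit datapath identifier, MSB to LSB.
    bits = ("0" * 9 + format(ingress_port, "015b")
            + format(egress_port, "016b") + format(datapath, "024b"))
    return int(bits, 2)

def decode_stt_key(key):
    bits = format(key, "064b")
    return {"ingress_port": int(bits[9:24], 2),
            "egress_port": int(bits[24:40], 2),
            "datapath": int(bits[40:64], 2)}

assert decode_stt_key(encode_stt_key(7, 5, 9)) == {
    "ingress_port": 5, "egress_port": 9, "datapath": 7}
</pre>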
889
890 <p>
891 For connecting to gateways, in addition to Geneve and STT, OVN supports
892 VXLAN, because only VXLAN support is common on top-of-rack (ToR) switches.
Currently, gateways have a feature set that matches the capabilities
defined by the VTEP schema, so fewer bits of metadata are necessary. In
895 the future, gateways that do not support encapsulations with large amounts
896 of metadata may continue to have a reduced feature set.
897 </p>
898 </manpage>