..
      Licensed under the Apache License, Version 2.0 (the "License"); you may
      not use this file except in compliance with the License. You may obtain
      a copy of the License at

          http://www.apache.org/licenses/LICENSE-2.0

      Unless required by applicable law or agreed to in writing, software
      distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
      WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
      License for the specific language governing permissions and limitations
      under the License.

      Convention for heading levels in Open vSwitch documentation:

      =======  Heading 0 (reserved for the title in a document)
      -------  Heading 1
      ~~~~~~~  Heading 2
      +++++++  Heading 3
      '''''''  Heading 4

      Avoid deeper levels because they do not render well.

=================
Why Open vSwitch?
=================

Hypervisors need the ability to bridge traffic between VMs and with the outside
world. On Linux-based hypervisors, this used to mean using the built-in L2
switch (the Linux bridge), which is fast and reliable. So, it is reasonable to
ask why Open vSwitch is used.

The answer is that Open vSwitch is targeted at multi-server virtualization
deployments, a landscape for which the previous stack is not well suited. These
environments are often characterized by highly dynamic end-points, the
maintenance of logical abstractions, and (sometimes) integration with or
offloading to special purpose switching hardware.

The following characteristics and design considerations help Open vSwitch cope
with the above requirements.

The mobility of state
---------------------

All network state associated with a network entity (say a virtual machine)
should be easily identifiable and migratable between different hosts. This may
include traditional "soft state" (such as an entry in an L2 learning table), L3
forwarding state, policy routing state, ACLs, QoS policy, monitoring
configuration (e.g. NetFlow, IPFIX, sFlow), etc.

Open vSwitch supports configuring and migrating both slow (configuration) and
fast network state between instances. For example, if a VM migrates between
end-hosts, it is possible to migrate not only the associated configuration
(SPAN rules, ACLs, QoS) but also any live network state (including, for
example, existing state which may be difficult to reconstruct). Further, Open
vSwitch state is typed and backed by a real data-model, allowing for the
development of structured automation systems.

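As a small, informal illustration of that data-model (not taken from the Open
vSwitch sources; it assumes the database server is listening on its usual Unix
socket, /var/run/openvswitch/db.sock), the following Python sketch fetches the
Open_vSwitch schema over the JSON-RPC protocol defined in RFC 7047 and prints
the declared type of every column in the Bridge table::

    import json
    import socket

    # Connect to the local OVSDB server over its Unix domain socket. The
    # path below is the common default; adjust it for your installation.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/run/openvswitch/db.sock")

    # "get_schema" is a standard OVSDB method (RFC 7047).
    request = {"method": "get_schema", "params": ["Open_vSwitch"], "id": 0}
    sock.sendall(json.dumps(request).encode())

    # Read until the full JSON reply has arrived.
    data = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        data += chunk
        try:
            reply = json.loads(data.decode())
            break
        except ValueError:
            continue

    # Every table and column is declared with an explicit type.
    for name, column in reply["result"]["tables"]["Bridge"]["columns"].items():
        print(name, "->", json.dumps(column["type"]))
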
Responding to network dynamics
------------------------------

Virtual environments are often characterized by high rates of change: VMs
coming and going, VMs moving backwards and forwards in time, changes to the
logical network environment, and so forth.

Open vSwitch supports a number of features that allow a network control system
to respond and adapt as the environment changes. This includes simple
accounting and visibility support such as NetFlow, IPFIX, and sFlow. But
perhaps more usefully, Open vSwitch supports a network state database (OVSDB)
with remote triggers. Therefore, a piece of orchestration software can "watch"
various aspects of the network and respond if/when they change. This is used
heavily today, for example, to respond to and track VM migrations.

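The trigger mechanism is visible in the protocol itself: an OVSDB "monitor"
request (RFC 7047) returns the current contents of the requested columns and
then delivers an asynchronous update notification whenever they change. The
sketch below is again only illustrative, assuming the default database socket
location; it watches the name column of the Bridge table and prints each
notification as it arrives::

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/run/openvswitch/db.sock")

    # Ask the server to send the current Bridge names, followed by an
    # asynchronous "update" notification every time a bridge is added,
    # removed, or renamed.
    request = {
        "method": "monitor",
        "params": ["Open_vSwitch", "bridge-watch",
                   {"Bridge": {"columns": ["name"]}}],
        "id": 0,
    }
    sock.sendall(json.dumps(request).encode())

    # Use a streaming JSON decoder to split the byte stream into
    # individual JSON-RPC messages.
    decoder = json.JSONDecoder()
    buffer = ""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        buffer += chunk.decode()
        while buffer:
            try:
                message, offset = decoder.raw_decode(buffer)
            except ValueError:
                break  # wait for the rest of the message
            buffer = buffer[offset:].lstrip()
            if message.get("method") == "echo":
                # OVSDB servers probe idle connections; answer to stay alive.
                reply = {"result": message["params"], "error": None,
                         "id": message["id"]}
                sock.sendall(json.dumps(reply).encode())
            else:
                print(json.dumps(message, indent=2))
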
Open vSwitch also supports OpenFlow as a method of exporting remote access to
control traffic. There are a number of uses for this including global network
discovery through inspection of discovery or link-state traffic (e.g. LLDP,
CDP, OSPF, etc.).

Maintenance of logical tags
---------------------------

Distributed virtual switches (such as VMware vDS and Cisco's Nexus 1000V) often
maintain logical context within the network through appending or manipulating
tags in network packets. This can be used to uniquely identify a VM (in a
manner resistant to hardware spoofing), or to hold some other context that is
only relevant in the logical domain. Much of the problem of building a
distributed virtual switch is to efficiently and correctly manage these tags.

Open vSwitch includes multiple methods for specifying and maintaining tagging
rules, all of which are accessible to a remote process for orchestration.
Further, in many cases these tagging rules are stored in an optimized form so
they don't have to be coupled with a heavyweight network device. This allows,
for example, thousands of tagging or address remapping rules to be configured,
changed, and migrated.

In a similar vein, Open vSwitch supports a GRE implementation that can handle
thousands of simultaneous GRE tunnels and supports remote configuration for
tunnel creation, configuration, and tear-down. This, for example, can be used
to connect private VM networks in different data centers.

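Tunnel management is one place where this remote configurability is concrete.
The sketch below is illustrative rather than canonical: it assumes an existing
bridge named br0, the default database socket location, and a placeholder
remote endpoint of 192.0.2.10. It issues a single OVSDB "transact" request
(RFC 7047) that creates a GRE interface and port and attaches them to the
bridge::

    import json
    import socket

    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/var/run/openvswitch/db.sock")

    # One atomic transaction: create the tunnel Interface and Port rows and
    # attach the new Port to the existing bridge "br0" (assumed to exist).
    request = {
        "method": "transact",
        "params": [
            "Open_vSwitch",
            {"op": "insert", "table": "Interface", "uuid-name": "iface0",
             "row": {"name": "gre0", "type": "gre",
                     "options": ["map", [["remote_ip", "192.0.2.10"]]]}},
            {"op": "insert", "table": "Port", "uuid-name": "port0",
             "row": {"name": "gre0",
                     "interfaces": ["named-uuid", "iface0"]}},
            {"op": "mutate", "table": "Bridge",
             "where": [["name", "==", "br0"]],
             "mutations": [["ports", "insert",
                            ["set", [["named-uuid", "port0"]]]]]},
        ],
        "id": 0,
    }
    sock.sendall(json.dumps(request).encode())
    print(sock.recv(4096).decode())

The usual command-line equivalent is ``ovs-vsctl add-port br0 gre0 -- set
interface gre0 type=gre options:remote_ip=192.0.2.10``; the point is that
tunnel creation and tear-down are ordinary database transactions that a remote
orchestration process can issue and later reverse.
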
Hardware integration
--------------------

Open vSwitch's forwarding path (the in-kernel datapath) is designed to be
amenable to "offloading" packet processing to hardware chipsets, whether housed
in a classic hardware switch chassis or in an end-host NIC. This allows the
Open vSwitch control path to control either a pure software implementation or a
hardware switch.

There are many ongoing efforts to port Open vSwitch to hardware chipsets. These
include multiple merchant silicon chipsets (Broadcom and Marvell), as well as a
number of vendor-specific platforms. (The PORTING file discusses how one would
go about making such a port.)

The advantage of hardware integration is not only performance within
virtualized environments. If physical switches also expose the Open vSwitch
control abstractions, both bare-metal and virtualized hosting environments can
be managed using the same mechanism for automated network control.

Summary
-------

In many ways, Open vSwitch targets a different point in the design space than
previous hypervisor networking stacks, focusing on the need for automated and
dynamic network control in large-scale Linux-based virtualization environments.

The goal with Open vSwitch is to keep the in-kernel code as small as possible
(as is necessary for performance) and to re-use existing subsystems when
applicable (for example Open vSwitch uses the existing QoS stack). As of Linux
3.3, Open vSwitch is included as a part of the kernel, and packaging for the
userspace utilities is available on most popular distributions.