Why Open vSwitch?
=================

We love the existing network stack in Linux. It is robust, flexible,
and feature-rich. Linux already contains an in-kernel L2 switch (the
Linux bridge) which can be used by VMs for inter-VM communication. So,
it is reasonable to ask why there is a need for a new network switch.

The answer is that Open vSwitch is targeted at multi-server
virtualization deployments, a landscape for which the existing stack is
not well suited. These environments are often characterized by highly
dynamic end-points, the maintenance of logical abstractions, and
(sometimes) integration with or offloading to special-purpose switching
hardware.

The following characteristics and design considerations help Open
vSwitch cope with the above requirements.

* The mobility of state: All network state associated with a network
  entity (say a virtual machine) should be easily identifiable and
  migratable between different hosts. This may include traditional
  "soft state" (such as an entry in an L2 learning table), L3 forwarding
  state, policy routing state, ACLs, QoS policy, monitoring
  configuration (e.g. NetFlow, sFlow), etc.

  Open vSwitch supports configuring and migrating both slow
  (configuration) and fast network state between instances. For
  example, if a VM migrates between end-hosts, it is possible to
  migrate not only the associated configuration (SPAN rules, ACLs, QoS)
  but also any live network state (including, for example, existing
  state which may be difficult to reconstruct). Further, Open vSwitch
  state is typed and backed by a real data model, allowing for the
  development of structured automation systems.

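  For example, a minimal sketch of inspecting and re-applying this
  typed state with the stock ovs-vsctl tool (the port name "vif1.0"
  and the key/value written below are hypothetical):

    # Dump the typed database record for a VM's port, including
    # counters and any attached configuration.
    ovs-vsctl list interface vif1.0

    # Re-apply a piece of that configuration on the destination
    # host after a migration.
    ovs-vsctl set interface vif1.0 external-ids:vm-id=placeholder
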
* Responding to network dynamics: Virtual environments are often
  characterized by high rates of change: VMs coming and going, VMs
  moving backwards and forwards in time, changes to the logical network
  environment, and so forth.

  Open vSwitch supports a number of features that allow a network
  control system to respond and adapt as the environment changes. This
  includes simple accounting and visibility support such as NetFlow and
  sFlow. Perhaps more usefully, Open vSwitch supports a network state
  database (OVSDB) with support for remote triggers, so a piece of
  orchestration software can "watch" various aspects of the network and
  respond if/when they change. This is used heavily today, for
  example, to respond to and track VM migrations.

  Open vSwitch also supports OpenFlow as a method of exporting remote
  access to control traffic. There are a number of uses for this,
  including global network discovery through inspection of discovery
  or link-state traffic (e.g. LLDP, CDP, OSPF). A brief sketch of
  these mechanisms follows.

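  In the sketch below, the bridge name, collector address, and
  controller address are placeholders:

    # Get a notification whenever a port record changes, e.g.
    # because a VM arrived or departed.
    ovsdb-client monitor Open_vSwitch Port name

    # Attach a NetFlow configuration to a bridge.
    ovs-vsctl -- set bridge br0 netflow=@nf \
              -- --id=@nf create NetFlow targets='"192.0.2.1:5566"'

    # Hand control traffic on a bridge to a remote OpenFlow
    # controller.
    ovs-vsctl set-controller br0 tcp:192.0.2.1:6633
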
* Maintenance of logical tags: Distributed virtual switches (such as
  VMware vDS and Cisco's Nexus 1000V) often maintain logical context
  within the network by appending or manipulating tags in network
  packets. This can be used to uniquely identify a VM (in a manner
  resistant to hardware spoofing), or to hold some other context that
  is only relevant in the logical domain. Much of the problem of
  building a distributed virtual switch is efficiently and correctly
  managing these tags.

  Open vSwitch includes multiple methods for specifying and maintaining
  tagging rules, all of which are accessible to a remote process for
  orchestration. Further, in many cases these tagging rules are stored
  in an optimized form so they don't have to be coupled with a
  heavyweight network device. This allows, for example, thousands of
  tagging or address remapping rules to be configured, changed, and
  migrated.

  In a similar vein, Open vSwitch supports a GRE implementation that
  can handle thousands of simultaneous GRE tunnels and supports remote
  management of tunnel creation, configuration, and tear-down. This,
  for example, can be used to connect private VM networks in different
  data centers, as sketched below.

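  Here the port, bridge, and remote address are placeholders:

    # Tag all traffic from a VM's port with VLAN 101.
    ovs-vsctl set port vif1.0 tag=101

    # Build a GRE tunnel to a peer host in another data center.
    ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre \
        options:remote_ip=192.0.2.20
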
* Hardware integration: Open vSwitch's forwarding path (the in-kernel
  datapath) is designed to be amenable to "offloading" packet
  processing to hardware chipsets, whether housed in a classic hardware
  switch chassis or in an end-host NIC. This allows the Open vSwitch
  control path to control either a pure software implementation or a
  hardware switch.

  There are many ongoing efforts to port Open vSwitch to hardware
  chipsets. These include multiple merchant silicon chipsets (Broadcom
  and Marvell), as well as a number of vendor-specific platforms.

  The advantage of hardware integration is not only performance within
  virtualized environments. If physical switches also expose the Open
  vSwitch control abstractions, both bare-metal and virtualized hosting
  environments can be managed using the same mechanism for automated
  network control.

In many ways, Open vSwitch targets a different point in the design space
than the existing Linux networking stack, focusing on the need for
automated and dynamic network control in large-scale Linux-based
virtualization environments.

The goal with Open vSwitch is to keep the in-kernel code as small as
possible (as is necessary for performance) and to reuse existing
subsystems when applicable (for example, Open vSwitch uses the existing
QoS stack). It also limits disruption by using existing hooks into the
kernel, so it can be deployed as a loadable module without requiring
any modification to the kernel.
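
For illustration, deploying it on a typical host can be sketched as
follows (the module name "openvswitch" matches the current upstream
kernel tree; older releases shipped the module under a different name):

  modprobe openvswitch
  ovs-vsctl add-br br0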