How to Use Open Virtual Networking With Docker
==============================================

This document describes how to use Open Virtual Networking with Docker
1.9.0 or later. It assumes that you have installed Open vSwitch by
following [INSTALL.md] or by using distribution packages such as .deb or
.rpm. Consult www.docker.com for instructions on how to install Docker.
Docker 1.9.0 comes with support for multi-host networking.

Setup
=====

For multi-host networking with OVN and Docker, Docker has to be started
with a distributed key-value store. For example, if you decide to use
consul as your distributed key-value store and your host IP address is
$HOST_IP, start your Docker daemon with:

```
docker daemon --cluster-store=consul://127.0.0.1:8500 \
--cluster-advertise=$HOST_IP:0
```

OVN provides network virtualization to containers. OVN's integration with
Docker currently works in two modes: "underlay" mode and "overlay" mode.

In "underlay" mode, OVN requires an OpenStack setup to provide container
networking. In this mode, one can create logical networks and can have
containers running inside VMs, standalone VMs (without any containers
running inside them), and physical machines connected to the same logical
network. This is a multi-tenant, multi-host solution.

In "overlay" mode, OVN can create a logical network amongst containers
running on multiple hosts. This is a single-tenant (extendable to
multi-tenant depending on the security characteristics of the workloads),
multi-host solution. In this mode, you do not need a pre-created OpenStack
setup.

For both modes to work, a user has to install and start Open vSwitch on
each VM/host where they plan to run their containers.


The "overlay" mode
==================

OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.

* Start the central components.

OVN's architecture has a central component which stores your networking
intent in a database. On one of your machines, with an IP address of
$CENTRAL_IP, where you have installed and started Open vSwitch, you will
need to start some central components.

Start the ovn-northd daemon. This daemon translates networking intent from
Docker stored in the OVN_Northbound database to logical flows in the
OVN_Southbound database.

```
/usr/share/openvswitch/scripts/ovn-ctl start_northd
```

* One time setup.

On each host where you plan to spawn your containers, you will need to
run the following command once. (You need to run it again if your OVS
database gets cleared. It is harmless to run it again in any case.)

$LOCAL_IP in the below command is the IP address via which other hosts
can reach this host. This acts as your local tunnel endpoint.

$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
networking. The options are "geneve" or "stt". (Please note that your
kernel should have support for your chosen $ENCAP_TYPE. Both geneve and
stt are part of the Open vSwitch kernel module that is compiled from this
repo. If you use the Open vSwitch kernel module from upstream Linux, you
will need a minimum kernel version of 3.18 for geneve. There is no stt
support in upstream Linux. You can verify whether you have the support in
your kernel by running "lsmod | grep $ENCAP_TYPE".)
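
The "lsmod" check above can be wrapped in a small helper; the sketch
below is an illustration (the `has_encap_support` function name and the
sample module listing are assumptions, not part of OVS):

```shell
#!/bin/sh
# Sketch: check a "lsmod"-style listing for a tunnel module by name.
has_encap_support() {
    # $1: encap type ("geneve" or "stt"), $2: module listing text
    printf '%s\n' "$2" | grep -q "^$1 "
}

# Placeholder listing; on a live host you would pass "$(lsmod)" instead.
modules="geneve 16384 0
openvswitch 147456 4"

if has_encap_support geneve "$modules"; then
    echo "geneve supported"
fi
```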

```
ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6642" \
  external_ids:ovn-nb="tcp:$CENTRAL_IP:6641" \
  external_ids:ovn-encap-ip=$LOCAL_IP \
  external_ids:ovn-encap-type="$ENCAP_TYPE"
```

And finally, start the ovn-controller. (You need to run the below command
on every boot.)

```
/usr/share/openvswitch/scripts/ovn-ctl start_controller
```

* Start the Open vSwitch network driver.

By default, Docker uses the Linux bridge for networking, but it has
support for external drivers. To use Open vSwitch instead of the Linux
bridge, you will need to start the Open vSwitch driver.

The Open vSwitch driver uses Python's Flask module to listen to Docker's
networking API calls. So, if your host does not have Python's Flask
module, install it with:

```
easy_install -U pip
pip install Flask
```

Start the Open vSwitch driver on every host where you plan to create your
containers. (Please read the note on $OVS_PYTHON_LIBS_PATH at the end of
this document.)

```
PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-overlay-driver --detach
```

Docker has inbuilt primitives that closely match OVN's logical switch
and logical port concepts. Please consult Docker's documentation for
all the possible commands. Here are some examples.

* Create your logical switch.

To create a logical switch named 'foo' on subnet '192.168.1.0/24', run:

```
NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
```

* List your logical switches.

```
docker network ls
```

You can also look at this logical switch in OVN's northbound database by
running the following command.

```
ovn-nbctl --db=tcp:$CENTRAL_IP:6640 ls-list
```

* Docker creates your logical port and attaches it to the logical network
in a single step.

For example, to attach a logical port to network 'foo' inside a container
named 'busybox', run:

```
docker run -itd --net=foo --name=busybox busybox
```
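
As a quick sanity check, you can start a second container on the same
network and ping it from the first. This is a sketch, not part of the
driver: the container name 'busybox2' is an illustrative assumption, and
the inspect template may need adjusting for your Docker version.

```shell
# Start a second container on 'foo' (name is a placeholder).
docker run -itd --net=foo --name=busybox2 busybox

# Look up its IP address on the attached network ...
IP=$(docker inspect -f \
    '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' busybox2)

# ... and ping it across the OVN logical switch from the first container.
docker exec busybox ping -c 3 "$IP"
```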

* List all your logical ports.

Docker currently does not have a CLI command to list all your logical
ports, but you can look at them in the OVN database by running:

```
ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lsp-list $NID
```

* You can also create a logical port and attach it to a running container.

```
docker network create -d openvswitch --subnet=192.168.2.0/24 bar
docker network connect bar busybox
```

You can delete your logical port and detach it from a running container by
running:

```
docker network disconnect bar busybox
```

* You can delete your logical switch by running:

```
docker network rm bar
```


The "underlay" mode
===================

This mode requires that you have an OpenStack setup pre-installed with
OVN providing the underlay networking.

* One time setup.

An OpenStack tenant creates a VM with a single network interface (or
multiple) that belongs to management logical networks. The tenant needs to
fetch the port-id associated with the interface via which they plan to
send the container traffic inside the spawned VM. This can be obtained by
running the below command to fetch the 'id' associated with the VM:

```
nova list
```

and then by running:

```
neutron port-list --device_id=$id
```

Inside the VM, download the OpenStack RC file that contains the tenant
information (henceforth referred to as 'openrc.sh'). Edit the file and add
the previously obtained port-id information to the file by appending the
following line: export OS_VIF_ID=$port_id. After this edit, the file will
look something like:

```
#!/bin/bash
export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
export OS_TENANT_ID=fab106b215d943c3bad519492278443d
export OS_TENANT_NAME="demo"
export OS_USERNAME="demo"
export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
```

* Create the Open vSwitch bridge.

If your VM has one ethernet interface (e.g. 'eth0'), you will need to add
that device as a port to an Open vSwitch bridge 'breth0' and move its IP
address and route related information to that bridge. (If it has multiple
network interfaces, you will need to create and attach an Open vSwitch
bridge for the interface via which you plan to send your container
traffic.)
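
The steps above can be sketched roughly as follows. The address and
gateway shown are placeholders; substitute the values currently assigned
to your interface. Note that if your shell session itself runs over eth0
(e.g. via SSH), these commands will interrupt it, so run them from a
console or as one atomic script.

```shell
# Sketch: create breth0, enslave eth0, and move eth0's address and
# default route to the bridge. 10.0.0.5/24 and 10.0.0.1 are placeholders.
ovs-vsctl add-br breth0
ovs-vsctl add-port breth0 eth0
ip addr flush dev eth0
ip addr add 10.0.0.5/24 dev breth0
ip link set breth0 up
ip route add default via 10.0.0.1 dev breth0
```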

If you use DHCP to obtain an IP address, then you should kill the DHCP
client that was listening on the physical Ethernet interface (e.g. eth0)
and start one listening on the Open vSwitch bridge (e.g. breth0).

Depending on your VM, you can make the above step persistent across
reboots. For example, if your VM is Debian/Ubuntu, you can read
[openvswitch-switch.README.Debian]. If your VM is RHEL based, you can read
[README.RHEL].


* Start the Open vSwitch network driver.

The Open vSwitch driver uses Python's Flask module to listen to Docker's
networking API calls. The driver also uses OpenStack's
python-neutronclient libraries. So, if your host does not have Python's
Flask module or python-neutronclient, install them with:

```
easy_install -U pip
pip install python-neutronclient
pip install Flask
```

Source the openrc file, e.g.:

```
. ./openrc.sh
```

Start the network driver and provide your OpenStack tenant password
when prompted. (Please read the note on $OVS_PYTHON_LIBS_PATH at the end
of this document.)

```
PYTHONPATH=$OVS_PYTHON_LIBS_PATH ovn-docker-underlay-driver --bridge breth0 \
--detach
```

From here on, you can use the same Docker commands as described in the
section 'The "overlay" mode'.

Please read 'man ovn-architecture' to understand OVN's architecture in
detail.

Note on $OVS_PYTHON_LIBS_PATH
=============================

$OVS_PYTHON_LIBS_PATH should point to the directory where the Open
vSwitch Python modules are installed. If you installed the Open vSwitch
Python modules via the Debian package 'python-openvswitch' or via pip by
running 'pip install ovs', you do not need to specify the path. If you
installed them by following the instructions in [INSTALL.md], you should
specify the path. The path in that case depends on the options passed to
./configure. (It is usually either '/usr/share/openvswitch/python' or
'/usr/local/share/openvswitch/python'.)
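
For example, for a source install with default ./configure options, the
overlay driver could be started as shown below (the path is an
assumption; adjust it to match your build):

```shell
# Assumes the default install prefix; check your ./configure options.
PYTHONPATH=/usr/local/share/openvswitch/python \
    ovn-docker-overlay-driver --detach
```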

[INSTALL.md]: INSTALL.md
[openvswitch-switch.README.Debian]: debian/openvswitch-switch.README.Debian
[README.RHEL]: rhel/README.RHEL