How to Use Open vSwitch with Docker
====================================

This document describes how to use Open vSwitch with Docker 1.9.0 or
later. This document assumes that you installed Open vSwitch by following
[INSTALL.md] or by using the distribution packages such as .deb or .rpm.
Consult www.docker.com for instructions on how to install Docker.

Docker 1.9.0 comes with support for multi-host networking. Integration
of Docker networking and Open vSwitch can be achieved via the Open vSwitch
virtual network (OVN).


Setup
=====

For multi-host networking with OVN and Docker, Docker has to be started
with a distributed key-value store. For example, if you decide to use Consul
as your distributed key-value store, and your host IP address is $HOST_IP,
start your Docker daemon with:

```
docker daemon --cluster-store=consul://127.0.0.1:8500 \
--cluster-advertise=$HOST_IP:0
```

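If you want to confirm that the daemon picked up these settings, `docker info`
on Docker versions of this era reports the configured cluster store and
advertise address. An illustrative check (output format varies by version):

```
# Illustrative check: look for the "Cluster store" / "Cluster advertise" lines
docker info | grep -i cluster
```
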
OVN provides network virtualization to containers. OVN's integration with
Docker currently works in two modes: the "underlay" mode and the "overlay"
mode.

In the "underlay" mode, OVN requires an OpenStack setup to provide container
networking. In this mode, one can create logical networks and can have
containers running inside VMs, standalone VMs (without having any containers
running inside them), and physical machines connected to the same logical
network. This is a multi-tenant, multi-host solution.

In the "overlay" mode, OVN can create a logical network amongst containers
running on multiple hosts. This is a single-tenant (extendable to
multi-tenants depending on the security characteristics of the workloads),
multi-host solution. In this mode, you do not need a pre-created OpenStack
setup.

For both modes to work, a user has to install and start Open vSwitch on
each VM/host where they plan to run their containers.

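For reference, a packaged or `make install`-based Open vSwitch installation
can typically be started with its control script as sketched below; the script
path is the common default and may differ on your system, and distributions
with systemd often use a service unit (e.g. `openvswitch-switch`) instead.

```
# Start ovsdb-server and ovs-vswitchd using the default install path (adjust as needed)
/usr/share/openvswitch/scripts/ovs-ctl start
```
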
The "overlay" mode
==================

OVN in "overlay" mode needs a minimum Open vSwitch version of 2.5.

* Start the central components.

The OVN architecture has a central component which stores your networking
intent in a database. On one of your machines, with an IP address of
$CENTRAL_IP, where you have installed and started Open vSwitch, you will need
to start some central components.

Begin by making ovsdb-server listen on a TCP port by running:

```
ovs-appctl -t ovsdb-server ovsdb-server/add-remote ptcp:6640
```

Start the ovn-northd daemon. This daemon translates networking intent from
Docker stored in the OVN_Northbound database to logical flows in the
OVN_Southbound database.

```
/usr/share/openvswitch/scripts/ovn-ctl start_northd
```

* One-time setup.

On each host where you plan to spawn your containers, you will need to
run the following command once. (You need to run it again if your OVS database
gets cleared. It is harmless to run it again in any case.)

$LOCAL_IP in the command below is the IP address via which other hosts
can reach this host. This acts as your local tunnel endpoint.

$ENCAP_TYPE is the type of tunnel that you would like to use for overlay
networking. The options are "geneve" or "stt". (Please note that your
kernel should have support for your chosen $ENCAP_TYPE. Both geneve
and stt are part of the Open vSwitch kernel module that is compiled from this
repo. If you use the Open vSwitch kernel module from upstream Linux,
you will need a minimum kernel version of 3.18 for geneve. There is no stt
support in upstream Linux. You can verify whether you have the support in your
kernel by running "lsmod | grep $ENCAP_TYPE".)

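For instance, an illustrative check for geneve support on the local kernel
(no output means the module is not currently loaded):

```
# Should print a line for the geneve module if it is loaded in the kernel
lsmod | grep geneve
```
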
```
ovs-vsctl set Open_vSwitch . external_ids:ovn-remote="tcp:$CENTRAL_IP:6640" \
  external_ids:ovn-encap-ip=$LOCAL_IP external_ids:ovn-encap-type="$ENCAP_TYPE"
```

And finally, start the ovn-controller. (You need to run the command below
on every boot.)

```
/usr/share/openvswitch/scripts/ovn-ctl start_controller
```

* Start the Open vSwitch network driver.

By default, Docker uses the Linux bridge for networking, but it has support
for external drivers. To use Open vSwitch instead of the Linux bridge,
you will need to start the Open vSwitch driver.

The Open vSwitch driver uses Python's flask module to listen to
Docker's networking API calls. So, if your host does not have Python's
flask module, install it with:

```
easy_install -U pip
pip install Flask
```

Start the Open vSwitch driver on every host where you plan to create your
containers.

```
ovn-docker-overlay-driver --detach
```

Docker has built-in primitives that closely match OVN's logical switch
and logical port concepts. Please consult Docker's documentation for
all the possible commands. Here are some examples.

* Create your logical switch.

To create a logical switch with the name 'foo' on subnet '192.168.1.0/24', run:

```
NID=`docker network create -d openvswitch --subnet=192.168.1.0/24 foo`
```

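To double-check the result, `docker network inspect` shows the network's
driver and subnet; for example (output abridged and will vary):

```
# Inspect the newly created network; the name 'foo' or $NID both work
docker network inspect foo
```
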
* List your logical switches.

```
docker network ls
```

You can also look at this logical switch in OVN's northbound database by
running the following command:

```
ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lswitch-list
```

* Docker creates your logical port and attaches it to the logical network
in a single step.

For example, to attach a logical port to network 'foo' inside the container
'busybox', run:

```
docker run -itd --net=foo --name=busybox busybox
```

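If you want to verify the attachment from inside the container, something
like the following works with the stock busybox image (an illustrative check;
interface naming and output vary):

```
# Show the address the container received on the 'foo' network
docker exec busybox ip addr
```
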
* List all your logical ports.

Docker currently does not have a CLI command to list all your logical ports,
but you can look at them in the OVN database by running:

```
ovn-nbctl --db=tcp:$CENTRAL_IP:6640 lport-list $NID
```

* You can also create a logical port and attach it to a running container.

```
docker network create -d openvswitch --subnet=192.168.2.0/24 bar
docker network connect bar busybox
```

You can delete your logical port and detach it from a running container by
running:

```
docker network disconnect bar busybox
```

* You can delete your logical switch by running:

```
docker network rm bar
```


The "underlay" mode
===================

This mode requires that you have an OpenStack setup pre-installed with OVN
providing the underlay networking.

* One-time setup.

An OpenStack tenant creates a VM with a single network interface (or multiple)
that belongs to management logical networks. The tenant needs to fetch the
port-id associated with the interface via which they plan to send the container
traffic inside the spawned VM. This can be obtained by running the
command below to fetch the 'id' associated with the VM:

```
nova list
```

and then by running:

```
neutron port-list --device_id=$id
```

Inside the VM, download the OpenStack RC file that contains the tenant
information (henceforth referred to as 'openrc.sh'). Edit the file and add the
previously obtained port-id information to the file by appending the following
line: export OS_VIF_ID=$port_id. After this edit, the file will look something
like:

```
#!/bin/bash
export OS_AUTH_URL=http://10.33.75.122:5000/v2.0
export OS_TENANT_ID=fab106b215d943c3bad519492278443d
export OS_TENANT_NAME="demo"
export OS_USERNAME="demo"
export OS_VIF_ID=e798c371-85f4-4f2d-ad65-d09dd1d3c1c9
```

* Create the Open vSwitch bridge.

If your VM has one Ethernet interface (e.g., 'eth0'), you will need to add
that device as a port to an Open vSwitch bridge 'breth0' and move its IP
address and route-related information to that bridge. (If it has multiple
network interfaces, you will need to create and attach an Open vSwitch bridge
for the interface via which you plan to send your container traffic.)

If you use DHCP to obtain an IP address, then you should kill the DHCP client
that was listening on the physical Ethernet interface (e.g. eth0) and start
one listening on the Open vSwitch bridge (e.g. breth0).

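A minimal sketch of this bridge setup for a statically addressed 'eth0'
follows. The addresses are placeholders, and the exact steps (in particular
route restoration and DHCP handling) depend on your distribution:

```
# Assumed example values: eth0 currently has 10.0.0.10/24, default gateway 10.0.0.1
ovs-vsctl add-br breth0
ovs-vsctl add-port breth0 eth0
ip addr flush dev eth0                  # remove the address from the physical NIC
ip addr add 10.0.0.10/24 dev breth0     # move it to the bridge
ip link set breth0 up
ip route add default via 10.0.0.1       # restore the default route
# If eth0 was configured via DHCP instead, stop the DHCP client on eth0 and
# run one on breth0, e.g.: dhclient breth0
```
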
Depending on your VM, you can make the above step persistent across reboots.
For example, if your VM is Debian/Ubuntu, you can read
[openvswitch-switch.README.Debian]. If your VM is RHEL based, you can read
[README.RHEL].


* Start the Open vSwitch network driver.

The Open vSwitch driver uses Python's flask module to listen to
Docker's networking API calls. The driver also uses OpenStack's
python-neutronclient libraries. So, if your host does not have Python's
flask module or python-neutronclient, install them with:

```
easy_install -U pip
pip install python-neutronclient
pip install Flask
```

Source the openrc file, e.g.:

```
. ./openrc.sh
```

Start the network driver and provide your OpenStack tenant password
when prompted.

```
ovn-docker-underlay-driver --bridge breth0 --detach
```

From here on, you can use the same Docker commands as described in the
section 'The "overlay" mode'.

Please read 'man ovn-architecture' to understand OVN's architecture in
detail.

[INSTALL.md]: INSTALL.md
[openvswitch-switch.README.Debian]: debian/openvswitch-switch.README.Debian
[README.RHEL]: rhel/README.RHEL