Network Configuration
---------------------
include::attributes.txt[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.
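
For example, after manually editing the file you could apply the new
settings to a single interface with `ifdown` and `ifup` (the interface
name `eth0` here is just a placeholder):

----
# take the interface down and bring it back up with the new settings
ifdown eth0
ifup eth0
----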

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.
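
If you want to review the staged changes before rebooting, one way is
to simply compare the two files, for example:

----
# show what will change on the next reboot (if a staged file exists)
diff -u /etc/network/interfaces /etc/network/interfaces.new
----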

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`); see the example below.

This makes it easier to debug network problems, because the device
names imply the device type.
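
As a sketch of this naming scheme (the VLAN number and addresses are
made up for illustration, and VLAN support must be available on the
host, e.g. via the `vlan` package), the following snippet would create
VLAN 50 on top of `eth0` and use it as the port of an additional
bridge:

----
auto eth0.50
iface eth0.50 inet manual

auto vmbr50
iface vmbr50 inet static
        address 10.50.50.1
        netmask 255.255.255.0
        bridge_ports eth0.50
        bridge_stp off
        bridge_fd 0
----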

Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.
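
You can inspect which ports are currently attached to a bridge with
the standard Linux bridge tools, for example:

----
# list the bridge and its ports; guest interfaces show up here
# as additional ports (tap devices for VMs)
brctl show vmbr0
----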


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----
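
In such a routed setup, each guest would typically get an address from
the `10.10.10.0/24` block and use the bridge address as its
gateway. A minimal sketch of the corresponding configuration inside a
Debian-based guest (the address is just an example):

----
auto eth0
iface eth0 inet static
        address 10.10.10.10
        netmask 255.255.255.0
        gateway 10.10.10.1
----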


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----
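
Once the interfaces are up, you can quickly verify that forwarding and
the masquerading rule are in place, for example:

----
# should print 1 when IP forwarding is enabled
cat /proc/sys/net/ipv4/ip_forward

# list the active NAT rules; the MASQUERADE entry should show up here
iptables -t nat -L POSTROUTING -n -v
----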


Linux Bond
~~~~~~~~~~

Bonding is a technique for binding multiple NICs to a single network
device. It can be used to achieve different goals, like making the
network fault-tolerant, increasing the performance, or both
together.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and does not require
any special network switch support. The receive load balancing is
achieved by ARP negotiation. The bonding driver intercepts the ARP
Replies sent by the local system on their way out and overwrites the
source hardware address with the unique hardware address of one of
the NIC slaves in the single logical bonded interface such that
different network-peers use different MAC addresses for their network
packet traffic.

For most setups, active-backup is the best choice. If your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.
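
If you choose active-backup, the bond stanza only differs in the mode
(and, optionally, a preferred primary slave); a minimal sketch, with
interface names picked for illustration:

----
auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode active-backup
        # optionally prefer eth1 whenever it is available
        bond_primary eth1
----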

The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----
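
After bringing the bond up, the kernel's bonding driver reports the
current state (mode, aggregator information and per-slave link
status), which is handy for verifying the configuration:

----
# show mode, LACP/aggregator info and per-slave link state for bond0
cat /proc/net/bonding/bond0
----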


Another possibility is to use the bond directly as the bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////