[[sysadmin_network_configuration]]
Network Configuration
---------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if
virtual network cables from each guest were all plugged into the same
switch. But you can also create multiple bridges to separate network
domains.

For connecting VMs to the outside world, bridges are attached to
physical network cards. For further flexibility, you can configure
VLANs (IEEE 802.1q) and network bonding, also known as "link
aggregation". That way it is possible to build complex and flexible
virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to
configure the network. The file `/etc/network/interfaces` contains the
whole network setup. Please refer to the manual page (`man interfaces`)
for a complete format description.

NOTE: {pve} does not write changes directly to
`/etc/network/interfaces`. Instead, we write into a temporary file
called `/etc/network/interfaces.new`, and commit those changes when
you reboot the node.

It is worth mentioning that you can directly edit the configuration
file. All {pve} tools try hard to preserve such direct user
modifications. Using the GUI is still preferable, because it
protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name,
  separated by a period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device
name implies the device type.

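For example, tagged guest traffic on VLAN 50 could get its own bridge
with a configuration like the following sketch in
`/etc/network/interfaces`. The interface names follow the conventions
above; the addresses are purely illustrative:

----
auto eth0.50
iface eth0.50 inet manual

auto vmbr1
iface vmbr1 inet static
        address 192.168.50.2
        netmask 255.255.255.0
        bridge_ports eth0.50
        bridge_stp off
        bridge_fd 0
----

Guests attached to `vmbr1` then see only traffic tagged with VLAN 50.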
Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which
is connected to the first ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the
physical network. The network, in turn, sees each virtual machine as
having its own MAC, even though there is only one network cable
connecting all of these VMs to the network.


Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to
configure because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2`
for this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp


auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----


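Inside a guest attached to `vmbr0` in this routed setup, the matching
static configuration would point at the host as gateway. This is a
hypothetical sketch for a Debian-style guest; the guest address is
illustrative:

----
auto eth0
iface eth0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
----

The host then forwards the guest's traffic out of `eth0` via proxy
ARP, so only the host's MAC is visible to the provider's network.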
Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox
host's true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----


Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique
for binding multiple NICs to a single network device. It can be used
to achieve different goals, like making the network fault-tolerant,
increasing performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs
can appear as one logical interface, resulting in double speed. This
is a native Linux kernel feature that is supported by most
switches. If your nodes have multiple Ethernet ports, you can
distribute your points of failure by running network cables to
different switches, and the bonded connection will fail over to one
cable or the other in case of network trouble.

Aggregated links can reduce live-migration delays and improve the
speed of data replication between Proxmox VE cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential
order from the first available network interface (NIC) slave through
the last. This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active
slave fails. The single logical bonded interface's MAC address is
externally visible on only one NIC (port) to avoid distortion in the
network switch. This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave
count]. This selects the same NIC slave for each destination MAC
address. This mode provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave
network interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad, LACP):* Creates
aggregation groups that share the same speed and duplex
settings. Utilizes all slave network interfaces in the active
aggregator group according to the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding
driver mode that does not require any special network-switch
support. The outgoing network packet traffic is distributed according
to the current load (computed relative to the speed) on each network
interface slave. Incoming traffic is received by one currently
designated slave network interface. If this receiving slave fails,
another slave takes over the MAC address of the failed receiving
slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus receive
load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved
by ARP negotiation. The bonding driver intercepts the ARP Replies sent
by the local system on their way out and overwrites the source
hardware address with the unique hardware address of one of the NIC
slaves in the single logical bonded interface such that different
network-peers use different MAC addresses for their network packet
traffic.

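As a worked illustration of the balance-xor policy above, the
following shell sketch computes the slave index. The real layer2
policy XORs the full source and destination MACs; for two MACs that
differ only in the last octet this reduces to the arithmetic below.
The octet values and slave count are hypothetical:

```shell
# balance-xor slave selection, simplified sketch
src_last=0x52     # hypothetical last octet of the source MAC
dst_last=0x1e     # hypothetical last octet of the destination MAC
slave_count=2     # two NICs in the bond

# (source MAC XOR destination MAC) modulo slave count
slave_index=$(( (src_last ^ dst_last) % slave_count ))
echo "packets to this destination always leave via slave $slave_index"
```

Because the hash depends only on the MAC pair, all traffic to a given
destination stays on one slave, which preserves packet ordering.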
For most setups, active-backup is the best choice; if your switch
supports LACP ("IEEE 802.3ad"), that mode should be preferred.

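If your switch lacks LACP support, an active-backup bond needs no
switch cooperation at all. A minimal sketch, assuming two NICs `eth1`
and `eth2`; the address is illustrative:

----
auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode active-backup
----

Here `bond_miimon 100` makes the kernel check link state every 100 ms,
so a failed slave is detected and replaced quickly.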
The following bond configuration can be used as a distributed/shared
storage network. The benefit is that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----


Another possibility is to use the bond directly as a bridge port.
This can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////