[[sysadmin_network_configuration]]
Network Configuration
---------------------
include::attributes.txt[]
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

{pve} uses a bridged networking model. Each host can have up to 4094
bridges. Bridges are like physical network switches implemented in
software. All VMs can share a single bridge, as if virtual network cables
from each guest were all plugged into the same switch. But you can also
create multiple bridges to separate network domains.

For connecting VMs to the outside world, bridges are attached to physical
network cards. For further flexibility, you can configure VLANs
(IEEE 802.1q) and network bonding, also known as "link aggregation". That
way it is possible to build complex and flexible virtual networks.

Debian traditionally uses the `ifup` and `ifdown` commands to configure
the network. The file `/etc/network/interfaces` contains the whole network
setup. Please refer to the manual page (`man interfaces`) for a complete
format description.

NOTE: {pve} does not write changes directly to `/etc/network/interfaces`.
Instead, we write into a temporary file called
`/etc/network/interfaces.new`, and commit those changes when you reboot
the node.

It is worth mentioning that you can directly edit the configuration file.
All {pve} tools try hard to preserve such direct user modifications. Using
the GUI is still preferable, because it protects you from errors.


Naming Conventions
~~~~~~~~~~~~~~~~~~

We currently use the following naming conventions for device names:

* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)

* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)

* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)

* VLANs: Simply add the VLAN number to the device name, separated by a
period (`eth0.50`, `bond1.30`)

This makes it easier to debug network problems, because the device name
implies the device type.


Default Configuration using a Bridge
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The installation program creates a single bridge named `vmbr0`, which is
connected to the first Ethernet card `eth0`. The corresponding
configuration in `/etc/network/interfaces` looks like this:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----

Virtual machines behave as if they were directly connected to the physical
network. The network, in turn, sees each virtual machine as having its own
MAC, even though there is only one network cable connecting all of these
VMs to the network.
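The VLAN naming scheme described above can be combined with such a bridge.
The following is only an illustrative sketch, not something the installer
creates: it assumes the guests should live on VLAN 50 behind `eth0`, that
the Debian `vlan` package is installed so `eth0.50` can be created on
demand, and it reuses the placeholder addresses from the example above:

----
auto lo
iface lo inet loopback

iface eth0 inet manual

# tagged sub-interface for VLAN 50 (placeholder VLAN ID)
iface eth0.50 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports eth0.50
        bridge_stp off
        bridge_fd 0
----

With such a setup, only traffic tagged with VLAN 50 reaches the bridge;
untagged traffic on `eth0` is not visible to the guests.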
Routed Configuration
~~~~~~~~~~~~~~~~~~~~

Most hosting providers do not support the above setup. For security
reasons, they disable networking as soon as they detect multiple MAC
addresses on a single interface.

TIP: Some providers allow you to register additional MACs on their
management interface. This avoids the problem, but is clumsy to configure
because you need to register a MAC for each of your VMs.

You can avoid the problem by ``routing'' all traffic via a single
interface. This makes sure that all network packets use the same MAC
address.

A common scenario is that you have a public IP (assume `192.168.10.2` for
this example), and an additional IP block for your VMs
(`10.10.10.1/255.255.255.0`). We recommend the following setup for such
situations:

----
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0
----


Masquerading (NAT) with `iptables`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In some cases you may want to use private IPs behind your Proxmox host's
true IP, and masquerade the traffic using NAT:

----
auto lo
iface lo inet loopback

auto eth0
#real IP address
iface eth0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1

auto vmbr0
#private sub network
iface vmbr0 inet static
        address 10.10.10.1
        netmask 255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
----
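To check that masquerading is actually in effect once `vmbr0` comes up,
you can inspect the forwarding flag and the NAT table. This is only an
illustrative verification step, not part of the required configuration:

----
# should print 1 after the post-up hook has run
cat /proc/sys/net/ipv4/ip_forward

# the MASQUERADE rule for 10.10.10.0/24 should show up here
iptables -t nat -L POSTROUTING -n -v
----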
Linux Bond
~~~~~~~~~~

Bonding (also called NIC teaming or Link Aggregation) is a technique for
binding multiple NICs to a single network device. It can be used to
achieve different goals, like making the network fault-tolerant,
increasing the performance, or both together.

High-speed hardware like Fibre Channel and the associated switching
hardware can be quite expensive. By doing link aggregation, two NICs can
appear as one logical interface, resulting in double speed. This is a
native Linux kernel feature that is supported by most switches. If your
nodes have multiple Ethernet ports, you can distribute your points of
failure by running network cables to different switches, and the bonded
connection will fail over to one cable or the other in case of network
trouble.

Aggregated links can improve live-migration delays and improve the speed
of replication of data between Proxmox VE Cluster nodes.

There are 7 modes for bonding:

* *Round-robin (balance-rr):* Transmit network packets in sequential order
from the first available network interface (NIC) slave through the last.
This mode provides load balancing and fault tolerance.

* *Active-backup (active-backup):* Only one NIC slave in the bond is
active. A different slave becomes active if, and only if, the active slave
fails. The single logical bonded interface's MAC address is externally
visible on only one NIC (port) to avoid distortion in the network switch.
This mode provides fault tolerance.

* *XOR (balance-xor):* Transmit network packets based on [(source MAC
address XOR'd with destination MAC address) modulo NIC slave count]. This
selects the same NIC slave for each destination MAC address. This mode
provides load balancing and fault tolerance.

* *Broadcast (broadcast):* Transmit network packets on all slave network
interfaces. This mode provides fault tolerance.

* *IEEE 802.3ad Dynamic link aggregation (802.3ad)(LACP):* Creates
aggregation groups that share the same speed and duplex settings. Utilizes
all slave network interfaces in the active aggregator group according to
the 802.3ad specification.

* *Adaptive transmit load balancing (balance-tlb):* Linux bonding driver
mode that does not require any special network-switch support. The
outgoing network packet traffic is distributed according to the current
load (computed relative to the speed) on each network interface slave.
Incoming traffic is received by one currently designated slave network
interface. If this receiving slave fails, another slave takes over the MAC
address of the failed receiving slave.

* *Adaptive load balancing (balance-alb):* Includes balance-tlb plus
receive load balancing (rlb) for IPV4 traffic, and does not require any
special network switch support. The receive load balancing is achieved by
ARP negotiation. The bonding driver intercepts the ARP Replies sent by the
local system on their way out and overwrites the source hardware address
with the unique hardware address of one of the NIC slaves in the single
logical bonded interface such that different network-peers use different
MAC addresses for their network packet traffic.

For most setups, active-backup is the best choice; if your switch supports
LACP ("IEEE 802.3ad"), that mode should be preferred.

The following bond configuration can be used as a distributed/shared
storage network. The benefit would be that you get more speed and the
network will be fault-tolerant.

.Example: Use bond with fixed IP address
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet static
        slaves eth1 eth2
        address 192.168.1.2
        netmask 255.255.255.0
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports eth0
        bridge_stp off
        bridge_fd 0
----


Another possibility is to use the bond directly as the bridge port. This
can be used to make the guest network fault-tolerant.

.Example: Use a bond as bridge port
----
auto lo
iface lo inet loopback

iface eth1 inet manual

iface eth2 inet manual

auto bond0
iface bond0 inet manual
        slaves eth1 eth2
        bond_miimon 100
        bond_mode 802.3ad
        bond_xmit_hash_policy layer2+3

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.2
        netmask 255.255.255.0
        gateway 10.10.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

////
TODO: explain IPv6 support?
TODO: explain OVS
////
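Whichever of the two bond examples above is used, the state of the bond
can be inspected once it is up: the kernel's bonding driver exposes it
under `/proc/net/bonding/`. This is just an illustrative check, and the
exact output depends on the kernel version:

----
# shows the bonding mode, the MII status of each slave and,
# for 802.3ad, the LACP aggregator/partner information
cat /proc/net/bonding/bond0
----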