X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=sysadmin.adoc;h=21537f1d9d9cf9eac2aa00945833ea7f89dab5a5;hp=91e32229984e15ff1e1368329bb67e9eba36d997;hb=7d6078845fa6a3bd308c7dc843273e56be33f315;hpb=9ee943233e087624aafa8ff2071e663886a7fee8

diff --git a/sysadmin.adoc b/sysadmin.adoc
index 91e3222..21537f1 100644
--- a/sysadmin.adoc
+++ b/sysadmin.adoc
@@ -1,6 +1,9 @@
+[[chapter_system_administration]]
 Host System Administration
 ==========================
-include::attributes.txt[]
+ifndef::manvolnum[]
+:pve-toplevel:
+endif::manvolnum[]

 {pve} is based on the famous https://www.debian.org/[Debian] Linux
 distribution. That means that you have access to the whole world of
@@ -23,214 +26,55 @@
 For example, we ship Intel network card drivers to support their
 newest hardware.

 The following sections will concentrate on virtualization related
-topics. They either explains things which are different on {pve}, or
+topics. They either explain things which are different on {pve}, or
 tasks which are commonly used on {pve}. For other topics, please
 refer to the standard Debian documentation.

-System requirements
--------------------
-For production servers, high-quality server equipment is needed. Keep
-in mind that if you run 10 virtual servers on one machine and then
-experience a hardware failure, all 10 services are lost. {pve}
-supports clustering; this means that multiple {pve} installations
-can be centrally managed thanks to the included cluster functionality.
+ifdef::wiki[]

-{pve} can use local storage (DAS), SAN, NAS and also distributed
-storage (Ceph RBD). For details see xref:chapter-storage[chapter storage].
+See Also
+--------

-Minimum requirements, for evaluation
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+* link:/wiki/Package_Repositories[Package Repositories]

-* CPU: 64bit (Intel EM64T or AMD64)
+* link:/wiki/Network_Configuration[Network Configuration]

-* RAM: 1 GB
+* link:/wiki/System_Software_Updates[System Software Updates]

-* Hard drive
+* link:/wiki/External_Metric_Server[External Metric Server]

-* One NIC
+* link:/wiki/Disk_Health_Monitoring[Disk Health Monitoring]

-Recommended system requirements
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+* link:/wiki/Logical_Volume_Manager_(LVM)[Logical Volume Manager (LVM)]

-* CPU: 64bit (Intel EM64T or AMD64), multi-core CPU recommended
+* link:/wiki/ZFS_on_Linux[ZFS on Linux]

-* RAM: 8 GB is good, more is better
+* link:/wiki/Certificate_Management[Certificate Management]

+endif::wiki[]

-* Hardware RAID with battery-protected write cache (BBU) or flash
-  based protection

-* Fast hard drives, best results with 15k rpm SAS, RAID10

-* At least two NICs; depending on the storage technology used, you may need more

-
-include::getting-help.adoc[]
+ifndef::wiki[]

 include::pve-package-repos.adoc[]

-include::pve-installation.adoc[]
-
 include::system-software-updates.adoc[]

+include::pve-network.adoc[]

-Network Configuration
----------------------
-
-{pve} uses a bridged networking model. Each host can have up to 4094
-bridges. Bridges are like physical network switches implemented in
-software. All VMs can share a single bridge, as if
-virtual network cables from each guest were all plugged into the same
-switch. But you can also create multiple bridges to separate network
-domains.
-
-For connecting VMs to the outside world, bridges are attached to
-physical network cards. For further flexibility, you can configure
-VLANs (IEEE 802.1q) and network bonding, also known as "link
-aggregation". That way it is possible to build complex and flexible
-virtual networks.
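The paragraph above mentions bonding, but the removed section never shows one. As a minimal, illustrative sketch (not part of either version of the file): assuming ifupdown with the `ifenslave` package, and two hypothetical NICs `eth0` and `eth1` carrying the addresses from the default-bridge example further down, a bond used as the bridge uplink could be declared in '/etc/network/interfaces' like this:

----
# bond two NICs; active-backup mode needs no special switch support
auto bond0
iface bond0 inet manual
        slaves eth0 eth1
        bond_miimon 100
        bond_mode active-backup

# attach the bridge to the bond instead of to a single NIC
auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2
        netmask 255.255.255.0
        gateway 192.168.10.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0
----

Apart from `bridge_ports bond0`, the bridge stanza is unchanged, so guests stay connected even if one physical link fails.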
-
-Debian traditionally uses the 'ifup' and 'ifdown' commands to
-configure the network. The file '/etc/network/interfaces' contains the
-whole network setup. Please refer to the manual page ('man interfaces')
-for a complete format description.
-
-NOTE: {pve} does not write changes directly to
-'/etc/network/interfaces'. Instead, we write into a temporary file
-called '/etc/network/interfaces.new', and commit those changes when
-you reboot the node.
-
-It is worth mentioning that you can directly edit the configuration
-file. All {pve} tools try hard to keep such direct user
-modifications. Using the GUI is still preferable, because it
-protects you from errors.
-
-Naming Conventions
-~~~~~~~~~~~~~~~~~~
-
-We currently use the following naming conventions for device names:
-
-* Ethernet devices: eth[N], where 0 ≤ N (`eth0`, `eth1`, ...)
-
-* Bridge names: vmbr[N], where 0 ≤ N ≤ 4094 (`vmbr0` - `vmbr4094`)
-
-* Bonds: bond[N], where 0 ≤ N (`bond0`, `bond1`, ...)
-
-* VLANs: Simply add the VLAN number to the device name,
-  separated by a period (`eth0.50`, `bond1.30`; see the sketch below)
-
-This makes it easier to debug network problems, because the device
-name implies the device type.
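To make the period-based VLAN naming above concrete, here is a small sketch. It is an editorial illustration, not part of the file: VLAN tag 50, the bridge name `vmbr50` and the address are invented, and it assumes the kernel's 802.1q VLAN support, which the ifupdown bridge scripts set up when a dotted device name is referenced:

----
# eth0.50 carries only frames tagged with VLAN ID 50
auto vmbr50
iface vmbr50 inet static
        address 10.10.50.2
        netmask 255.255.255.0
        bridge_ports eth0.50
        bridge_stp off
        bridge_fd 0
----

Guests plugged into `vmbr50` see plain untagged Ethernet, while the host tags their traffic with VLAN 50 on the physical wire.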
-
-Default Configuration using a Bridge
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-The installation program creates a single bridge named `vmbr0`, which
-is connected to the first Ethernet card `eth0`. The corresponding
-configuration in '/etc/network/interfaces' looks like this:
-
-----
-auto lo
-iface lo inet loopback
+include::system-timesync.adoc[]

-iface eth0 inet manual
+include::pve-external-metric-server.adoc[]

-auto vmbr0
-iface vmbr0 inet static
-        address 192.168.10.2
-        netmask 255.255.255.0
-        gateway 192.168.10.1
-        bridge_ports eth0
-        bridge_stp off
-        bridge_fd 0
-----
+include::pve-disk-health-monitoring.adoc[]

-Virtual machines behave as if they were directly connected to the
-physical network. The network, in turn, sees each virtual machine as
-having its own MAC, even though there is only one network cable
-connecting all of these VMs to the network.
+include::local-lvm.adoc[]

+include::local-zfs.adoc[]

-Routed Configuration
-~~~~~~~~~~~~~~~~~~~~
-
-Most hosting providers do not support the above setup. For security
-reasons, they disable networking as soon as they detect multiple MAC
-addresses on a single interface.
-
-TIP: Some providers allow you to register additional MACs on their
-management interface. This avoids the problem, but is clumsy to
-configure because you need to register a MAC for each of your VMs.
-
-You can avoid the problem by "routing" all traffic via a single
-interface. This makes sure that all network packets use the same MAC
-address.
-
-A common scenario is that you have a public IP (assume 192.168.10.2
-for this example), and an additional IP block for your VMs
-(10.10.10.1/255.255.255.0). We recommend the following setup for such
-situations:
-
-----
-auto lo
-iface lo inet loopback
-
-auto eth0
-iface eth0 inet static
-        address 192.168.10.2
-        netmask 255.255.255.0
-        gateway 192.168.10.1
-        post-up echo 1 > /proc/sys/net/ipv4/conf/eth0/proxy_arp
-
-
-auto vmbr0
-iface vmbr0 inet static
-        address 10.10.10.1
-        netmask 255.255.255.0
-        bridge_ports none
-        bridge_stp off
-        bridge_fd 0
-----
-
-
-Masquerading (NAT) with iptables
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-In some cases you may want to use private IPs behind your Proxmox
-host's true IP, and masquerade the traffic using NAT:
-
-----
-auto lo
-iface lo inet loopback
-
-auto eth0
-# real IP address
-iface eth0 inet static
-        address 192.168.10.2
-        netmask 255.255.255.0
-        gateway 192.168.10.1
-
-auto vmbr0
-# private subnet
-iface vmbr0 inet static
-        address 10.10.10.1
-        netmask 255.255.255.0
-        bridge_ports none
-        bridge_stp off
-        bridge_fd 0
-
-        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
-        post-up iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
-        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eth0 -j MASQUERADE
-----
-
-////
-TODO: explain IPv6 support?
-TODO: explain OVS
-////
-
+include::certificate-management.adoc[]

-include::local-zfs.adoc[]

+endif::wiki[]
 ////
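One closing note on the masquerading example above: the removed section gives no way to verify the setup. As an illustrative sketch, reusing the interface and subnet names from that example, the forwarding flag and the NAT rule can be checked from a root shell like this:

----
# expect: net.ipv4.ip_forward = 1
sysctl net.ipv4.ip_forward

# list the POSTROUTING rules; the MASQUERADE rule for 10.10.10.0/24 should appear
iptables -t nat -S POSTROUTING
----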