diff --git a/pve-system-requirements.adoc b/pve-system-requirements.adoc
index 52aa608..bb6dce3 100644
--- a/pve-system-requirements.adoc
+++ b/pve-system-requirements.adoc
@@ -31,18 +31,26 @@ Minimum Requirements, for Evaluation
 Recommended System Requirements
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-* CPU: 64bit (Intel EMT64 or AMD64), Multi core CPU recommended
+* Intel EMT64 or AMD64 with the Intel VT/AMD-V CPU flag.
 
-* Intel VT/AMD-V capable CPU/Mainboard for KVM Full Virtualization support
+* Memory: minimum 2 GB for the OS and Proxmox VE services, plus designated
+  memory for guests. For Ceph or ZFS, additional memory is required,
+  approximately 1 GB of memory for every TB of used storage.
+
+* Fast and redundant storage, best results with SSD disks.
 
-* RAM: 8 GB RAM, plus additional RAM used for guests
+* OS storage: hardware RAID with battery-protected write cache (``BBU'') or
+  non-RAID with ZFS and SSD cache.
 
-* Hardware RAID with batteries protected write cache (``BBU'') or flash
-  based protection
+* VM storage: for local storage, use a hardware RAID with battery-backed
+  write cache (BBU) or non-RAID for ZFS. Neither ZFS nor Ceph is compatible
+  with a hardware RAID controller. Shared and distributed storage is also
+  possible.
 
-* Fast hard drives, best results with 15k rpm SAS, Raid10
+* Redundant Gbit NICs, additional NICs depending on the preferred storage
+  technology and cluster setup. 10 Gbit and higher is also supported.
 
-* At least two NICs, depending on the used storage technology you need more
+* For PCI passthrough a CPU with VT-d (Intel) or AMD-Vi support is needed.
 
 
 Simple Performance Overview
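
Note on the memory bullet added above: the recommendation works out to simple
addition. The sketch below is purely illustrative and not part of the patch;
the function name and example figures are assumptions.

----
def recommended_ram_gb(guest_ram_gb, zfs_or_ceph_tb=0):
    """Rough minimum host RAM in GB, following the recommendation above."""
    base_gb = 2                       # OS and Proxmox VE services
    storage_gb = 1 * zfs_or_ceph_tb   # ~1 GB per TB of ZFS/Ceph storage
    return base_gb + sum(guest_ram_gb) + storage_gb

# Example: three guests (8, 4 and 4 GB) on a 12 TB ZFS pool -> 30 GB
print(recommended_ram_gb([8, 4, 4], zfs_or_ceph_tb=12))
----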
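
Note on the CPU bullets: on Linux, the Intel VT/AMD-V requirement corresponds
to the `vmx` or `svm` flag in `/proc/cpuinfo`; VT-d/AMD-Vi support for PCI
passthrough is not listed there and is usually confirmed from the kernel log
instead. A minimal check sketch, assuming a standard `/proc/cpuinfo` layout
(illustrative, not from the patch):

----
def has_hw_virtualization(cpuinfo="/proc/cpuinfo"):
    """Return True if the CPU advertises Intel VT-x (vmx) or AMD-V (svm)."""
    with open(cpuinfo) as f:
        for line in f:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                return "vmx" in flags or "svm" in flags
    return False

print(has_hw_virtualization())
----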