diff --git a/qm.adoc b/qm.adoc
index 756178d..917bc0f 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -304,22 +304,35 @@ the kvm64 default. If you don’t care about live migration or have a homogeneous
 cluster where all nodes have the same CPU, set the CPU type to host, as in
 theory this will give your guests maximum performance.
 
-PCID Flag
-^^^^^^^^^
-
-The *PCID* CPU flag helps to improve performance of the Meltdown vulnerability
-footnote:[Meltdown Attack https://meltdownattack.com/] mitigation approach. In
-Linux the mitigation is called 'Kernel Page-Table Isolation (KPTI)', which
-effectively hides the Kernel memory from the user space, which, without PCID,
-is an expensive operation footnote:[PCID is now a critical performance/security
-feature on x86
+Meltdown / Spectre related CPU flags
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+There are two CPU flags related to the Meltdown and Spectre vulnerabilities
+footnote:[Meltdown Attack https://meltdownattack.com/] which need to be set
+manually, unless the selected CPU type of your VM already enables them by
+default.
+
+The first, called 'pcid', helps to reduce the performance impact of the Meltdown
+mitigation called 'Kernel Page-Table Isolation (KPTI)', which effectively hides
+kernel memory from user space. Without PCID, KPTI is quite an expensive
+mechanism footnote:[PCID is now a critical performance/security feature on x86
 https://groups.google.com/forum/m/#!topic/mechanical-sympathy/L9mHTbeQLNU].
-There are two requirements to reduce the cost of the mitigation:
+The second CPU flag, called 'spec-ctrl', allows an operating system to
+selectively disable or restrict speculative execution in order to limit the
+ability of attackers to exploit the Spectre vulnerability.
+
+There are two requirements that need to be fulfilled in order to use these
+CPU flags:
 
-* The host CPU must support PCID and propagate it to the guest's virtual CPU(s)
-* The guest Operating System must be updated to a version which mitigates the
-  attack and utilizes the PCID feature marked by its flag.
+* The host CPU(s) must support the feature and propagate it to the guest's
+  virtual CPU(s)
+* The guest operating system must be updated to a version which mitigates the
+  attacks and is able to utilize the CPU feature
+
+In order to use 'spec-ctrl', your CPU or system vendor also needs to provide a
+so-called ``microcode update'' footnote:[You can use `intel-microcode' /
+`amd64-microcode' from Debian non-free if your vendor does not provide such an
+update. Note that not all affected CPUs can be updated to support spec-ctrl.]
+for your CPU.
 
 To check if the {pve} host supports PCID, execute the following command as root:
 
@@ -327,10 +340,23 @@ To check if the {pve} host supports PCID, execute the following command as root:
 ----
 # grep ' pcid ' /proc/cpuinfo
 ----
 
-If this does not return empty your host's CPU has support for PCID. If you use
-`host' as CPU type and the guest OS is able to use it, you're done.
-Otherwise you need to set the PCID CPU flag for the virtual CPU. This can be
-done by editing the CPU options through the WebUI.
+If this does not return empty, your host's CPU has support for 'pcid'.
+
+To check if the {pve} host supports 'spec-ctrl', execute the following command
+as root:
+
+----
+# grep ' spec_ctrl ' /proc/cpuinfo
+----
+
+If this does not return empty, your host's CPU has support for 'spec-ctrl'.
+
+If you use `host' or another CPU type which enables the desired flags by
+default, and you updated your guest OS to make use of the associated CPU
+features, you're already set.
+
+Otherwise you need to set the desired CPU flag of the virtual CPU, either by
+editing the CPU options in the WebUI, or by setting the 'flags' property of the
+'cpu' option in the VM configuration file, as shown below.
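+
+For example, to enable both flags on top of a specific CPU type, set the
+'flags' property. A minimal sketch, assuming a hypothetical VM with 'VMID 101'
+that uses the default `kvm64' CPU type (flags are prefixed with `+' to enable
+them and are separated by semicolons):
+
+----
+# qm set 101 -cpu 'kvm64,flags=+pcid;+spec-ctrl'
+----
+
+Afterwards, the VM configuration file '/etc/pve/qemu-server/101.conf' contains
+the line 'cpu: kvm64,flags=+pcid;+spec-ctrl'.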
 
 NUMA
 ^^^^
@@ -603,15 +629,17 @@ parameters:
 
 * *Start/Shutdown order*: Defines the start order priority. E.g. set it to 1 if
 you want the VM to be the first to be started. (We use the reverse startup
 order for shutdown, so a machine with a start order of 1 would be the last to
-be shut down)
+be shut down). If multiple VMs have the same order defined on a host, they will
+additionally be ordered by 'VMID' in ascending order.
 * *Startup delay*: Defines the interval between this VM start and subsequent
 VM starts. E.g. set it to 240 if you want to wait 240 seconds before starting
 other VMs.
 * *Shutdown timeout*: Defines the duration in seconds {pve} should wait for
 the VM to be offline after issuing a shutdown command.
-By default this value is set to 60, which means that {pve} will issue a
-shutdown request, wait 60s for the machine to be offline, and if after 60s
-the machine is still online will notify that the shutdown action failed.
+By default this value is set to 180, which means that {pve} will issue a
+shutdown request and wait 180 seconds for the machine to be offline. If, after
+this timeout, the machine is still online, it will be stopped forcefully.
 
 NOTE: VMs managed by the HA stack do not follow the 'start on boot' and
 'boot order' options currently. Those VMs will be skipped by the startup and
@@ -619,8 +647,8 @@ shutdown algorithm as the HA manager itself ensures that VMs get started and
 stopped.
 
 Please note that machines without a Start/Shutdown order parameter will always
-start after those where the parameter is set, and this parameter only
-makes sense between the machines running locally on a host, and not
+start after those where the parameter is set. Further, this parameter can only
+be enforced between virtual machines running locally on a host, but not
 cluster-wide.
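+
+On the command line, all three options map to the 'startup' property of the VM
+configuration. A minimal sketch, again assuming a hypothetical VM with
+'VMID 101' that should start first, delay subsequent VM starts by 240 seconds,
+and be given 180 seconds to shut down:
+
+----
+# qm set 101 -startup order=1,up=240,down=180
+----
+
+This results in the line 'startup: order=1,up=240,down=180' in the VM
+configuration file.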