ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

qm - Qemu/KVM Virtual Machine Manager


SYNOPSIS
--------

include::qm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Qemu/KVM Virtual Machines
=========================
include::attributes.txt[]
endif::manvolnum[]

// deprecates
// http://pve.proxmox.com/wiki/Container_and_Full_Virtualization
// http://pve.proxmox.com/wiki/KVM
// http://pve.proxmox.com/wiki/Qemu_Server

Qemu (short form for Quick Emulator) is an open source hypervisor that emulates a
physical computer. From the perspective of the host system where Qemu is
running, Qemu is a user program which has access to a number of local resources
like partitions, files, and network cards, which are then passed to an
emulated computer which sees them as if they were real devices.

A guest operating system running in the emulated computer accesses these
devices, and runs as if it were running on real hardware. For instance you can pass
an ISO image as a parameter to Qemu, and the OS running in the emulated computer
will see a real CD-ROM inserted into a CD drive.

Qemu can emulate a great variety of hardware from ARM to Sparc, but {pve} is
only concerned with 32 and 64 bit PC clone emulation, since it represents the
overwhelming majority of server hardware. The emulation of PC clones is also one
of the fastest due to the availability of processor extensions which greatly
speed up Qemu when the emulated architecture is the same as the host
architecture. +
Qemu inside {pve} runs as a root process, since this is required to access block
and PCI devices.

Emulated devices and paravirtualized devices
--------------------------------------------

The PC hardware emulated by Qemu includes a mainboard, network controllers,
SCSI, IDE and SATA controllers, serial ports (the complete list can be seen in
the `kvm(1)` man page), all of them emulated in software. All these devices
are the exact software equivalent of existing hardware devices, and if the OS
running in the guest has the proper drivers it will use the devices as if it
were running on real hardware. This allows Qemu to run _unmodified_ operating
systems.

This however has a performance cost, as running in software what was meant to
run in hardware involves a lot of extra work for the host CPU. To mitigate this,
Qemu can present to the guest operating system _paravirtualized devices_, where
the guest OS recognizes it is running inside Qemu and cooperates with the
hypervisor.

Qemu relies on the virtio virtualization standard, and is thus able to present
paravirtualized virtio devices, which include a paravirtualized generic disk
controller, a paravirtualized network card, a paravirtualized serial port,
a paravirtualized SCSI controller, etc.

It is highly recommended to use the virtio devices whenever you can, as they
provide a big performance improvement. Using the virtio generic disk controller
versus an emulated IDE controller will double the sequential write throughput,
as measured with `bonnie++(8)`. Using the virtio network interface can deliver
up to three times the throughput of an emulated Intel E1000 network card, as
measured with `iperf(1)`. footnote:[See this benchmark on the KVM wiki
http://www.linux-kvm.org/page/Using_VirtIO_NIC]
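
For example, a VM that uses virtio for both its disk and its network interface
could be created with a `qm create` call like the one below. This is only an
illustrative sketch: the VM ID, memory size, storage name `local-lvm`, disk size
and bridge `vmbr0` are assumed values that you would replace with your own.

 qm create 500 -name viotest -memory 2048 -net0 virtio,bridge=vmbr0 -virtio0 local-lvm:8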

Virtual Machine Settings
------------------------
Generally speaking {pve} tries to choose sane defaults for virtual machines
(VMs). Make sure you understand the meaning of the settings you change, as a
wrong setting could incur a performance slowdown or put your data at risk.

General Settings
~~~~~~~~~~~~~~~~
General settings of a VM include

* the *Node*: the physical server on which the VM will run
* the *VM ID*: a unique number in this {pve} installation used to identify your VM
* *Name*: a free form text string you can use to describe the VM
* *Resource Pool*: a logical group of VMs
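
On the command line, these general settings correspond to options of the
`qm create` command. As a purely illustrative sketch, where the VM ID, name and
pool are assumed values:

 qm create 400 -name webserver01 -pool production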

OS Settings
~~~~~~~~~~~
When creating a VM, setting the proper Operating System (OS) allows {pve} to
optimize some low level parameters. For instance Windows OSes expect the BIOS
clock to use the local time, while Unix based OSes expect the BIOS clock to
have the UTC time.
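
The OS type can also be changed later with the `ostype` option of `qm set`.
The VM ID and the value `win8` below are assumed example values; the full list
of supported types can be found in the Options section further down.

 qm set 400 -ostype win8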

Hard Disk
~~~~~~~~~
Qemu can emulate a number of storage controllers:

* the *IDE* controller has a design which goes back to the 1984 PC/AT disk
controller. Even if this controller has been superseded by more recent designs,
each and every OS you can think of has support for it, making it a great choice
if you want to run an OS released before 2003. You can connect up to 4 devices
on this controller.

* the *SATA* (Serial ATA) controller, dating from 2003, has a more modern
design, allowing higher throughput and a greater number of devices to be
connected. You can connect up to 6 devices on this controller.

* the *SCSI* controller, designed in 1985, is commonly found on server
grade hardware, and can connect up to 14 storage devices. {pve} emulates by
default an LSI 53C895A controller.

* the *Virtio* controller is a generic paravirtualized controller, and is the
recommended setting if you aim for performance. To use this controller, the OS
needs to have special drivers, which may or may not be included in your
installation ISO. Linux distributions have support for the Virtio controller
since 2010, and FreeBSD since 2014. For Windows OSes, you need to provide an
extra ISO containing the Virtio drivers during the installation.
// see: https://pve.proxmox.com/wiki/Paravirtualized_Block_Drivers_for_Windows#During_windows_installation.
You can connect up to 16 devices on this controller.

On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of
a storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
whereas file based storages (Ext4, NFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy on write format which allows snapshots, and
thin provisioning of the disk image.
* the *raw disk image* is a bit-to-bit image of a hard disk, similar to what
you would get when executing the `dd` command on a block device in Linux. This
format does not support thin provisioning or snapshotting by itself, requiring
cooperation from the storage layer for these tasks. It is however 10% faster
than the *QEMU image format*. footnote:[See this benchmark for details
http://events.linuxfoundation.org/sites/events/files/slides/CloudOpen2013_Khoa_Huynh_v3.pdf]
* the *VMware image format* only makes sense if you intend to import/export the
disk image to other hypervisors.
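
On the command line, a disk is attached to a controller with `qm set`. The
following is a purely illustrative sketch, where the VM ID, storage names,
sizes and the explicit `format` are assumed values: the first command allocates
a 32 GB disk on an LVM storage, the second a 16 GB disk in the QEMU image
format on a file based storage.

 qm set 400 -scsi0 local-lvm:32
 qm set 400 -virtio1 local:16,format=qcow2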

Setting the *Cache* mode of the hard drive will impact how the host system will
notify the guest systems of block write completions. The *No cache* default
means that the guest system will be notified that a write is complete when each
block reaches the physical storage write queue, ignoring the host page cache.
This provides a good balance between safety and speed.

If you want the {pve} backup manager to skip a disk when doing a backup of a VM,
you can set the *No backup* option on that disk.

If your storage supports _thin provisioning_ (see the storage chapter in the
{pve} guide), and your VM has a *SCSI* controller, you can activate the *Discard*
option on the hard disks connected to that controller. With *Discard* enabled,
when the filesystem of a VM marks blocks as unused after removing files, the
emulated SCSI controller will relay this information to the storage, which will
then shrink the disk image accordingly.
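
These per disk settings map to flags of the drive definition on the command
line. A minimal sketch, assuming a VM with ID 400 and an existing volume named
'local-lvm:vm-400-disk-1' (both illustrative values):

 qm set 400 -scsi0 local-lvm:vm-400-disk-1,cache=none,backup=0,discard=on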

Managing Virtual Machines with 'qm'
-----------------------------------

qm is the tool to manage Qemu/KVM virtual machines on {pve}. You can
create and destroy virtual machines, and control execution
(start/stop/suspend/resume). Besides that, you can use qm to set
parameters in the associated config file. It is also possible to
create and delete virtual disks.

CLI Usage Examples
~~~~~~~~~~~~~~~~~~

Create a new VM with a 4 GB IDE disk.

 qm create 300 -ide0 4 -net0 e1000 -cdrom proxmox-mailgateway_2.1.iso

Start the new VM.

 qm start 300

Send a shutdown request, then wait until the VM is stopped.

 qm shutdown 300 && qm wait 300

Same as above, but only wait for 40 seconds.

 qm shutdown 300 && qm wait 300 -timeout 40

Configuration
-------------

All configuration files consist of lines in the form

 PARAMETER: value

Configuration files are stored inside the Proxmox cluster file
system, and can be accessed at '/etc/pve/qemu-server/<VMID>.conf'.
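
The current configuration of a VM can be displayed with `qm config <vmid>`.
Below is a shortened, purely illustrative example of what the key/value pairs
may look like; all values are assumed:

 name: webserver01
 memory: 2048
 cores: 2
 net0: virtio=DE:AD:BE:EF:00:01,bridge=vmbr0
 scsi0: local-lvm:vm-400-disk-1,size=32G
 ostype: l26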

Options
~~~~~~~

include::qm.conf.5-opts.adoc[]


Locks
-----

Online migrations and backups ('vzdump') set a lock to prevent incompatible
concurrent actions on the affected VMs. Sometimes you need to remove such a
lock manually (e.g., after a power failure).

 qm unlock <vmid>


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]