=========
Migration
=========

QEMU has code to load/save the state of the guest that it is running.
These are two complementary operations.  Saving the state just does
that, saves the state for each device that the guest is running.
Restoring a guest is just the opposite operation: we need to load the
state of each device.

For this to work, QEMU has to be launched with the same arguments the
two times.  I.e. it can only restore the state into a guest that has
the same devices as the one whose state was saved (this last
requirement can be relaxed a bit, but for now we can consider that the
configuration has to be exactly the same).

Once we are able to save/restore a guest, a new functionality is
requested: migration.  This means that QEMU is able to start on one
machine and be "migrated" to another machine, i.e. be moved to
another machine.

Next came the "live migration" functionality.  This is important
because some guests run with a lot of state (especially RAM), and it
can take a while to move all that state from one machine to another.
Live migration allows the guest to continue running while the state is
transferred; only while the last part of the state is transferred does
the guest have to be stopped.  Typically the time that the guest is
unresponsive during live migration is in the low hundreds of
milliseconds (though this depends on a lot of things).

.. contents::

Transports
==========

The migration stream is normally just a byte stream that can be passed
over any transport.

- tcp migration: do the migration using tcp sockets
- unix migration: do the migration using unix sockets
- exec migration: do the migration by piping the stream through the
  stdin/stdout of a process
- fd migration: do the migration using a file descriptor that is
  passed to QEMU.  QEMU doesn't care how this file descriptor is opened.

In addition, support is included for migration using RDMA, which
transports the page data using ``RDMA``, where the hardware takes care of
transporting the pages, and the load on the CPU is much lower.  While the
internals of RDMA migration are a bit different, this isn't really visible
outside the RAM migration code.

All these migration protocols use the same infrastructure to
save/restore device state.  This infrastructure is shared with the
savevm/loadvm functionality.

Debugging
=========

The migration stream can be analyzed thanks to ``scripts/analyze-migration.py``.

Example usage:

.. code-block:: shell

  $ qemu-system-x86_64 -display none -monitor stdio
  (qemu) migrate "exec:cat > mig"
  (qemu) q
  $ ./scripts/analyze-migration.py -f mig
  {
    "ram (3)": {
        "section sizes": {
            "pc.ram": "0x0000000008000000",
  ...

See also ``analyze-migration.py -h`` help for more options.

Common infrastructure
=====================

The files, sockets or fd's that carry the migration stream are abstracted by
the ``QEMUFile`` type (see ``migration/qemu-file.h``).  In most cases this
is connected to a subtype of ``QIOChannel`` (see ``io/``).

Saving the state of one device
==============================

For most devices, the state is saved in a single call to the migration
infrastructure; these are *non-iterative* devices.  The data for these
devices is sent at the end of precopy migration, when the CPUs are paused.
There are also *iterative* devices, which contain a very large amount of
data (e.g. RAM or large tables).  See the iterative device section below.

General advice for device developers
------------------------------------

- The migration state saved should reflect the device being modelled rather
  than the way your implementation works.  That way if you change the
  implementation later the migration stream will stay compatible.  That
  model may include internal state that's not directly visible in a
  register.

- When saving a migration stream the device code may walk and check
  the state of the device.  These checks might fail in various ways (e.g.
  discovering internal state is corrupt or that the guest has done
  something bad).  Consider carefully before asserting/aborting at this
  point, since the normal response from users is that *migration broke
  their VM* since it had apparently been running fine until then.  In these
  error cases, the device should log a message indicating the cause of
  error, and should consider putting the device into an error state,
  allowing the rest of the VM to continue execution.

- The migration might happen at an inconvenient point,
  e.g. right in the middle of the guest reprogramming the device, during
  guest reboot or shutdown or while the device is waiting for external IO.
  It's strongly preferred that migrations do not fail in this situation,
  since in the cloud environment migrations might happen automatically to
  VMs that the administrator doesn't directly control.

- If you do need to fail a migration, ensure that sufficient information
  is logged to identify what went wrong.

- The destination should treat an incoming migration stream as hostile
  (which we do to varying degrees in the existing code).  Check that
  offsets into buffers and the like can't cause overruns.  Fail the
  incoming migration in the case of a corrupted stream like this.

- Take care with internal device state or behaviour that might become
  migration version dependent.  For example, the order of PCI capabilities
  is required to stay constant across migration.  Another example would
  be that a special case handled by subsections (see below) might become
  much more common if a default behaviour is changed.

- The state of the source should not be changed or destroyed by the
  outgoing migration.  Migrations timing out or being failed by
  higher levels of management, or failures of the destination host are
  not unusual, and in that case the VM is restarted on the source.
  Note that the management layer can validly revert the migration
  even though the QEMU level of migration has succeeded as long as it
  does it before starting execution on the destination.

- Buses and devices should be able to explicitly specify addresses when
  instantiated, and management tools should use those.  For example,
  when hot adding USB devices it's important to specify the ports
  and addresses, since implicit ordering based on the command line order
  may be different on the destination.  This can result in the
  device state being loaded into the wrong device.

VMState
-------

Most device data can be described using the ``VMSTATE`` macros (mostly defined
in ``include/migration/vmstate.h``).

An example (from hw/input/pckbd.c):

.. code:: c

  static const VMStateDescription vmstate_kbd = {
      .name = "pckbd",
      .version_id = 3,
      .minimum_version_id = 3,
      .fields = (VMStateField[]) {
          VMSTATE_UINT8(write_cmd, KBDState),
          VMSTATE_UINT8(status, KBDState),
          VMSTATE_UINT8(mode, KBDState),
          VMSTATE_UINT8(pending, KBDState),
          VMSTATE_END_OF_LIST()
      }
  };

We are declaring the state with name "pckbd".  The ``version_id`` is
3, and there are 4 uint8_t fields in the KBDState structure.  We
register this ``VMStateDescription`` with one of the following
functions.  The first one will generate a device ``instance_id``
different for each registration.  Use the second one if you already
have an id that is different for each instance of the device:

.. code:: c

    vmstate_register_any(NULL, &vmstate_kbd, s);
    vmstate_register(NULL, instance_id, &vmstate_kbd, s);

For devices that are ``qdev`` based, we can register the device in the class
init function:

.. code:: c

    dc->vmsd = &vmstate_kbd_isa;

The VMState macros take care of ensuring that the device data section
is formatted portably (normally big endian) and make some compile time checks
against the types of the fields in the structures.

VMState macros can include other VMStateDescriptions to store substructures
(see ``VMSTATE_STRUCT_``), arrays (``VMSTATE_ARRAY_``) and variable length
arrays (``VMSTATE_VARRAY_``).  Various other macros exist for special
cases.

Note that the format on the wire is still very raw; i.e. a VMSTATE_UINT32
ends up with a 4 byte big endian representation on the wire; in the future
it might be possible to use a more structured format.

Legacy way
----------

This way is going to disappear as soon as all current users are ported to
VMState; although converting existing code can be tricky, and thus 'soon'
is relative.

Each device has to register two functions, one to save the state and
another to load the state back.

.. code:: c

  int register_savevm_live(const char *idstr,
                           int instance_id,
                           int version_id,
                           SaveVMHandlers *ops,
                           void *opaque);

Two functions in the ``ops`` structure are the ``save_state``
and ``load_state`` functions.  Notice that ``load_state`` receives a
version_id parameter to know what state format it is receiving.
``save_state`` doesn't have a version_id parameter because it always uses
the latest version.

Note that because the VMState macros still save the data in a raw
format, in many cases it's possible to replace legacy code
with a carefully constructed VMState description that matches the
byte layout of the existing code.

Changing migration data structures
----------------------------------

When we migrate a device, we save/load the state as a series
of fields.  Sometimes, due to bugs or new functionality, we need to
change the state to store more/different information.  Changing the
migration state saved for a device can break migration compatibility
unless care is taken to use the appropriate techniques.  In general QEMU
tries to maintain forward migration compatibility (i.e. migrating from
QEMU n->n+1) and there are users who benefit from backward compatibility
as well.

Subsections
-----------

The most common structure change is adding new data, e.g. when adding
a newer form of device, or adding state that you previously
forgot to migrate.  This is best solved using a subsection.

A subsection is "like" a device vmstate, but with a particularity: it
has a Boolean function that tells whether the values need to be sent
or not.  If this function returns false, the subsection is not sent.
Subsections have a unique name that is looked for on the receiving
side.

On the receiving side, if we find a subsection for a device that we
don't understand, we just fail the migration.  If we understand all
the subsections, then we load the state successfully.  There's no check
that a subsection is loaded, so a newer QEMU that knows about a subsection
can (with care) load a stream from an older QEMU that didn't send
the subsection.

If the new data is only needed in a rare case, then the subsection
can be made conditional on that case and the migration will still
succeed to older QEMUs in most cases.  This is OK for data that's
critical, but in some use cases it's preferred that the migration
should succeed even with the data missing.  To support this the
subsection can be connected to a device property and from there
to a versioned machine type.

The 'pre_load' and 'post_load' functions on subsections are only
called if the subsection is loaded.

One important note is that the outer post_load() function is called "after"
loading all subsections, because a newer subsection could change the same
value that it uses.  A flag, and the combination of outer pre_load and
post_load can be used to detect whether a subsection was loaded, and to
fall back on default behaviour when the subsection isn't present.

Example:

.. code:: c

  static bool ide_drive_pio_state_needed(void *opaque)
  {
      IDEState *s = opaque;

      return ((s->status & DRQ_STAT) != 0)
          || (s->bus->error_status & BM_STATUS_PIO_RETRY);
  }

  const VMStateDescription vmstate_ide_drive_pio_state = {
      .name = "ide_drive/pio_state",
      .version_id = 1,
      .minimum_version_id = 1,
      .pre_save = ide_drive_pio_pre_save,
      .post_load = ide_drive_pio_post_load,
      .needed = ide_drive_pio_state_needed,
      .fields = (VMStateField[]) {
          VMSTATE_INT32(req_nb_sectors, IDEState),
          VMSTATE_VARRAY_INT32(io_buffer, IDEState, io_buffer_total_len, 1,
                               vmstate_info_uint8, uint8_t),
          VMSTATE_INT32(cur_io_buffer_offset, IDEState),
          VMSTATE_INT32(cur_io_buffer_len, IDEState),
          VMSTATE_UINT8(end_transfer_fn_idx, IDEState),
          VMSTATE_INT32(elementary_transfer_size, IDEState),
          VMSTATE_INT32(packet_transfer_size, IDEState),
          VMSTATE_END_OF_LIST()
      }
  };

  const VMStateDescription vmstate_ide_drive = {
      .name = "ide_drive",
      .version_id = 3,
      .minimum_version_id = 0,
      .post_load = ide_drive_post_load,
      .fields = (VMStateField[]) {
          .... several fields ....
          VMSTATE_END_OF_LIST()
      },
      .subsections = (const VMStateDescription*[]) {
          &vmstate_ide_drive_pio_state,
          NULL
      }
  };

Here we have a subsection for the pio state.  We only need to
save/send this state when we are in the middle of a pio operation
(that is what ``ide_drive_pio_state_needed()`` checks).  If DRQ_STAT is
not enabled, the values in those fields are garbage and don't need to
be sent.

Connecting subsections to properties
------------------------------------

Using a condition function that checks a 'property' to determine whether
to send a subsection allows backward migration compatibility when
new subsections are added, especially when combined with versioned
machine types.

For example:

a) Add a new property using ``DEFINE_PROP_BOOL`` - e.g. support-foo and
   default it to true.
b) Add an entry to the ``hw_compat_`` for the previous version that sets
   the property to false.
c) Add a static bool support_foo function that tests the property.
d) Add a subsection with a .needed set to the support_foo function.
e) (potentially) Add an outer pre_load that sets up a default value
   for 'foo' to be used if the subsection isn't loaded.

Now that subsection will not be generated when using an older
machine type and the migration stream will be accepted by older
QEMU versions.

Not sending existing elements
-----------------------------

Sometimes members of the VMState are no longer needed:

- removing them will break migration compatibility

- making them version dependent and bumping the version will break
  backward migration compatibility.

Adding a dummy field into the migration stream is normally the best way to
preserve compatibility.

If the field really does need to be removed then:

a) Add a new property/compatibility/function in the same way as for
   subsections above.
b) replace the VMSTATE macro with the _TEST version of the macro, e.g.:

   ``VMSTATE_UINT32(foo, barstruct)``

   becomes

   ``VMSTATE_UINT32_TEST(foo, barstruct, pre_version_baz)``

   Sometime in the future when we no longer care about the ancient
   versions these can be killed off.  Note that for backward compatibility
   it's important to fill in the structure with data that the destination
   will understand.

Any difference in the predicates on the source and destination will end up
with different fields being enabled and data being loaded into the wrong
fields; for this reason conditional fields like this are very fragile.

Versions
--------

Version numbers are intended for major incompatible changes to the
migration of a device, and using them breaks backward-migration
compatibility; in general most changes can be made by adding Subsections
(see above) or _TEST macros (see above) which won't break compatibility.

Each version is associated with a series of fields saved.  The
``save_state`` always saves the state as the newer version.  But
``load_state`` sometimes is able to load state from an older version.

You can see that there are two version fields:

- ``version_id``: the maximum version_id supported by VMState for that
  device.
- ``minimum_version_id``: the minimum version_id that VMState is able to
  understand for that device.

VMState is able to read versions from minimum_version_id to version_id.

There are *_V* forms of many ``VMSTATE_`` macros to load fields for
version dependent fields, e.g.

.. code:: c

   VMSTATE_UINT16_V(ip_id, Slirp, 2),

only loads that field for versions 2 and newer.

Saving state will always create a section with the 'version_id' value
and thus can't be loaded by any older QEMU.

Massaging functions
-------------------

Sometimes it is not enough to be able to save the state directly
from one structure; we need to fill in the correct values there.  One
example is when we are using kvm.  Before saving the cpu state, we
need to ask kvm to copy to QEMU the state that it is using.  And the
opposite when we are loading the state: we need a way to tell kvm to
load the state for the cpu that we have just loaded from the QEMUFile.

The functions to do that are inside a vmstate definition, and are called:

- ``int (*pre_load)(void *opaque);``

  This function is called before we load the state of one device.

- ``int (*post_load)(void *opaque, int version_id);``

  This function is called after we load the state of one device.

- ``int (*pre_save)(void *opaque);``

  This function is called before we save the state of one device.

- ``int (*post_save)(void *opaque);``

  This function is called after we save the state of one device
  (even upon failure, unless the call to pre_save returned an error).

Example: You can look at hpet.c, which uses the first three functions
to massage the state that is transferred.

The ``VMSTATE_WITH_TMP`` macro may be useful when the migration
data doesn't match the stored device data well; it allows an
intermediate temporary structure to be populated with migration
data and then transferred to the main structure.

If you use memory API functions that update memory layout outside
initialization (i.e., in response to a guest action), this is a strong
indication that you need to call these functions in a ``post_load``
callback.  Examples of such memory API functions are:

- memory_region_add_subregion()
- memory_region_del_subregion()
- memory_region_set_readonly()
- memory_region_set_nonvolatile()
- memory_region_set_enabled()
- memory_region_set_address()
- memory_region_set_alias_offset()

Iterative device migration
--------------------------

Some devices, such as RAM, Block storage or certain platform devices,
have large amounts of data that would mean that the CPUs would be
paused for too long if they were sent in one section.  For these
devices an *iterative* approach is taken.

The iterative devices generally don't use VMState macros
(although it may be possible in some cases) and instead use
qemu_put_*/qemu_get_* macros to read/write data to the stream.  Specialist
versions exist for high bandwidth IO.

An iterative device must provide:

- A ``save_setup`` function that initialises the data structures and
  transmits a first section containing information on the device.  In the
  case of RAM this transmits a list of RAMBlocks and sizes.

- A ``load_setup`` function that initialises the data structures on the
  destination.

- A ``state_pending_exact`` function that indicates how much more
  data we must save.  The core migration code will use this to
  determine when to pause the CPUs and complete the migration.

- A ``state_pending_estimate`` function that indicates how much more
  data we must save.  When the estimated amount is smaller than the
  threshold, we call ``state_pending_exact``.

- A ``save_live_iterate`` function should send a chunk of data until
  the point that stream bandwidth limits tell it to stop.  Each call
  generates one section.

- A ``save_live_complete_precopy`` function that must transmit the
  last section for the device containing any remaining data.

- A ``load_state`` function used to load sections generated by
  any of the save functions that generate sections.

- ``cleanup`` functions for both save and load that are called
  at the end of migration.

Note that the contents of the sections for iterative migration tend
to be open-coded by the devices; care should be taken in parsing
the results and structuring the stream to make them easy to validate.

Device ordering
---------------

There are cases in which the ordering of device loading matters; for
example in some systems where a device may assert an interrupt during
loading, if the interrupt controller is loaded later then it might lose
the state.

Some ordering is implicitly provided by the order in which the machine
definition creates devices, however this is somewhat fragile.

The ``MigrationPriority`` enum provides a means of explicitly enforcing
ordering.  Numerically higher priorities are loaded earlier.
The priority is set by setting the ``priority`` field of the top level
``VMStateDescription`` for the device.

Stream structure
================

The stream tries to be word and endian agnostic, allowing migration
between hosts of different characteristics running the same VM.

- Header

  - Magic
  - Version
  - VM configuration section

    - Machine type
    - Target page bits

- List of sections
  Each section contains a device, or one iteration of a device save.

  - section type
  - section id
  - ID string (First section of each device)
  - instance id (First section of each device)
  - version id (First section of each device)
  - <device data>
  - Footer mark

- EOF mark
- VM Description structure
  Consisting of a JSON description of the contents for analysis only

The ``device data`` in each section consists of the data produced
by the code described above.  For non-iterative devices they have a single
section; iterative devices have an initial and last section and a set
of parts in between.
Note that there is very little checking by the common code of the integrity
of the ``device data`` contents, that's up to the devices themselves.
The ``footer mark`` provides a little bit of protection for the case where
the receiving side reads more or less data than expected.

The ``ID string`` is normally unique, having been formed from a bus name
and device address; PCI devices and storage devices hung off PCI
controllers fit this pattern well.  Some devices are fixed single
instances (e.g. "pc-ram").  Others (especially either older devices or
system devices which for some reason don't have a bus concept) make use
of the ``instance id`` for otherwise identically named devices.

Return path
-----------

Only a unidirectional stream is required for normal migration, however a
``return path`` can be created when bidirectional communication is
desired.  This is primarily used by postcopy, but is also used to return
a success flag to the source at the end of migration.

``qemu_file_get_return_path(QEMUFile* fwdpath)`` gives the QEMUFile* for
the return path.

  Source side

     Forward path - written by migration thread
     Return path  - opened by main thread, read by return-path thread

  Destination side

     Forward path - read by main thread
     Return path  - opened by main thread, written by main thread AND
     postcopy thread (protected by rp_mutex)

Dirty limit
===========

The dirty limit, short for dirty page rate upper limit, is a new capability
introduced in the 8.1 QEMU release that uses a new algorithm based on the
KVM dirty ring to throttle down the guest during live migration.

The algorithm framework is as follows:

::

  ------------------------------------------------------------------------------
  main   --------------> throttle thread ------------> PREPARE(1) <--------
  thread  \                                                |              |
           \                                               |              |
            \                                              V              |
             -\                                        CALCULATE(2)       |
               \                                           |              |
                \                                          |              |
                 \                                         V              |
                  \                                    SET PENALTY(3) -----
                   -\                                      |
                     \                                     |
                      \                                    V
                       -> virtual CPU thread -------> ACCEPT PENALTY(4)
  ------------------------------------------------------------------------------

When the qmp command qmp_set_vcpu_dirty_limit is called for the first time,
the QEMU main thread starts the throttle thread.  The throttle thread, once
launched, executes the loop, which consists of three steps:

- PREPARE (1)

  The entire work of PREPARE (1) is preparation for the second stage,
  CALCULATE(2), as the name implies.  It involves preparing the dirty
  page rate value and the corresponding upper limit of the VM:
  the dirty page rate is calculated via the KVM dirty ring mechanism,
  which tells QEMU how many dirty pages a virtual CPU has had since the
  last KVM_EXIT_DIRTY_RING_FULL exception; the dirty page rate upper
  limit is specified by the caller, so it is fetched directly.

- CALCULATE (2)

  Calculate a suitable sleep period for each virtual CPU, which will be
  used to determine the penalty for the target virtual CPU.  The
  computation must be done carefully in order to reduce the dirty page
  rate progressively down to the upper limit without oscillation.  To
  achieve this, two strategies are provided: the first is to add or
  subtract sleep time based on the ratio of the current dirty page rate
  to the limit, which is used when the current dirty page rate is far
  from the limit; the second is to add or subtract a fixed time when
  the current dirty page rate is close to the limit.

- SET PENALTY (3)

  Set the sleep time for each virtual CPU that should be penalized based
  on the results of the calculation supplied by step CALCULATE (2).

After completing the three above stages, the throttle thread loops back
to step PREPARE (1) until the dirty limit is reached.
656 | ||
657 | On the other hand, each virtual CPU thread reads the sleep duration and | |
658 | sleeps in the path of the KVM_EXIT_DIRTY_RING_FULL exception handler, that | |
659 | is ACCEPT PENALTY (4). Virtual CPUs tied with writing processes will | |
660 | obviously exit to the path and get penalized, whereas virtual CPUs involved | |
661 | with read processes will not. | |
662 | ||
663 | In summary, thanks to the KVM dirty ring technology, the dirty limit | |
664 | algorithm will restrict virtual CPUs as needed to keep their dirty page | |
665 | rate inside the limit. This leads to more steady reading performance during | |
666 | live migration and can aid in improving large guest responsiveness. | |
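
The two CALCULATE (2) strategies described above can be sketched as follows
(a simplified illustration in Python, not QEMU's actual code; the ratio
threshold and step size are invented for the example):

```python
def adjust_sleep_us(sleep_us, dirty_rate, limit,
                    near_ratio=1.25, step_us=50):
    """Return a new per-vCPU sleep time moving dirty_rate towards limit.

    Far from the limit: scale the sleep time by the rate/limit ratio.
    Close to the limit: nudge by a small fixed step to avoid oscillation.
    """
    if dirty_rate <= 0:
        return 0
    ratio = dirty_rate / limit
    if ratio > near_ratio or ratio < 1 / near_ratio:
        # Strategy 1: proportional adjustment when far from the limit.
        return int(sleep_us * ratio) if sleep_us else step_us
    # Strategy 2: fixed-step adjustment when close to the limit.
    if dirty_rate > limit:
        return sleep_us + step_us
    return max(0, sleep_us - step_us)
```

Far from the limit the sleep time converges quickly; near the limit the
fixed step damps oscillation around the target rate.
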

Postcopy
========

'Postcopy' migration is a way to deal with migrations that refuse to converge
(or take too long to converge).  Its plus side is that there is an upper bound
on the amount of migration traffic and time it takes; the down side is that
during the postcopy phase, a failure of *either* side causes the guest to be
lost.

In postcopy the destination CPUs are started before all the memory has been
transferred, and accesses to pages that are yet to be transferred cause
a fault that's translated by QEMU into a request to the source QEMU.

Postcopy can be combined with precopy (i.e. normal migration) so that if precopy
doesn't finish in a given time the switch is made to postcopy.

Enabling postcopy
-----------------

To enable postcopy, issue this command on the monitor (both source and
destination) prior to the start of migration:

``migrate_set_capability postcopy-ram on``

The normal commands are then used to start a migration, which is still
started in precopy mode.  Issuing:

``migrate_start_postcopy``

will now cause the transition from precopy to postcopy.
It can be issued immediately after migration is started or any
time later on.  Issuing it after the end of a migration is harmless.

Blocktime is a postcopy live migration metric, intended to show how
long the vCPU was in a state of interruptible sleep due to page faults.
That metric is calculated both for all vCPUs as an overlapped value, and
separately for each vCPU.  These values are calculated on the destination
side.  To enable postcopy blocktime calculation, enter the following
command on the destination monitor:

``migrate_set_capability postcopy-blocktime on``

Postcopy blocktime can be retrieved by the query-migrate QMP command.
The postcopy-blocktime value of the QMP reply will show the overlapped
blocking time for all vCPUs, and postcopy-vcpu-blocktime will show the
list of blocking times per vCPU.
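
As an illustration, this is roughly what consuming those fields looks like
(the reply dict below is hand-written sample data, not output from a real
QEMU; only the two field names come from the text above):

```python
# Hand-written sample of the blocktime fields in a query-migrate reply;
# the millisecond values are invented for this example.
reply = {
    "postcopy-blocktime": 1142,                       # overlapped, all vCPUs
    "postcopy-vcpu-blocktime": [320, 190, 650, 412],  # one entry per vCPU
}

per_vcpu = reply["postcopy-vcpu-blocktime"]
# Find the vCPU that spent the longest time blocked on page faults.
worst = max(range(len(per_vcpu)), key=per_vcpu.__getitem__)
print(worst, per_vcpu[worst])  # vCPU 2, blocked 650 ms
```

Because the all-vCPU figure only counts time when at least one vCPU was
blocked, it is at most the sum of the per-vCPU values.
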

.. note::
  During the postcopy phase, the bandwidth limits set using
  ``migrate_set_parameter`` are ignored (to avoid delaying requested pages that
  the destination is waiting for).

Postcopy device transfer
------------------------

Loading of device data may cause the device emulation to access guest RAM
that may trigger faults that have to be resolved by the source.  As such,
the migration stream has to be able to respond with page data *during* the
device load, and hence the device data has to be read from the stream
completely before the device load begins, to free the stream up.  This is
achieved by 'packaging' the device data into a blob that's read in one go.

Source behaviour
----------------

Until postcopy is entered the migration stream is identical to normal
precopy, except for the addition of a 'postcopy advise' command at
the beginning, to tell the destination that postcopy might happen.
When postcopy starts the source sends the page discard data and then
forms the 'package' containing:

   - Command: 'postcopy listen'
   - The device state

     A series of sections, identical to the precopy stream's device state
     stream, containing everything except postcopiable devices (i.e. RAM)
   - Command: 'postcopy run'

The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
contents are formatted in the same way as the main migration stream.

During postcopy the source scans the list of dirty pages and sends them
to the destination without being requested (in much the same way as precopy),
however when a page request is received from the destination, the dirty page
scanning restarts from the requested location.  This causes requested pages
to be sent quickly, and also causes pages directly after the requested page
to be sent quickly in the hope that those pages are likely to be used
by the destination soon.
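
The effect of a page request on the scan can be sketched like this (a toy
model in Python, not QEMU's code; page indexes stand in for RAM offsets):

```python
def postcopy_send_order(dirty, requests):
    """Yield page indexes in the order a postcopy source might send them.

    `dirty` is a set of dirty page indexes; `requests` maps a send count
    to a page index urgently requested by the destination at that point.
    On a request, scanning restarts from the requested page so it (and
    its neighbours) go out quickly.  Simplified sketch only.
    """
    npages = max(dirty) + 1
    pos = 0
    sent = 0
    while dirty:
        if sent in requests:                 # a page request arrived
            pos = requests.pop(sent)         # restart the scan there
        while pos not in dirty:              # find the next dirty page
            pos = (pos + 1) % npages
        dirty.discard(pos)
        yield pos
        sent += 1
```

With dirty pages ``{0, 1, 2, 7, 8}`` and a request for page 7 arriving
after two pages were sent, the send order becomes 0, 1, 7, 8, 2: the
requested page and its neighbour jump ahead of page 2.
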

Destination behaviour
---------------------

Initially the destination looks the same as precopy, with a single thread
reading the migration stream; the 'postcopy advise' and 'discard' commands
are processed to change the way RAM is managed, but don't affect the stream
processing.

::

  ------------------------------------------------------------------------------
                        1      2   3      4      5       6      7
  main  -----DISCARD-CMD_PACKAGED ( LISTEN DEVICE DEVICE DEVICE RUN )
  thread                            |             |
                                    |           (page request)
                                    |             \___
                                    v                 \
  listen thread:                    --- page -- page -- page -- page -- page --
                                        a              b                c
  ------------------------------------------------------------------------------

- On receipt of ``CMD_PACKAGED`` (1)

  All the data associated with the package - the ( ... ) section in the diagram -
  is read into memory, and the main thread recurses into qemu_loadvm_state_main
  to process the contents of the package (2) which contains commands (3,6) and
  devices (4...)

- On receipt of 'postcopy listen' - 3 - (i.e. the 1st command in the package)

  a new thread (a) is started that takes over servicing the migration stream,
  while the main thread carries on loading the package.  It loads normal
  background page data (b) but if during a device load a fault happens (5)
  the returned page (c) is loaded by the listen thread allowing the main
  thread's device load to carry on.

- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)

  letting the destination CPUs start running.  At the end of the
  ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
  is no longer used by migration, while the listen thread carries on servicing
  page data until the end of migration.

Postcopy Recovery
-----------------

Compared to precopy, postcopy is special in its error handling.  When any
error happens (in this case, mostly network errors), QEMU cannot easily
fail the migration because VM data resides in both the source and the
destination QEMU instances.  Instead, when an issue happens, QEMU on both
sides will go into a paused state, and a recovery phase is needed to
continue the paused postcopy migration.

The recovery phase normally contains a few steps:

- When a network issue occurs, both QEMU instances will go into the
  PAUSED state

- When the network is recovered (or a new network is provided), the admin
  can setup the new channel for migration using the QMP command
  'migrate-recover' on the destination node, preparing for a resume.

- On the source host, the admin can continue the interrupted postcopy
  migration using the QMP command 'migrate' with the resume=true flag set.

- After the connection is re-established, QEMU will continue the postcopy
  migration on both sides.

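
The steps above can be written down as the two QMP commands involved (the
URIs are placeholders for whatever channel the new network provides; the
command and argument names are the ones described in the text):

```python
# QMP command issued on the destination once a working channel exists;
# "tcp:0:4444" is a placeholder listen address for this sketch.
recover_on_destination = {
    "execute": "migrate-recover",
    "arguments": {"uri": "tcp:0:4444"},
}

# QMP command issued on the source afterwards; note the resume flag,
# which distinguishes this from starting a fresh migration.
resume_on_source = {
    "execute": "migrate",
    "arguments": {"uri": "tcp:dest-host:4444", "resume": True},
}
```
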
During a paused postcopy migration, the VM can logically still continue
running, and it will not be impacted by accesses to pages that were
already migrated to the destination VM before the interruption happened.
However, if any of the missing pages get accessed on the destination VM,
the VM thread will be halted waiting for the page to be migrated; this
means it can stay halted until the recovery is complete.

The impact of accessing missing pages can vary with different
configurations of the guest.  For example, with async page fault
enabled, the guest can proactively schedule out the threads
accessing missing pages.

Postcopy states
---------------

Postcopy moves through a series of states (see postcopy_state) from
ADVISE->DISCARD->LISTEN->RUNNING->END

- Advise

  Set at the start of migration if postcopy is enabled, even
  if it hasn't had the start command; here the destination
  checks that its OS has the support needed for postcopy, and performs
  setup to ensure the RAM mappings are suitable for later postcopy.
  The destination will fail early in migration at this point if the
  required OS support is not present.
  (Triggered by reception of POSTCOPY_ADVISE command)

- Discard

  Entered on receipt of the first 'discard' command; prior to
  the first Discard being performed, hugepages are switched off
  (using madvise) to ensure that no new huge pages are created
  during the postcopy phase, and to cause any huge pages that
  have discards on them to be broken.

- Listen

  The first command in the package, POSTCOPY_LISTEN, switches
  the destination state to Listen, and starts a new thread
  (the 'listen thread') which takes over the job of receiving
  pages off the migration stream, while the main thread carries
  on processing the blob.  With this thread able to process page
  reception, the destination now 'sensitises' the RAM to detect
  any access to missing pages (on Linux using the 'userfault'
  system).

- Running

  POSTCOPY_RUN causes the destination to synchronise all
  state and start the CPUs and IO devices running.  The main
  thread now finishes processing the migration package and
  now carries on as it would for normal precopy migration
  (although it can't do the cleanup it would do as it
  finishes a normal migration).

- Paused

  Postcopy can run into a paused state (normally on both sides when it
  happens), where all threads will be temporarily halted, mostly due to
  network errors.  When reaching the paused state, migration will make
  sure the QEMU binaries on both sides maintain the data without
  corrupting the VM.  To continue the migration, the admin needs to fix
  the migration channel using the QMP command 'migrate-recover' on the
  destination node, then resume the migration using the QMP command
  'migrate' again on the source node, with the resume=true flag set.

- End

  The listen thread can now quit, and perform the cleanup of migration
  state; the migration is now complete.

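
The progression above, including the Paused detour, can be summarised as a
small transition table (an illustrative sketch, not QEMU's actual
postcopy_state code; in particular, treating Paused as reachable only from
Running is a simplification):

```python
# Legal transitions between the postcopy states described above.
TRANSITIONS = {
    "ADVISE":  {"DISCARD"},
    "DISCARD": {"LISTEN"},
    "LISTEN":  {"RUNNING"},
    "RUNNING": {"PAUSED", "END"},
    "PAUSED":  {"RUNNING"},   # after migrate-recover + resumed migrate
    "END":     set(),
}

def advance(state, nxt):
    """Move to the next state, rejecting illegal jumps."""
    if nxt not in TRANSITIONS[state]:
        raise ValueError(f"illegal postcopy transition {state} -> {nxt}")
    return nxt
```
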
Source side page map
--------------------

The 'migration bitmap' in postcopy is basically the same as in precopy,
where each bit indicates that a page is 'dirty' - i.e. needs
sending.  During the precopy phase this is updated as the CPU dirties
pages, however during postcopy the CPUs are stopped and nothing should
dirty anything any more.  Instead, dirty bits are cleared when the relevant
pages are sent during postcopy.

Postcopy with hugepages
-----------------------

Postcopy now works with hugetlbfs backed memory:

  a) The Linux kernel on the destination must support userfault on hugepages.
  b) The huge-page configuration on the source and destination VMs must be
     identical; i.e. RAMBlocks on both sides must use the same page size.
  c) Note that ``-mem-path /dev/hugepages``  will fall back to allocating normal
     RAM if it doesn't have enough hugepages, triggering (b) to fail.
     Using ``-mem-prealloc`` enforces the allocation using hugepages.
  d) Care should be taken with the size of hugepage used; postcopy with 2MB
     hugepages works well, however 1GB hugepages are likely to be problematic
     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
     and until the full page is transferred the destination thread is blocked.

Postcopy with shared memory
---------------------------

Postcopy migration with shared memory needs explicit support from the other
processes that share memory and from QEMU.  There are restrictions on the
type of memory that userfault can support when it is shared.

The Linux kernel userfault support works on ``/dev/shm`` memory and on ``hugetlbfs``
(although the kernel doesn't provide an equivalent to ``madvise(MADV_DONTNEED)``
for hugetlbfs, which may be a problem in some configurations).

The vhost-user code in QEMU supports clients that have postcopy support,
and the ``vhost-user-bridge`` (in ``tests/``) and the DPDK package have changes
to support postcopy.

The client needs to open a userfaultfd and register the areas
of memory that it maps with userfault.  The client must then pass the
userfaultfd back to QEMU together with a mapping table that allows
fault addresses in the client's address space to be converted back to
RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
fault-thread and page requests are made on behalf of the client by QEMU.
QEMU performs 'wake' operations on the client's userfaultfd to allow it
to continue after a page has arrived.

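
The mapping-table lookup can be sketched as follows (the tuple layout is
invented for this illustration; the real vhost-user messages carry
equivalent information but not this exact structure):

```python
def resolve_fault(addr, table):
    """Translate a client fault address into (ramblock, offset).

    `table` is a list of (client_base, length, ramblock_name) entries,
    standing in for the mapping the client hands to QEMU.
    """
    for base, length, name in table:
        if base <= addr < base + length:
            return name, addr - base
    raise KeyError(f"address {addr:#x} not in any registered region")
```

QEMU can then turn the (ramblock, offset) pair into a page request on the
migration stream, and wake the client's userfaultfd once the page arrives.
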
.. note::
  There are two future improvements that would be nice:

  a) Some way to make QEMU ignorant of the addresses in the client's
     address space
  b) Avoiding the need for QEMU to perform ufd-wake calls after the
     pages have arrived

Retro-fitting postcopy to existing clients is possible:

  a) A mechanism is needed for the registration with userfault as above,
     and the registration needs to be coordinated with the phases of
     postcopy.  In vhost-user extra messages are added to the existing
     control channel.
  b) Any thread that can block due to guest memory accesses must be
     identified and the implication understood; for example if the
     guest memory access is made while holding a lock then all other
     threads waiting for that lock will also be blocked.

Postcopy Preemption Mode
------------------------

Postcopy preempt is a capability introduced in the 8.0 QEMU release.  It
allows urgent pages (those whose page faults were explicitly requested by
the destination QEMU) to be sent in a separate preempt channel, rather
than queued in the background migration channel.  Anyone who cares about
latencies of page faults during a postcopy migration should enable this
feature.  By default, it's not enabled.

Firmware
========

Migration migrates the copies of RAM and ROM, and thus when running
on the destination it includes the firmware from the source.  Even after
resetting a VM, the old firmware is used.  Only once QEMU has been restarted
is the new firmware in use.

- Changes in firmware size can cause changes in the required RAMBlock size
  to hold the firmware and thus migration can fail.  In practice it's best
  to pad firmware images to convenient powers of 2 with plenty of space
  for growth.

- Care should be taken with device emulation code so that newer
  emulation code can work with older firmware to allow forward migration.

- Care should be taken with newer firmware so that backward migration
  to older systems with older device emulation code will work.

In some cases it may be best to tie specific firmware versions to specific
versioned machine types to cut down on the combinations that will need
support.  This is also useful when newer versions of firmware outgrow
the padding.


Backwards compatibility
=======================

How backwards compatibility works
---------------------------------

When we do migration, we have two QEMU processes: the source and the
target.  There are two cases: they are the same version or they are
different versions.  The easy case is when they are the same version.
The difficult one is when they are different versions.

There are two things that are different, but they have very similar
names and sometimes get confused:

- QEMU version
- machine type version

Let's start with a practical example, we start with:

- qemu-system-x86_64 (v5.2), from now on qemu-5.2.
- qemu-system-x86_64 (v5.1), from now on qemu-5.1.

Related to this are the "latest" machine types defined on each of
them:

- pc-q35-5.2 (newer one in qemu-5.2) from now on pc-5.2
- pc-q35-5.1 (newer one in qemu-5.1) from now on pc-5.1

First of all, migration is only supposed to work if you use the same
machine type on both source and destination.  The QEMU hardware
configuration also needs to be the same on source and destination.
Most aspects of the backend configuration can be changed at will,
except for a few cases where the backend features influence frontend
device feature exposure.  But that is not relevant for this section.

I am going to list the number of combinations that we can have.  Let's
start with the trivial ones, QEMU is the same on source and
destination:

1 - qemu-5.2 -M pc-5.2  -> migrates to -> qemu-5.2 -M pc-5.2

    This is the latest QEMU with the latest machine type.
    This has to work, and if it doesn't work it is a bug.

2 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    Exactly the same case as the previous one, but for 5.1.
    Nothing to see here either.

These are the easiest ones; we will not talk more about them in this
section.

Now we start with the more interesting cases.  Consider the case where
we have the same QEMU version on both sides (qemu-5.2), but instead of
using the latest machine type for that version (pc-5.2) we use one of
an older QEMU version, in this case pc-5.1.

3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    It needs to use the definition of pc-5.1 and the devices as they
    were configured on 5.1, but this should be easy in the sense that
    both sides are the same QEMU and both sides have exactly the same
    idea of what the pc-5.1 machine is.

4 - qemu-5.1 -M pc-5.2  -> migrates to -> qemu-5.1 -M pc-5.2

    This combination is not possible as qemu-5.1 doesn't understand the
    pc-5.2 machine type.  So nothing to worry about here.

Now come the interesting ones, when both QEMU processes are
different.  Notice also that the machine type needs to be pc-5.1,
because we have the limitation that qemu-5.1 doesn't know pc-5.2.  So
the possible cases are:

5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    This migration is known as newer to older.  When we are developing
    5.2 we need to take care not to break migration to qemu-5.1.
    Notice that we can't make updates to qemu-5.1 to understand
    whatever qemu-5.2 decides to change, so it is on the qemu-5.2 side
    to make the relevant changes.

6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    This migration is known as older to newer.  We need to make sure
    that we are able to receive migrations from qemu-5.1.  The problem
    is similar to the previous one.

If qemu-5.1 and qemu-5.2 were the same, there would not be any
compatibility problems.  But the reason that we create qemu-5.2 is to
get new features, devices, defaults, etc.

If we get a device that has a new feature, or change a default value,
we have a problem when we try to migrate between different QEMU
versions.

So we need a way to tell qemu-5.2 that when we are using machine type
pc-5.1, it needs to **not** use the feature, to be able to migrate to
real qemu-5.1.

And the equivalent part when migrating from qemu-5.1 to qemu-5.2:
qemu-5.2 has to expect that it is not going to get data for the new
feature, because qemu-5.1 doesn't know about it.

How do we tell QEMU about these device feature changes?  In
hw/core/machine.c:hw_compat_X_Y arrays.

If we change a default value, we need to put back the old value in
that array.  And the device, during initialization, needs to look at
that array to see what value it needs to get for that feature.  What
we put in that array is the value of a property.

To create a property for a device, we need to use one of the
DEFINE_PROP_*() macros.  See include/hw/qdev-properties.h to find the
macros that exist.  With it, we set the default value for that
property, and that is what it is going to get in the latest released
version.  But if we want a different value for a previous version, we
can change that in the hw_compat_X_Y arrays.

hw_compat_X_Y is an array of entries that have the format:

- name_device
- name_property
- value

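
A sketch of how such an array is consulted when a device property default
is resolved (Python pseudocode for illustration only; QEMU does this in C
via GlobalProperty handling, and the entry below is the real one discussed
later in this section):

```python
# Each entry: (device name, property name, value old machine types use).
hw_compat_5_1 = [
    ("virtio-blk-device", "num-queues", "1"),
]

def compat_default(table, device, prop, new_default):
    """Return the property default for a machine type using `table`."""
    for dev, p, value in table:
        if dev == device and p == prop:
            return value        # old machine type: pinned old value
    return new_default          # no entry: use the current default
```

So a pc-5.1 machine type resolves num-queues to "1", while pc-5.2 keeps
the new default.
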
Let's see a practical example.

In qemu-5.2 virtio-blk-device got multi queue support.  This is a
change that is not backward compatible.  In qemu-5.1 it has one
queue.  In qemu-5.2 it has the same number of queues as the number of
cpus in the system.

When we are doing migration, if we migrate from a device that has 4
queues to a device that has only one queue, we don't know where to
put the extra information for the other 3 queues, and we fail
migration.

Similar problem when we migrate from qemu-5.1 that has only one queue
to qemu-5.2: we only sent information for one queue, but the destination
has 4, and we have 3 queues that are not properly initialized and
anything can happen.

So, how can we address this problem?  Easy: just convince qemu-5.2
that when it is running pc-5.1, it needs to set the number of queues
for virtio-blk-devices to 1.

That way we fix cases 5 and 6.

5 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.1 -M pc-5.1

    qemu-5.2 -M pc-5.1 sets the number of queues to be 1.
    qemu-5.1 -M pc-5.1 expects the number of queues to be 1.

    correct.  migration works.

6 - qemu-5.1 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    qemu-5.1 -M pc-5.1 sets the number of queues to be 1.
    qemu-5.2 -M pc-5.1 expects the number of queues to be 1.

    correct.  migration works.

And now the other interesting case, case 3.  In this case we have:

3 - qemu-5.2 -M pc-5.1  -> migrates to -> qemu-5.2 -M pc-5.1

    Here we have the same QEMU on both sides.  So it doesn't matter a
    lot if we have set the number of queues to 1 or not, because
    they are the same.

    WRONG!

    Think about what happens if we do one of these double migrations:

    A -> migrates -> B -> migrates -> C

    where:

    A: qemu-5.1 -M pc-5.1
    B: qemu-5.2 -M pc-5.1
    C: qemu-5.2 -M pc-5.1

    migration A -> B is case 6, so the number of queues needs to be 1.

    migration B -> C is case 3, so we don't care.  But actually we
    care because we haven't started the guest in qemu-5.2, it came
    migrated from qemu-5.1.  So to be on the safe side, we need to
    always use a number of queues of 1 when we are using pc-5.1.

Now, how was this done in reality?  The following commit shows how it
was done::

  commit 9445e1e15e66c19e42bea942ba810db28052cd05
  Author: Stefan Hajnoczi <stefanha@redhat.com>
  Date:   Tue Aug 18 15:33:47 2020 +0100

  virtio-blk-pci: default num_queues to -smp N

The relevant parts for migration are::

  @@ -1281,7 +1284,8 @@ static Property virtio_blk_properties[] = {
   #endif
       DEFINE_PROP_BIT("request-merging", VirtIOBlock, conf.request_merging, 0,
                       true),
  -    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues, 1),
  +    DEFINE_PROP_UINT16("num-queues", VirtIOBlock, conf.num_queues,
  +                       VIRTIO_BLK_AUTO_NUM_QUEUES),
       DEFINE_PROP_UINT16("queue-size", VirtIOBlock, conf.queue_size, 256),

It changes the default value of num_queues.  But it fixes it for old
machine types to have the right value::

  @@ -31,6 +31,7 @@
   GlobalProperty hw_compat_5_1[] = {
       ...
  +    { "virtio-blk-device", "num-queues", "1"},
       ...
   };

A device with different features on both sides
----------------------------------------------

Let's assume that we are using the same QEMU binary on both sides,
just to make things easier.  But we have a device that has
different features on the two sides of the migration.  That can be
because the devices are different, because the kernel drivers of the
two devices have different features, or whatever.

How can we get this to work with migration?  The way to do it is
"theoretically" easy.  You take the features that the device has on
the source of the migration and the features that the device has on
the target of the migration, you compute the intersection of the
features of both sides, and that is the configuration with which you
should launch QEMU.

Notice that this is not completely related to QEMU.  The most
important thing here is that this should be handled by the managing
application that launches QEMU.  If QEMU is configured correctly, the
migration will succeed.

That said, actually doing it is complicated.  Almost all devices are
bad at being able to be launched with only some features enabled.
With one big exception: cpus.

You can read the documentation for QEMU x86 cpu models here:

https://qemu-project.gitlab.io/qemu/system/qemu-cpu-models.html

See that when they talk about migration they recommend that one
chooses the newest cpu model that is supported on all the hosts.

Let's say that we have:

Host A:

Device X has the feature Y

Host B:

Device X does not have the feature Y

If we try to migrate without any care from host A to host B, it will
fail because when migration tries to load the feature Y on the
destination, it will find that the hardware is not there.

Doing this would be the equivalent of doing with cpus:

Host A:

$ qemu-system-x86_64 -cpu host

Host B:

$ qemu-system-x86_64 -cpu host

When both hosts have different cpu features this is guaranteed to
fail.  Especially if Host B has fewer features than host A.  If host A
has fewer features than host B, sometimes it works.  The important
word in the last sentence is "sometimes".

So, forgetting about cpu models and continuing with the -cpu host
example, let's say that the difference between the cpus is that Host A
and B have the following features:

Features:   'pcid'  'stibp' 'taa-no'
Host A:        X       X
Host B:                        X

And we want to migrate between them, the way to configure both QEMU
cpus would be:

Host A:

$ qemu-system-x86_64 -cpu host,pcid=off,stibp=off

Host B:

$ qemu-system-x86_64 -cpu host,taa-no=off

And you would be able to migrate between them. It is the
responsibility of the management application or of the user to make
sure that the configuration is correct. QEMU doesn't know how to
check this kind of features in general.

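The rule used in the example above can be sketched in a few lines of
Python (illustrative only; ``flags_to_disable`` is a made-up helper,
not a QEMU or libvirt API): each host turns off the features the
other hosts lack, so that all of them expose exactly the common
subset.

```python
def flags_to_disable(host_features: dict) -> dict:
    """Per host, the features it must pass as "=off" so every host
    ends up exposing only the common feature subset."""
    common = set.intersection(*map(set, host_features.values()))
    return {host: sorted(set(feats) - common)
            for host, feats in host_features.items()}

# The feature sets from the pcid/stibp/taa-no example above.
hosts = {
    "A": {"pcid", "stibp"},
    "B": {"taa-no"},
}

for host, flags in sorted(flags_to_disable(hosts).items()):
    opts = "".join(f",{f}=off" for f in flags)
    print(f"Host {host}: $ qemu-system-x86_64 -cpu host{opts}")
```

Running it prints the same two command lines shown above, which is the
whole point: the "=off" list is just the set difference against the
common features.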
Notice that we don't recommend using -cpu host for migration. It is
used in this example because it makes the example simpler.

Other devices have worse control over individual features. If they
want to be able to migrate between hosts that expose different
features, the device needs a way to configure which ones it is going
to use.

In this section we have assumed that we are using the same QEMU binary
on both sides of the migration. If we use different QEMU versions,
then we also need to take into account all the other differences and
the examples become even more complicated.

How to mitigate when we have a backward compatibility error
-----------------------------------------------------------

We have broken migration for old machine types continuously during
development. But as soon as we find that there is a problem, we fix
it. The problem is what happens when we detect, after we have made a
release, that something has gone wrong.

Let's see how this works with an example.

After the release of qemu-8.0 we found a problem when doing a
migration of the machine type pc-7.2.

- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

  This migration works

- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

  This migration works

- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

  This migration fails

- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

  This migration fails

So clearly something fails when migrating between qemu-7.2 and
qemu-8.0 with machine type pc-7.2. The error messages and git bisect
pointed to the following commit.

In qemu-8.0 we got this commit::

  commit 010746ae1db7f52700cb2e2c46eb94f299cfa0d2
  Author: Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Date:   Thu Mar 2 13:37:02 2023 +0000

      hw/pci/aer: Implement PCI_ERR_UNCOR_MASK register

The relevant bits of the commit for our example are these::

  --- a/hw/pci/pcie_aer.c
  +++ b/hw/pci/pcie_aer.c
  @@ -112,6 +112,10 @@ int pcie_aer_init(PCIDevice *dev,

       pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
                    PCI_ERR_UNC_SUPPORTED);
  +    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
  +                 PCI_ERR_UNC_MASK_DEFAULT);
  +    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
  +                 PCI_ERR_UNC_SUPPORTED);

       pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
                    PCI_ERR_UNC_SEVERITY_DEFAULT);

The patch changes how we configure the PCI config space for AER. But
QEMU fails when the PCI config space is different between source and
destination.

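Why a config-space difference is fatal can be modelled roughly in
Python (a sketch inspired by the byte-by-byte check in QEMU's
``get_pci_config_device()``; the names and bit widths here are made
up): an incoming bit may differ from the destination's default only
if the destination considers it writable.

```python
def load_config(incoming: int, dest_default: int,
                wmask: int, w1cmask: int) -> bool:
    """Return True if the incoming config-space value is acceptable
    on the destination: differing bits must be guest-writable (wmask)
    or write-1-to-clear (w1cmask)."""
    diff = incoming ^ dest_default
    allowed = wmask | w1cmask
    return (diff & ~allowed) == 0

# qemu-8.0 writes PCI_ERR_UNCOR_MASK default bits that a 7.2
# destination neither has set nor marks writable:
src_config = 0b0110   # mask register value sent by the 8.0 source
dst_default = 0b0000  # 7.2 destination default, nothing writable
print(load_config(src_config, dst_default, wmask=0, w1cmask=0))
# -> False: the bits differ and are not writable on the destination
```

With the 8.0.1 fix applied on both sides (bits either set or writable
on both), the same check passes.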
The following commit shows how this got fixed::

  commit 5ed3dabe57dd9f4c007404345e5f5bf0e347317f
  Author: Leonardo Bras <leobras@redhat.com>
  Date:   Tue May 2 21:27:02 2023 -0300

      hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0

  [...]

The relevant parts of the fix in QEMU are as follows.

First, we create a new property for the device to be able to select
the old behaviour or the new behaviour::

  diff --git a/hw/pci/pci.c b/hw/pci/pci.c
  index 8a87ccc8b0..5153ad63d6 100644
  --- a/hw/pci/pci.c
  +++ b/hw/pci/pci.c
  @@ -79,6 +79,8 @@ static Property pci_props[] = {
       DEFINE_PROP_STRING("failover_pair_id", PCIDevice,
                          failover_pair_id),
       DEFINE_PROP_UINT32("acpi-index", PCIDevice, acpi_index, 0),
  +    DEFINE_PROP_BIT("x-pcie-err-unc-mask", PCIDevice, cap_present,
  +                    QEMU_PCIE_ERR_UNC_MASK_BITNR, true),
       DEFINE_PROP_END_OF_LIST()
  };

Notice that we enable the feature for new machine types.

Now let's see how the fix is done. This is going to depend on what
kind of breakage happens, but in this case it is quite simple::

  diff --git a/hw/pci/pcie_aer.c b/hw/pci/pcie_aer.c
  index 103667c368..374d593ead 100644
  --- a/hw/pci/pcie_aer.c
  +++ b/hw/pci/pcie_aer.c
  @@ -112,10 +112,13 @@ int pcie_aer_init(PCIDevice *dev, uint8_t cap_ver,
                        uint16_t offset,

       pci_set_long(dev->w1cmask + offset + PCI_ERR_UNCOR_STATUS,
                    PCI_ERR_UNC_SUPPORTED);
  -    pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
  -                 PCI_ERR_UNC_MASK_DEFAULT);
  -    pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
  -                 PCI_ERR_UNC_SUPPORTED);
  +
  +    if (dev->cap_present & QEMU_PCIE_ERR_UNC_MASK) {
  +        pci_set_long(dev->config + offset + PCI_ERR_UNCOR_MASK,
  +                     PCI_ERR_UNC_MASK_DEFAULT);
  +        pci_set_long(dev->wmask + offset + PCI_ERR_UNCOR_MASK,
  +                     PCI_ERR_UNC_SUPPORTED);
  +    }

       pci_set_long(dev->config + offset + PCI_ERR_UNCOR_SEVER,
                    PCI_ERR_UNC_SEVERITY_DEFAULT);

I.e. if the property bit is enabled, we configure the register as we
did for qemu-8.0. If the property bit is not set, we configure it as
it was in 7.2.

All that is missing now is to disable the feature for old machine
types::

  diff --git a/hw/core/machine.c b/hw/core/machine.c
  index 47a34841a5..07f763eb2e 100644
  --- a/hw/core/machine.c
  +++ b/hw/core/machine.c
  @@ -48,6 +48,7 @@ GlobalProperty hw_compat_7_2[] = {
       { "e1000e", "migrate-timadj", "off" },
       { "virtio-mem", "x-early-migration", "false" },
       { "migration", "x-preempt-pre-7-2", "true" },
  +    { TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
  };
  const size_t hw_compat_7_2_len = G_N_ELEMENTS(hw_compat_7_2);

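The compat-property mechanism can be modelled in a few lines of
Python (a sketch with made-up names, not QEMU code): device defaults
carry the new behaviour, and an ``hw_compat_*``-style table flips the
property back for older machine types at device-creation time.

```python
# New default, as set by DEFINE_PROP_BIT(..., true) in qemu-8.0.1.
DEVICE_DEFAULTS = {"x-pcie-err-unc-mask": True}

# Overrides applied to machine types that predate the feature,
# mirroring the hw_compat_7_2 entry above.
HW_COMPAT_7_2 = [("pci-device", "x-pcie-err-unc-mask", False)]

# Which compat tables each machine type applies (older types apply
# more tables; simplified to the one table we care about here).
MACHINE_COMPAT = {
    "pc-7.2": [HW_COMPAT_7_2],
    "pc-8.0": [],
}

def device_props(machine_type: str) -> dict:
    """Effective PCI device properties for a machine type."""
    props = dict(DEVICE_DEFAULTS)
    for table in MACHINE_COMPAT[machine_type]:
        for driver, prop, value in table:
            if driver == "pci-device":   # applies to all PCI devices
                props[prop] = value
    return props

print(device_props("pc-7.2"))  # feature off: behaves like qemu-7.2
print(device_props("pc-8.0"))  # feature on: new behaviour
```

This is why the fix is invisible to users: picking the machine type
picks the right value automatically.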
And now, when qemu-8.0.1 is released with this fix, all combinations
are going to work as expected.

- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2 (works)
- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2 (works)

So normality has been restored and everything is OK, right?

Not really. Now our matrix is much bigger. We started with the easy
cases; migration from the same version to the same version always
works:

- $ qemu-7.2 -M pc-7.2  ->  qemu-7.2 -M pc-7.2
- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2

Now the interesting cases, when the QEMU versions are different. The
first pair fails and we can do nothing about it: both versions are
already released and we can't change anything.

- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0 -M pc-7.2
- $ qemu-8.0 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

These two are the ones that work. The whole point of making the
change in the qemu-8.0.1 release was to fix this issue:

- $ qemu-7.2 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-7.2 -M pc-7.2

But now we find that qemu-8.0 can migrate to neither qemu-7.2 nor
qemu-8.0.1.

- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2
- $ qemu-8.0.1 -M pc-7.2  ->  qemu-8.0 -M pc-7.2

So, if we start a pc-7.2 machine in qemu-8.0 we can't migrate it to
anything except qemu-8.0.

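The whole matrix can be reproduced with a tiny Python model
(illustrative only; the real criterion is whether the device state in
the stream matches what the destination expects for the machine type,
not a boolean per version):

```python
def sends_mask(version: str) -> bool:
    """Does this QEMU version put the PCI_ERR_UNCOR_MASK state in a
    pc-7.2 machine's config space?"""
    return {
        "7.2": False,    # never had the feature
        "8.0": True,     # sets it even for pc-7.2 (the bug)
        "8.0.1": False,  # fixed: compat property turns it off
    }[version]

def migration_works(src: str, dst: str) -> bool:
    # The stream loads only if both sides agree on the config space.
    return sends_mask(src) == sends_mask(dst)

for src in ("7.2", "8.0", "8.0.1"):
    for dst in ("7.2", "8.0", "8.0.1"):
        status = "works" if migration_works(src, dst) else "fails"
        print(f"qemu-{src} -M pc-7.2 -> qemu-{dst} -M pc-7.2: {status}")
```

The output shows qemu-8.0 in its own island: it only interoperates
with itself for pc-7.2, exactly as described above.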
Can we do better?

Yes. If we know that we are going to do this migration:

- $ qemu-8.0 -M pc-7.2  ->  qemu-8.0.1 -M pc-7.2

We can launch the appropriate devices with::

  --device...,x-pcie-err-unc-mask=on

And now we can receive a migration from 8.0. And from now on, we can
do that migration to new machine types if we remember to enable that
property for pc-7.2. Notice that we really need to remember; it is
not enough to know that the source of the migration is qemu-8.0.
Think of this example:

  $ qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2 -> qemu-8.2 -M pc-7.2

In the second migration, the source is not qemu-8.0, but we still
have that "problem" and need that property enabled. Notice that we
need to keep this mark/property until the machine is rebooted. But it
is not a normal reboot (that doesn't reload QEMU); we need the
machine to be powered off and powered on with a fixed QEMU. From then
on we can use the proper machine type with no extra properties.