=========
Migration
=========

QEMU has code to load/save the state of the guest that it is running.
These are two complementary operations. Saving the state just does
that, saves the state for each device that the guest is running.
Restoring a guest is just the opposite operation: we need to load the
state of each device.

For this to work, QEMU has to be launched with the same arguments both
times. I.e. it can only restore the state into a guest that has the
same devices as the one whose state was saved (this last requirement
can be relaxed a bit, but for now we can consider that the
configuration has to be exactly the same).

Once we are able to save/restore a guest, a new piece of functionality
is requested: migration. This means that QEMU is able to start on one
machine and be "migrated" to another machine, i.e. moved to another
machine.

Next was the "live migration" functionality. This is important
because some guests run with a lot of state (especially RAM), and it
can take a while to move all of that state from one machine to
another. Live migration allows the guest to continue running while
the state is transferred; the guest only has to be stopped while the
last part of the state is transferred. Typically the time that the
guest is unresponsive during live migration is in the low hundreds of
milliseconds (note that this depends on a lot of things).

Transports
==========

The migration stream is normally just a byte stream that can be passed
over any transport.

- tcp migration: do the migration using tcp sockets
- unix migration: do the migration using unix sockets
- exec migration: do the migration passing the stream through the
  stdin/stdout of a process
- fd migration: do the migration using a file descriptor that is
  passed to QEMU. QEMU doesn't care how this file descriptor is opened.

In addition, support is included for migration using RDMA, which
transports the page data using ``RDMA``, where the hardware takes care of
transporting the pages, and the load on the CPU is much lower. While the
internals of RDMA migration are a bit different, this isn't really visible
outside the RAM migration code.

All these migration protocols use the same infrastructure to
save/restore device state. This infrastructure is shared with the
savevm/loadvm functionality.

Common infrastructure
=====================

The files, sockets or fds that carry the migration stream are abstracted by
the ``QEMUFile`` type (see `migration/qemu-file.h`). In most cases this
is connected to a subtype of ``QIOChannel`` (see `io/`).

Saving the state of one device
==============================

For most devices, the state is saved in a single call to the migration
infrastructure; these are *non-iterative* devices. The data for these
devices is sent at the end of precopy migration, when the CPUs are paused.
There are also *iterative* devices, which contain a very large amount of
data (e.g. RAM or large tables). See the iterative device section below.

General advice for device developers
------------------------------------

- The migration state saved should reflect the device being modelled rather
  than the way your implementation works. That way if you change the
  implementation later the migration stream will stay compatible. That
  model may include internal state that's not directly visible in a
  register.

- When saving a migration stream the device code may walk and check
  the state of the device. These checks might fail in various ways (e.g.
  discovering internal state is corrupt or that the guest has done
  something bad). Consider carefully before asserting/aborting at this
  point, since the normal response from users is that *migration broke
  their VM* since it had apparently been running fine until then. In these
  error cases, the device should log a message indicating the cause of the
  error, and should consider putting the device into an error state,
  allowing the rest of the VM to continue execution.

- The migration might happen at an inconvenient point,
  e.g. right in the middle of the guest reprogramming the device, during
  guest reboot or shutdown, or while the device is waiting for external IO.
  It's strongly preferred that migrations do not fail in this situation,
  since in cloud environments migrations might happen automatically to
  VMs that the administrator doesn't directly control.

- If you do need to fail a migration, ensure that sufficient information
  is logged to identify what went wrong.

- The destination should treat an incoming migration stream as hostile
  (which we do to varying degrees in the existing code). Check that offsets
  into buffers and the like can't cause overruns. Fail the incoming
  migration in the case of a corrupted stream like this.

- Take care with internal device state or behaviour that might become
  migration version dependent. For example, the order of PCI capabilities
  is required to stay constant across migration. Another example would
  be that a special case handled by subsections (see below) might become
  much more common if a default behaviour is changed.

- The state of the source should not be changed or destroyed by the
  outgoing migration. Migrations timing out or being failed by
  higher levels of management, or failures of the destination host, are
  not unusual, and in that case the VM is restarted on the source.
  Note that the management layer can validly revert the migration
  even though the QEMU level of migration has succeeded, as long as it
  does so before starting execution on the destination.

- Buses and devices should be able to explicitly specify addresses when
  instantiated, and management tools should use those. For example,
  when hot adding USB devices it's important to specify the ports
  and addresses, since implicit ordering based on the command line order
  may be different on the destination. This can result in the
  device state being loaded into the wrong device.

VMState
-------

Most device data can be described using the ``VMSTATE`` macros (mostly
defined in ``include/migration/vmstate.h``).

An example (from hw/input/pckbd.c):

.. code:: c

  static const VMStateDescription vmstate_kbd = {
      .name = "pckbd",
      .version_id = 3,
      .minimum_version_id = 3,
      .fields = (VMStateField[]) {
          VMSTATE_UINT8(write_cmd, KBDState),
          VMSTATE_UINT8(status, KBDState),
          VMSTATE_UINT8(mode, KBDState),
          VMSTATE_UINT8(pending, KBDState),
          VMSTATE_END_OF_LIST()
      }
  };

We are declaring the state with name "pckbd". The `version_id` is 3,
and the fields are 4 uint8_t in a KBDState structure. We registered
this with:

.. code:: c

    vmstate_register(NULL, 0, &vmstate_kbd, s);

For devices that are `qdev` based, we can register the device in the class
init function:

.. code:: c

    dc->vmsd = &vmstate_kbd_isa;

The VMState macros take care of ensuring that the device data section
is formatted portably (normally big endian) and make some compile time
checks against the types of the fields in the structures.

VMState macros can include other VMStateDescriptions to store substructures
(see ``VMSTATE_STRUCT_``), arrays (``VMSTATE_ARRAY_``) and variable length
arrays (``VMSTATE_VARRAY_``). Various other macros exist for special
cases.

Note that the format on the wire is still very raw; i.e. a VMSTATE_UINT32
ends up with a 4 byte big endian representation on the wire; in the future
it might be possible to use a more structured format.
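
As a concrete illustration of that raw wire format, here is a minimal
sketch in plain C (not QEMU code; ``put_be32``/``get_be32`` are
illustrative stand-ins for the ``qemu_put_be32()``-style helpers) showing
how a 32-bit field is laid out most-significant byte first, independent of
host endianness:

```c
#include <stdint.h>

/* Sketch: write a uint32_t to the stream buffer big endian,
 * the way a VMSTATE_UINT32 field ends up on the wire. */
static void put_be32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)v;
}

/* Sketch: the matching read on the destination side. */
static uint32_t get_be32(const uint8_t *buf)
{
    return ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8) | (uint32_t)buf[3];
}
```

Because both sides agree on this byte order, the shifts (rather than a
``memcpy`` of the host representation) are what make the stream portable.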

Legacy way
----------

This way is going to disappear as soon as all current users are ported
to VMSTATE; although converting existing code can be tricky, and thus
'soon' is relative.

Each device has to register two functions, one to save the state and
another to load the state back.

.. code:: c

  int register_savevm_live(DeviceState *dev,
                           const char *idstr,
                           int instance_id,
                           int version_id,
                           SaveVMHandlers *ops,
                           void *opaque);

Two functions in the ``ops`` structure are the `save_state`
and `load_state` functions. Notice that `load_state` receives a
version_id parameter to know what state format it is receiving.
`save_state` doesn't have a version_id parameter because it always
uses the latest version.

Note that because the VMState macros still save the data in a raw
format, in many cases it's possible to replace legacy code
with a carefully constructed VMState description that matches the
byte layout of the existing code.
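
The asymmetry between the two ops can be sketched as follows. This is a
toy model in plain C, not the real ``SaveVMHandlers``: a byte buffer
stands in for ``QEMUFile``, and the names are hypothetical.

```c
#include <stdint.h>

/* A toy device state where 'status' was added in version 2. */
typedef struct ToyState {
    uint8_t mode;
    uint8_t status;
} ToyState;

/* save_state always writes the latest (version 2) layout. */
static int toy_save_state(uint8_t *buf, const ToyState *s)
{
    buf[0] = s->mode;
    buf[1] = s->status;
    return 2;   /* bytes written */
}

/* load_state uses version_id to parse older streams correctly. */
static int toy_load_state(const uint8_t *buf, ToyState *s, int version_id)
{
    s->mode = buf[0];
    /* Version 1 streams predate 'status'; pick a sane default. */
    s->status = (version_id >= 2) ? buf[1] : 0;
    return 0;
}
```

The key point is that only the loader needs to understand multiple
formats; the saver has a single, current one.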

Changing migration data structures
----------------------------------

When we migrate a device, we save/load the state as a series
of fields. Sometimes, due to bugs or new functionality, we need to
change the state to store more/different information. Changing the
migration state saved for a device can break migration compatibility
unless care is taken to use the appropriate techniques. In general
QEMU tries to maintain forward migration compatibility (i.e. migrating
from QEMU n->n+1) and there are users who benefit from backward
compatibility as well.

Subsections
-----------

The most common structure change is adding new data, e.g. when adding
a newer form of device, or adding state that you previously
forgot to migrate. This is best solved using a subsection.

A subsection is "like" a device vmstate, but with a particularity: it
has a Boolean function that tells whether those values need to be sent
or not. If this function returns false, the subsection is not sent.
Subsections have a unique name, which is looked up on the receiving
side.

On the receiving side, if we find a subsection for a device that we
don't understand, we just fail the migration. If we understand all
the subsections, then we load the state successfully. There's no check
that a subsection is loaded, so a newer QEMU that knows about a subsection
can (with care) load a stream from an older QEMU that didn't send
the subsection.

If the new data is only needed in a rare case, then the subsection
can be made conditional on that case and the migration will still
succeed to older QEMUs in most cases. This is OK for data that's
critical, but in some use cases it's preferred that the migration
should succeed even with the data missing. To support this the
subsection can be connected to a device property and from there
to a versioned machine type.

One important note is that the post_load() function is called "after"
loading all subsections, because a newer subsection could change the
same value that it uses. A flag, and the combination of pre_load and
post_load, can be used to detect whether a subsection was loaded, and
to fall back on default behaviour when the subsection isn't present.

Example:

.. code:: c

  static bool ide_drive_pio_state_needed(void *opaque)
  {
      IDEState *s = opaque;

      return ((s->status & DRQ_STAT) != 0)
          || (s->bus->error_status & BM_STATUS_PIO_RETRY);
  }

  const VMStateDescription vmstate_ide_drive_pio_state = {
      .name = "ide_drive/pio_state",
      .version_id = 1,
      .minimum_version_id = 1,
      .pre_save = ide_drive_pio_pre_save,
      .post_load = ide_drive_pio_post_load,
      .needed = ide_drive_pio_state_needed,
      .fields = (VMStateField[]) {
          VMSTATE_INT32(req_nb_sectors, IDEState),
          VMSTATE_VARRAY_INT32(io_buffer, IDEState, io_buffer_total_len, 1,
                               vmstate_info_uint8, uint8_t),
          VMSTATE_INT32(cur_io_buffer_offset, IDEState),
          VMSTATE_INT32(cur_io_buffer_len, IDEState),
          VMSTATE_UINT8(end_transfer_fn_idx, IDEState),
          VMSTATE_INT32(elementary_transfer_size, IDEState),
          VMSTATE_INT32(packet_transfer_size, IDEState),
          VMSTATE_END_OF_LIST()
      }
  };

  const VMStateDescription vmstate_ide_drive = {
      .name = "ide_drive",
      .version_id = 3,
      .minimum_version_id = 0,
      .post_load = ide_drive_post_load,
      .fields = (VMStateField[]) {
          .... several fields ....
          VMSTATE_END_OF_LIST()
      },
      .subsections = (const VMStateDescription*[]) {
          &vmstate_ide_drive_pio_state,
          NULL
      }
  };

Here we have a subsection for the pio state. We only need to
save/send this state when we are in the middle of a pio operation
(that is what ``ide_drive_pio_state_needed()`` checks). If DRQ_STAT is
not enabled, the values in those fields are garbage and don't need to
be sent.

Connecting subsections to properties
------------------------------------

Using a condition function that checks a 'property' to determine whether
to send a subsection allows backward migration compatibility when
new subsections are added, especially when combined with versioned
machine types.

For example:

a) Add a new property using ``DEFINE_PROP_BOOL`` - e.g. support-foo and
   default it to true.
b) Add an entry to the ``HW_COMPAT_`` for the previous version that sets
   the property to false.
c) Add a static bool support_foo function that tests the property.
d) Add a subsection with a .needed set to the support_foo function.
e) (potentially) Add a pre_load that sets up a default value for 'foo'
   to be used if the subsection isn't loaded.

Now that subsection will not be generated when using an older
machine type and the migration stream will be accepted by older
QEMU versions.
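
The heart of this recipe is step (c). A minimal sketch in plain C
follows; ``FooDevState`` and ``support_foo_needed`` are hypothetical
names, and the struct stands in for a real qdev device with a
``DEFINE_PROP_BOOL`` property.

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy device state: 'support_foo' mirrors the support-foo property,
 * which a HW_COMPAT_ entry would force to false on older machine types. */
typedef struct FooDevState {
    bool support_foo;
    uint32_t foo;        /* new state guarded by the subsection */
} FooDevState;

/* The .needed callback: the subsection is only generated (and only
 * expected on the wire) when the property is true. */
static bool support_foo_needed(void *opaque)
{
    FooDevState *s = opaque;
    return s->support_foo;
}
```

With an older machine type the property is false, ``support_foo_needed()``
returns false, and the subsection is silently omitted, so the stream
remains acceptable to an older QEMU.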

Not sending existing elements
-----------------------------

Sometimes members of the VMState are no longer needed:

- removing them will break migration compatibility

- making them version dependent and bumping the version will break
  backward migration compatibility.

Adding a dummy field into the migration stream is normally the best
way to preserve compatibility.

If the field really does need to be removed then:

a) Add a new property/compatibility/function in the same way as for
   subsections above.
b) replace the VMSTATE macro with the _TEST version of the macro, e.g.:

   ``VMSTATE_UINT32(foo, barstruct)``

   becomes

   ``VMSTATE_UINT32_TEST(foo, barstruct, pre_version_baz)``

Sometime in the future, when we no longer care about the ancient
versions, these can be killed off.
Note that for backward compatibility it's important to fill in the
structure with data that the destination will understand.

Any difference in the predicates on the source and destination will end up
with different fields being enabled and data being loaded into the wrong
fields; for this reason conditional fields like this are very fragile.

Versions
--------

Version numbers are intended for major incompatible changes to the
migration of a device, and using them breaks backward-migration
compatibility; in general most changes can be made by adding Subsections
(see above) or _TEST macros (see above) which won't break compatibility.

Each version is associated with a series of fields saved. The
`save_state` always saves the state as the newest version. But
`load_state` sometimes is able to load state from an older version.

You can see that there are several version fields:

- `version_id`: the maximum version_id supported by VMState for that
  device.
- `minimum_version_id`: the minimum version_id that VMState is able to
  understand for that device.
- `minimum_version_id_old`: For devices that were not able to port to
  vmstate, we can assign a function that knows how to read this old state.
  This field is ignored if there is no `load_state_old` handler.

VMState is able to read versions from minimum_version_id to
version_id. And the function ``load_state_old()`` (if present) is able to
load state from minimum_version_id_old to minimum_version_id. This
function is deprecated and will be removed when no more users are left.

There are *_V* forms of many ``VMSTATE_`` macros to load fields for
version dependent fields, e.g.

.. code:: c

   VMSTATE_UINT16_V(ip_id, Slirp, 2),

only loads that field for versions 2 and newer.

Saving state will always create a section with the 'version_id' value
and thus can't be loaded by any older QEMU.

Massaging functions
-------------------

Sometimes, it is not enough to be able to save the state directly
from one structure; we need to fill in the correct values there. One
example is when we are using kvm. Before saving the cpu state, we
need to ask kvm to copy to QEMU the state that it is using. And the
opposite when we are loading the state: we need a way to tell kvm to
load the state for the cpu that we have just loaded from the QEMUFile.

The functions to do that are inside a vmstate definition, and are called:

- ``int (*pre_load)(void *opaque);``

  This function is called before we load the state of one device.

- ``int (*post_load)(void *opaque, int version_id);``

  This function is called after we load the state of one device.

- ``int (*pre_save)(void *opaque);``

  This function is called before we save the state of one device.

Example: You can look at hpet.c, which uses the three functions to
massage the state that is transferred.
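
A common shape for a post_load is to rebuild derived state from the
fields that were actually migrated, and to reject corrupt input. The
following is a hypothetical sketch in plain C (``TimerState`` and
``timer_post_load`` are illustrative, not hpet.c or QEMU API):

```c
#include <stdint.h>

/* Toy device: only period_ns is migrated; freq_hz is derived state
 * that must be recomputed after load rather than sent on the wire. */
typedef struct TimerState {
    uint64_t period_ns;   /* migrated field */
    uint64_t freq_hz;     /* derived, rebuilt in post_load */
} TimerState;

static int timer_post_load(void *opaque, int version_id)
{
    TimerState *s = opaque;
    (void)version_id;
    if (s->period_ns == 0) {
        /* Treat the stream as hostile: fail on impossible values. */
        return -1;
    }
    s->freq_hz = 1000000000ull / s->period_ns;
    return 0;
}
```

Keeping the derived value out of the stream keeps the migration format
tied to the device model, not to this particular implementation.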

The ``VMSTATE_WITH_TMP`` macro may be useful when the migration
data doesn't match the stored device data well; it allows an
intermediate temporary structure to be populated with migration
data and then transferred to the main structure.

If you use memory API functions that update memory layout outside
initialization (i.e., in response to a guest action), this is a strong
indication that you need to call these functions in a `post_load` callback.
Examples of such memory API functions are:

- memory_region_add_subregion()
- memory_region_del_subregion()
- memory_region_set_readonly()
- memory_region_set_enabled()
- memory_region_set_address()
- memory_region_set_alias_offset()

Iterative device migration
--------------------------

Some devices, such as RAM, Block storage or certain platform devices,
have large amounts of data that would mean that the CPUs would be
paused for too long if they were sent in one section. For these
devices an *iterative* approach is taken.

The iterative devices generally don't use VMState macros
(although it may be possible in some cases) and instead use
qemu_put_*/qemu_get_* macros to read/write data to the stream. Specialist
versions exist for high bandwidth IO.

An iterative device must provide:

- A ``save_setup`` function that initialises the data structures and
  transmits a first section containing information on the device. In the
  case of RAM this transmits a list of RAMBlocks and sizes.

- A ``load_setup`` function that initialises the data structures on the
  destination.

- A ``save_live_pending`` function that is called repeatedly and must
  indicate how much more data the device has to save. The core
  migration code will use this to determine when to pause the CPUs
  and complete the migration.

- A ``save_live_iterate`` function (called after ``save_live_pending``
  when there is significant data still to be sent). It should send
  a chunk of data until the point that stream bandwidth limits tell it
  to stop. Each call generates one section.

- A ``save_live_complete_precopy`` function that must transmit the
  last section for the device containing any remaining data.

- A ``load_state`` function used to load sections generated by
  any of the save functions that generate sections.

- ``cleanup`` functions for both save and load that are called
  at the end of migration.

Note that the contents of the sections for iterative migration tend
to be open-coded by the devices; care should be taken in parsing
the results and structuring the stream to make them easy to validate.
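
The control flow the core migration code drives through these handlers
can be sketched with a toy model (plain C; ``ToyDev`` and the threshold
logic are illustrative, not the real ``SaveVMHandlers`` interface):

```c
#include <stdbool.h>
#include <stdint.h>

/* A toy iterative "device": just a counter of bytes left to send. */
typedef struct ToyDev {
    uint64_t remaining;       /* bytes still dirty */
    uint64_t chunk;           /* bytes sent per iterate call */
    bool complete_called;
} ToyDev;

static uint64_t save_live_pending(ToyDev *d) { return d->remaining; }

/* Each call sends one chunk, i.e. one section of the stream. */
static void save_live_iterate(ToyDev *d)
{
    d->remaining -= (d->remaining < d->chunk) ? d->remaining : d->chunk;
}

/* Final section, sent with the CPUs paused. */
static void save_live_complete_precopy(ToyDev *d)
{
    d->remaining = 0;
    d->complete_called = true;
}

/* The driving loop: iterate while pending data exceeds what can be
 * sent within the allowed downtime, then complete. Returns the
 * number of iterate calls made. */
static int migrate(ToyDev *d, uint64_t threshold)
{
    int iterations = 0;
    while (save_live_pending(d) > threshold) {
        save_live_iterate(d);
        iterations++;
    }
    save_live_complete_precopy(d);
    return iterations;
}
```

In real migration the threshold is derived from the bandwidth estimate
and the configured maximum downtime rather than being a fixed byte count.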

Device ordering
---------------

There are cases in which the ordering of device loading matters; for
example in some systems where a device may assert an interrupt during
loading, if the interrupt controller is loaded later then it might lose
the state.

Some ordering is implicitly provided by the order in which the machine
definition creates devices; however this is somewhat fragile.

The ``MigrationPriority`` enum provides a means of explicitly enforcing
ordering. Numerically higher priorities are loaded earlier.
The priority is set by setting the ``priority`` field of the top level
``VMStateDescription`` for the device.
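
The "numerically higher loads earlier" rule amounts to sorting the
registered descriptions by descending priority. A self-contained sketch
(the enum values and ``ToyVMSD`` type are illustrative, not QEMU's
``MigrationPriority``):

```c
#include <stdlib.h>

/* Illustrative priorities: an interrupt controller gets a high value
 * so its state is present before devices that may raise interrupts. */
typedef enum {
    TOY_PRI_DEFAULT = 0,
    TOY_PRI_INTC    = 10,
} ToyMigrationPriority;

typedef struct ToyVMSD {
    const char *name;
    int priority;       /* corresponds to VMStateDescription.priority */
} ToyVMSD;

/* Higher priority sorts first, i.e. is loaded earlier. */
static int by_priority_desc(const void *a, const void *b)
{
    return ((const ToyVMSD *)b)->priority - ((const ToyVMSD *)a)->priority;
}

static void load_order(ToyVMSD *vmsds, size_t n)
{
    qsort(vmsds, n, sizeof(*vmsds), by_priority_desc);
}
```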

Stream structure
================

The stream tries to be word and endian agnostic, allowing migration
between hosts of different characteristics running the same VM.

- Header

  - Magic
  - Version
  - VM configuration section

    - Machine type
    - Target page bits

- List of sections
  Each section contains a device, or one iteration of a device save.

  - section type
  - section id
  - ID string (First section of each device)
  - instance id (First section of each device)
  - version id (First section of each device)
  - <device data>
  - Footer mark

- EOF mark
- VM Description structure
  Consisting of a JSON description of the contents for analysis only

The ``device data`` in each section consists of the data produced
by the code described above. For non-iterative devices they have a single
section; iterative devices have an initial and last section and a set
of parts in between.
Note that there is very little checking by the common code of the integrity
of the ``device data`` contents, that's up to the devices themselves.
The ``footer mark`` provides a little bit of protection for the case where
the receiving side reads more or less data than expected.

The ``ID string`` is normally unique, having been formed from a bus name
and device address; PCI devices and storage devices hung off PCI
controllers fit this pattern well. Some devices are fixed single instances
(e.g. "pc-ram"). Others (especially either older devices or system devices
which for some reason don't have a bus concept) make use of the
``instance id`` for otherwise identically named devices.
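
To make the section framing concrete, here is a deliberately simplified
sketch of writing a device's first-section header in big endian. The
exact byte layout here is illustrative only; the authoritative encoding
lives in the migration stream code (see ``migration/savevm.c``).

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Helper: write a 32-bit value big endian at 'off', return new offset. */
static size_t put_be32_at(uint8_t *buf, size_t off, uint32_t v)
{
    buf[off]     = (uint8_t)(v >> 24);
    buf[off + 1] = (uint8_t)(v >> 16);
    buf[off + 2] = (uint8_t)(v >> 8);
    buf[off + 3] = (uint8_t)v;
    return off + 4;
}

/* Simplified first-section header: section type, section id,
 * length-prefixed ID string, instance id, version id. */
static size_t write_section_start(uint8_t *buf, uint8_t type,
                                  uint32_t section_id, const char *idstr,
                                  uint32_t instance_id, uint32_t version_id)
{
    size_t len = strlen(idstr);
    size_t off = 0;

    buf[off++] = type;
    off = put_be32_at(buf, off, section_id);
    buf[off++] = (uint8_t)len;          /* ID string, length-prefixed */
    memcpy(buf + off, idstr, len);
    off += len;
    off = put_be32_at(buf, off, instance_id);
    return put_be32_at(buf, off, version_id);
}
```

Subsequent sections for the same device repeat only the section type and
section id, which is how iterative parts stay cheap.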

Return path
-----------

Only a unidirectional stream is required for normal migration; however a
``return path`` can be created when bidirectional communication is
desired. This is primarily used by postcopy, but is also used to return
a success flag to the source at the end of migration.

``qemu_file_get_return_path(QEMUFile* fwdpath)`` gives the QEMUFile* for
the return path.

  Source side

     Forward path - written by migration thread
     Return path  - opened by main thread, read by return-path thread

  Destination side

     Forward path - read by main thread
     Return path  - opened by main thread, written by main thread AND
     postcopy thread (protected by rp_mutex)

Postcopy
========

'Postcopy' migration is a way to deal with migrations that refuse to
converge (or take too long to converge). Its plus side is that there is
an upper bound on the amount of migration traffic and time it takes; the
down side is that during the postcopy phase, a failure of *either* side
or the network connection causes the guest to be lost.

In postcopy the destination CPUs are started before all the memory has
been transferred, and accesses to pages that are yet to be transferred
cause a fault that's translated by QEMU into a request to the source QEMU.

Postcopy can be combined with precopy (i.e. normal migration) so that if
precopy doesn't finish in a given time the switch is made to postcopy.

Enabling postcopy
-----------------

To enable postcopy, issue this command on the monitor (both source and
destination) prior to the start of migration:

``migrate_set_capability postcopy-ram on``

The normal commands are then used to start a migration, which is still
started in precopy mode. Issuing:

``migrate_start_postcopy``

will now cause the transition from precopy to postcopy.
It can be issued immediately after migration is started or any
time later on. Issuing it after the end of a migration is harmless.

Blocktime is a postcopy live migration metric, intended to show how
long the vCPU was in a state of interruptible sleep due to page faults.
That metric is calculated both for all vCPUs as an overlapped value, and
separately for each vCPU. These values are calculated on the destination
side. To enable postcopy blocktime calculation, enter the following
command on the destination monitor:

``migrate_set_capability postcopy-blocktime on``

Postcopy blocktime can be retrieved by the query-migrate qmp command.
The postcopy-blocktime value of the qmp command will show the overlapped
blocking time for all vCPUs, and postcopy-vcpu-blocktime will show the
list of blocking times per vCPU.

.. note::
  During the postcopy phase, the bandwidth limits set using
  ``migrate_set_speed`` are ignored (to avoid delaying requested pages
  that the destination is waiting for).

Postcopy device transfer
------------------------

Loading of device data may cause the device emulation to access guest RAM,
which may trigger faults that have to be resolved by the source; as such
the migration stream has to be able to respond with page data *during* the
device load, and hence the device data has to be read from the stream
completely before the device load begins, to free the stream up. This is
achieved by 'packaging' the device data into a blob that's read in one go.

Source behaviour
----------------

Until postcopy is entered the migration stream is identical to normal
precopy, except for the addition of a 'postcopy advise' command at
the beginning, to tell the destination that postcopy might happen.
When postcopy starts the source sends the page discard data and then
forms the 'package' containing:

   - Command: 'postcopy listen'
   - The device state

     A series of sections, identical to the precopy stream's device state
     stream, containing everything except postcopiable devices (i.e. RAM)
   - Command: 'postcopy run'

The 'package' is sent as the data part of a Command: ``CMD_PACKAGED``, and the
contents are formatted in the same way as the main migration stream.

During postcopy the source scans the list of dirty pages and sends them
to the destination without being requested (in much the same way as precopy);
however, when a page request is received from the destination, the dirty page
scanning restarts from the requested location. This causes requested pages
to be sent quickly, and also causes pages directly after the requested page
to be sent quickly, in the hope that those pages are likely to be used
by the destination soon.
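
The request-driven scan restart can be sketched as follows (an illustrative
model, not QEMU's code; ``postcopy_send_order`` and its inputs are invented
for the example):

```python
# Sketch of the source's dirty-page scan order: pages are streamed
# sequentially from a dirty bitmap, but an incoming page request moves
# the scan cursor to the requested page, so the faulting page - and its
# neighbours - go out first.

def postcopy_send_order(dirty, requests):
    """dirty: set of dirty page numbers; requests: {scan_step: page}.
    Returns the order pages are sent in."""
    npages = max(dirty) + 1
    cursor, sent, order, step = 0, set(), [], 0
    while len(sent) < len(dirty):
        if step in requests:            # page request arrives from destination
            cursor = requests[step]     # restart scanning there
        page = cursor % npages
        if page in dirty and page not in sent:
            order.append(page)
            sent.add(page)
        cursor += 1
        step += 1
    return order

# Pages 0..5 are dirty; the destination faults on page 4 at step 1.
print(postcopy_send_order({0, 1, 2, 3, 4, 5}, {1: 4}))
# [0, 4, 5, 1, 2, 3] - the requested page and its successors jump the queue
```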

Destination behaviour
---------------------

Initially the destination looks the same as precopy, with a single thread
reading the migration stream; the 'postcopy advise' and 'discard' commands
are processed to change the way RAM is managed, but don't affect the stream
processing.

::

  ------------------------------------------------------------------------------
                            1    2   3     4 5                      6   7
  main -----DISCARD-CMD_PACKAGED ( LISTEN  DEVICE     DEVICE DEVICE RUN )
  thread                             |       |
                                     |     (page request)
                                     |        \___
                                     v            \
  listen thread:                     --- page -- page -- page -- page -- page --

                                     a   b        c
  ------------------------------------------------------------------------------

- On receipt of ``CMD_PACKAGED`` (1)

  All the data associated with the package - the ( ... ) section in the
  diagram - is read into memory, and the main thread recurses into
  qemu_loadvm_state_main to process the contents of the package (2),
  which contains commands (3,6) and devices (4...)

- On receipt of 'postcopy listen' (3) (i.e. the 1st command in the package)

  a new thread (a) is started that takes over servicing the migration stream,
  while the main thread carries on loading the package.  It loads normal
  background page data (b), but if during a device load a fault happens (5)
  the returned page (c) is loaded by the listen thread, allowing the main
  thread's device load to carry on.

- The last thing in the ``CMD_PACKAGED`` is a 'RUN' command (6)

  letting the destination CPUs start running.  At the end of the
  ``CMD_PACKAGED`` (7) the main thread returns to normal running behaviour and
  is no longer used by migration, while the listen thread carries on servicing
  page data until the end of migration.

Postcopy states
---------------

Postcopy moves through a series of states (see postcopy_state) from
ADVISE->DISCARD->LISTEN->RUNNING->END

- Advise

  Set at the start of migration if postcopy is enabled, even
  if it hasn't had the start command; here the destination
  checks that its OS has the support needed for postcopy, and performs
  setup to ensure the RAM mappings are suitable for later postcopy.
  The destination will fail early in migration at this point if the
  required OS support is not present.
  (Triggered by reception of POSTCOPY_ADVISE command)

- Discard

  Entered on receipt of the first 'discard' command; prior to
  the first Discard being performed, hugepages are switched off
  (using madvise) to ensure that no new huge pages are created
  during the postcopy phase, and to cause any huge pages that
  have discards on them to be broken.

- Listen

  The first command in the package, POSTCOPY_LISTEN, switches
  the destination state to Listen, and starts a new thread
  (the 'listen thread') which takes over the job of receiving
  pages off the migration stream, while the main thread carries
  on processing the blob.  With this thread able to process page
  reception, the destination now 'sensitises' the RAM to detect
  any access to missing pages (on Linux using the 'userfault'
  system).

- Running

  POSTCOPY_RUN causes the destination to synchronise all
  state and start the CPUs and IO devices running.  The main
  thread now finishes processing the migration package and
  now carries on as it would for normal precopy migration
  (although it can't do the cleanup it would do as it
  finishes a normal migration).

- End

  The listen thread can now quit, and perform the cleanup of migration
  state; the migration is now complete.
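
The progression above can be modelled as a tiny state machine (a sketch of
the documented ADVISE->DISCARD->LISTEN->RUNNING->END order, not the actual
postcopy_state code):

```python
# Model of the documented postcopy state progression; each state may only
# advance to the next one in the sequence.
from enum import Enum

class PostcopyState(Enum):
    ADVISE = 1
    DISCARD = 2
    LISTEN = 3
    RUNNING = 4
    END = 5

NEXT = {
    PostcopyState.ADVISE: PostcopyState.DISCARD,
    PostcopyState.DISCARD: PostcopyState.LISTEN,
    PostcopyState.LISTEN: PostcopyState.RUNNING,
    PostcopyState.RUNNING: PostcopyState.END,
}

def advance(state: PostcopyState) -> PostcopyState:
    if state not in NEXT:
        raise ValueError(f"no transition out of {state.name}")
    return NEXT[state]

s = PostcopyState.ADVISE
while s is not PostcopyState.END:
    s = advance(s)
print(s.name)  # END
```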

Source side page maps
---------------------

The source side keeps two bitmaps during postcopy: the 'migration bitmap'
and the 'unsent map'.  The 'migration bitmap' is basically the same as in
the precopy case, and holds a bit to indicate that a page is 'dirty' -
i.e. needs sending.  During the precopy phase this is updated as the CPU
dirties pages; however, during postcopy the CPUs are stopped and nothing
should dirty anything any more.

The 'unsent map' is used for the transition to postcopy.  It is a bitmap that
has a bit cleared whenever a page is sent to the destination; however, during
the transition to postcopy mode it is combined with the migration bitmap
to form a set of pages that:

   a) Have been sent but then redirtied (which must be discarded)
   b) Have not yet been sent - which also must be discarded to cause any
      transparent huge pages built during precopy to be broken.

Note that the contents of the unsentmap are sacrificed during the calculation
of the discard set and thus aren't valid once in postcopy.  The dirtymap
is still valid and is used to ensure that no page is sent more than once.  Any
request for a page that has already been sent is ignored.  Duplicate requests
such as this can happen as a page is sent at about the same time the
destination accesses it.
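
With sets standing in for the bitmaps, the combination described above is
just a union (illustrative, not QEMU's bitmap code):

```python
# A page must be discarded on the destination if it is still dirty
# (sent-then-redirtied, or never sent) OR its unsent-map bit is still
# set (never sent at all) - i.e. the union of the two maps.

def discard_set(dirty, unsent):
    """Pages the destination must discard at the precopy->postcopy switch."""
    return dirty | unsent

# Page 2 was sent then redirtied; page 3 was never sent; pages 0 and 1
# were sent cleanly and can be kept.
dirty = {2, 3}   # still marked dirty in the migration bitmap
unsent = {3}     # unsent-map bit still set only for page 3
print(sorted(discard_set(dirty, unsent)))  # [2, 3]
```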

Postcopy with hugepages
-----------------------

Postcopy now works with hugetlbfs backed memory:

  a) The Linux kernel on the destination must support userfault on hugepages.
  b) The huge-page configuration on the source and destination VMs must be
     identical; i.e. RAMBlocks on both sides must use the same page size.
  c) Note that ``-mem-path /dev/hugepages`` will fall back to allocating normal
     RAM if it doesn't have enough hugepages, triggering (b) to fail.
     Using ``-mem-prealloc`` enforces the allocation using hugepages.
  d) Care should be taken with the size of hugepage used; postcopy with 2MB
     hugepages works well, however 1GB hugepages are likely to be problematic
     since it takes ~1 second to transfer a 1GB hugepage across a 10Gbps link,
     and until the full page is transferred the destination thread is blocked.
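
The ~1 second figure in (d) is straightforward arithmetic, sketched here
(raw transfer time only, ignoring protocol overhead):

```python
# Time to push one hugepage over the wire at a given link speed.

def page_transfer_seconds(page_bytes, link_bits_per_sec):
    return page_bytes * 8 / link_bits_per_sec

GiB = 1024 ** 3
MiB = 1024 ** 2
print(round(page_transfer_seconds(1 * GiB, 10e9), 2))  # 0.86 - a 1GB page at 10Gbps
print(round(page_transfer_seconds(2 * MiB, 10e9), 5))  # 0.00168 - a 2MB page
```

This is why 2MB hugepages behave well while a 1GB hugepage stalls the
destination for close to a second per fault on a 10Gbps link.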

Postcopy with shared memory
---------------------------

Postcopy migration with shared memory needs explicit support from the other
processes that share memory and from QEMU. There are restrictions on the
types of shared memory that userfault can support.

The Linux kernel userfault support works on `/dev/shm` memory and on
`hugetlbfs` (although the kernel doesn't provide an equivalent to
`madvise(MADV_DONTNEED)` for hugetlbfs, which may be a problem in some
configurations).

The vhost-user code in QEMU supports clients that have postcopy support,
and the `vhost-user-bridge` (in `tests/`) and the DPDK package have changes
to support postcopy.

The client needs to open a userfaultfd and register the areas
of memory that it maps with userfault.  The client must then pass the
userfaultfd back to QEMU together with a mapping table that allows
fault addresses in the client's address space to be converted back to
RAMBlock/offsets.  The client's userfaultfd is added to the postcopy
fault-thread and page requests are made on behalf of the client by QEMU.
QEMU performs 'wake' operations on the client's userfaultfd to allow it
to continue after a page has arrived.
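
The mapping-table translation can be sketched as below (the table layout
and the ``fault_to_ramblock`` helper are hypothetical, for illustration
only; they are not part of the vhost-user protocol):

```python
# The client registers regions of its own address space, and QEMU
# translates a fault address from the client back into a
# (RAMBlock, offset) pair so it can request the right page.

# Each entry: (client_base_address, length, ramblock_name, ramblock_offset)
MAPPING_TABLE = [
    (0x7f0000000000, 0x40000000, "pc.ram", 0x0),
    (0x7f8000000000, 0x10000000, "pc.ram", 0x40000000),
]

def fault_to_ramblock(fault_addr):
    """Translate a client-side fault address to (ramblock, offset)."""
    for base, length, name, rb_off in MAPPING_TABLE:
        if base <= fault_addr < base + length:
            return name, rb_off + (fault_addr - base)
    raise LookupError(f"fault address {fault_addr:#x} not in any region")

print(fault_to_ramblock(0x7f0000001000))  # ('pc.ram', 4096)
```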

.. note::
  There are two future improvements that would be nice:

  a) Some way to make QEMU ignorant of the addresses in the client's
     address space
  b) Avoiding the need for QEMU to perform ufd-wake calls after the
     pages have arrived

Retro-fitting postcopy to existing clients is possible:

a) A mechanism is needed for the registration with userfault as above,
   and the registration needs to be coordinated with the phases of
   postcopy.  In vhost-user extra messages are added to the existing
   control channel.
b) Any thread that can block due to guest memory accesses must be
   identified and the implication understood; for example if the
   guest memory access is made while holding a lock then all other
   threads waiting for that lock will also be blocked.

Firmware
========

Migration migrates the copies of RAM and ROM, and thus when running
on the destination it includes the firmware from the source. Even after
resetting a VM, the old firmware is used.  Only once QEMU has been restarted
is the new firmware in use.

- Changes in firmware size can cause changes in the required RAMBlock size
  to hold the firmware and thus migration can fail.  In practice it's best
  to pad firmware images to convenient powers of 2 with plenty of space
  for growth.
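
The power-of-2 padding suggestion amounts to rounding the image size up,
e.g. (a hypothetical helper, not a QEMU utility):

```python
# Round a firmware image size up to the next power of two, so later,
# slightly larger builds still fit in the same RAMBlock.

def padded_size(image_bytes: int) -> int:
    size = 1
    while size < image_bytes:
        size *= 2
    return size

print(padded_size(1_900_544))  # 2097152 - a ~1.8MB image padded to 2 MiB
```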

- Care should be taken with device emulation code so that newer
  emulation code can work with older firmware to allow forward migration.

- Care should be taken with newer firmware so that backward migration
  to older systems with older device emulation code will work.

In some cases it may be best to tie specific firmware versions to specific
versioned machine types to cut down on the combinations that will need
support.  This is also useful when newer versions of firmware outgrow
the padding.