(RDMA: Remote Direct Memory Access)
RDMA Live Migration Specification, Version # 1
==============================================
Wiki: https://wiki.qemu.org/Features/RDMALiveMigration
Github: git@github.com:hinesmr/qemu.git, 'rdma' branch
Copyright (C) 2013 Michael R. Hines <mrhines@us.ibm.com>
An RDMA migration is more deterministic under heavy load than a TCP one
because the RDMA I/O architecture reduces the number of interrupts and
data copies by bypassing the host networking stack. In particular, a
TCP-based migration, under certain types of memory-bound workloads, may
take an unpredictable amount of time to complete if the amount of memory
tracked during each live migration iteration round cannot keep pace with
the rate of dirty memory produced by the workload.
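To make the "keep pace" condition concrete, the standard pre-copy model
says each iteration must retransmit whatever memory was dirtied while the
previous iteration was being sent, so migration converges only when link
bandwidth exceeds the dirty rate. A minimal, self-contained sketch of this
estimate (illustrative only, not QEMU code; all names and numbers here are
made up):

    #include <stdio.h>

    /* Estimate pre-copy iterations assuming a constant dirty rate and
     * bandwidth; returns -1 if the migration can never converge. */
    static int iterations_to_converge(double ram_bytes,
                                      double bandwidth_Bps,
                                      double dirty_rate_Bps,
                                      double stop_copy_budget_bytes)
    {
        double remaining = ram_bytes;
        int iters = 0;

        if (dirty_rate_Bps >= bandwidth_Bps) {
            return -1; /* dirtying outpaces the link */
        }
        while (remaining > stop_copy_budget_bytes) {
            /* Memory dirtied while the current remainder is on the wire. */
            remaining = (remaining / bandwidth_Bps) * dirty_rate_Bps;
            iters++;
        }
        return iters;
    }

    int main(void)
    {
        /* 8 GiB guest, ~10 Gbps link, ~4 Gbps dirty rate, 32 MB budget. */
        printf("iterations: %d\n",
               iterations_to_converge(8.0e9, 1.25e9, 0.5e9, 32.0e6));
        return 0;
    }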
First, set the migration speed to match your hardware's capabilities:
QEMU Monitor Command:
$ migrate_set_parameter max-bandwidth 40g # or whatever is the MAX of your RDMA device
Next, on the destination machine, add the following to the QEMU command line:
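For example (host and port below are placeholders for the destination's
listening address; other command-line options are elided):

$ qemu-system-x86_64 [...options...] -incoming rdma:host:port

Finally, perform the actual migration on the source machine:

QEMU Monitor Command:
$ migrate -d rdma:host:port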
of the connection (described below).
All of the remaining command types (not including 'ready')
described above use the aforementioned two functions to do the hard work:
1. After connection setup, RAMBlock information is exchanged using
this protocol before the actual migration begins. This information
includes the offset and length of each RAMBlock so that both sides can
refer to the same memory regions during the transfer, as sketched below.
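As a rough picture of what this exchange carries, here is a sketch
modeled on the control-channel structures in QEMU's migration/rdma.c
(field and type names here are illustrative; consult the source for the
authoritative, packed layout):

    #include <stdint.h>

    /* Every control-channel message begins with a small header;
     * all fields travel in network byte order. */
    typedef struct {
        uint32_t len;    /* length of the data portion that follows */
        uint32_t type;   /* which command this message carries */
        uint32_t repeat; /* number of same-type commands in the data */
    } RdmaControlHeader;

    /* One entry of the RAMBlock description exchanged at setup time. */
    typedef struct {
        uint64_t remote_host_addr; /* destination virtual address */
        uint64_t offset;           /* block's offset within guest RAM */
        uint64_t length;           /* block length in bytes */
        uint32_t remote_rkey;      /* key used for RDMA writes into it */
    } RdmaRamBlockDesc;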
TODO:
=====
1. Currently, 'ulimit -l' mlock() limits as well as cgroups swap limits
   are not compatible with infiniband memory pinning and will result in
   an aborted migration (but with the source VM left unaffected); a
   pre-flight limit check is sketched after this list.
2. Use of the recent /proc/<pid>/pagemap would likely speed up
the use of KSM and ballooning while using RDMA.
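Regarding item 1 above, a process can at least detect a locked-memory
limit that is too small before the migration is attempted. A minimal
sketch using the standard POSIX getrlimit() interface (how many bytes
are actually needed depends on how much guest memory will be pinned):

    #include <stdio.h>
    #include <sys/resource.h>

    /* Warn if RLIMIT_MEMLOCK looks too small for RDMA memory pinning. */
    static void check_memlock_limit(rlim_t needed_bytes)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
            perror("getrlimit");
            return;
        }
        if (rl.rlim_cur != RLIM_INFINITY && rl.rlim_cur < needed_bytes) {
            fprintf(stderr, "warning: RLIMIT_MEMLOCK is %llu bytes; "
                    "memory pinning may fail (raise with 'ulimit -l')\n",
                    (unsigned long long)rl.rlim_cur);
        }
    }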