git.proxmox.com Git - pve-qemu-kvm.git - debian/patches/0001-RFC-Efficient-VM-backup-for-qemu.patch
From 0177b535961d9e46420d474091b6dedbf7ee11a3 Mon Sep 17 00:00:00 2001
From: Dietmar Maurer <dietmar@proxmox.com>
Date: Tue, 13 Nov 2012 09:24:50 +0100
Subject: [PATCH v3 1/6] RFC: Efficient VM backup for qemu

This series provides a way to efficiently back up VMs.

* Backup to a single archive file
* Backup contains all data to restore the VM (full backup)
* Do not depend on storage type or image format
* Avoid use of temporary storage
* Store sparse images efficiently

The file docs/backup-rfc.txt contains more details.

Changes since v1:

* fix spelling errors
* move BackupInfo from BDS to BackupBlockJob
* introduce BackupDriver to allow more than one backup format
* vma: add support to store vmstate (size is not known in advance)
* add ability to store VM state

Changes since v2:

* BackupDriver: remove cancel_cb

Signed-off-by: Dietmar Maurer <dietmar@proxmox.com>
---
 docs/backup-rfc.txt | 119 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 119 insertions(+), 0 deletions(-)
 create mode 100644 docs/backup-rfc.txt
diff --git a/docs/backup-rfc.txt b/docs/backup-rfc.txt
new file mode 100644
index 0000000..5b4b3df
--- /dev/null
+++ b/docs/backup-rfc.txt
@@ -0,0 +1,119 @@
+RFC: Efficient VM backup for qemu
+
+=Requirements=
+
+* Backup to a single archive file
+* Backup needs to contain all data to restore the VM (full backup)
+* Do not depend on storage type or image format
+* Avoid use of temporary storage
+* Store sparse images efficiently
+
+=Introduction=
+
+Most VM backup solutions use some kind of snapshot to get a consistent
+VM view at a specific point in time. For example, we previously used
+LVM to create a snapshot of all used VM images, which were then copied
+into a tar file.
+
+That basically means that any data written during backup involves
+considerable overhead. For LVM we get the following steps:
+
+1.) read original data (VM write)
+2.) write original data into snapshot (VM write)
+3.) write new data (VM write)
+4.) read data from snapshot (backup)
+5.) write data from snapshot into tar file (backup)
+
+Another approach to backing up VM images is to create a new qcow2 image
+which uses the old image as its base. During backup, writes are redirected
+to the new image, so the old image represents a 'snapshot'. After
+backup, data needs to be copied back from the new image into the old
+one (commit). So a simple write during backup triggers the following
+steps:
+
+1.) write new data to new image (VM write)
+2.) read data from old image (backup)
+3.) write data from old image into tar file (backup)
+
+4.) read data from new image (commit)
+5.) write data to old image (commit)
+
+This is in fact the same overhead as before. Other tools like qemu
+livebackup produce similar overhead (2 reads, 3 writes).
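The overhead of both snapshot approaches can be tallied in a few lines (an illustrative model only, not qemu code; step names follow the lists above):

```python
# Count the reads/writes that one guest write triggers under each
# snapshot approach described above (illustrative model, not qemu code).

LVM_STEPS = [
    ("read",  "original data"),            # 1) VM write path
    ("write", "snapshot (copy-on-write)"), # 2) VM write path
    ("write", "new data"),                 # 3) VM write path
    ("read",  "snapshot"),                 # 4) backup
    ("write", "tar file"),                 # 5) backup
]

QCOW2_STEPS = [
    ("write", "new data to new image"),    # 1) VM write path
    ("read",  "old image"),                # 2) backup
    ("write", "tar file"),                 # 3) backup
    ("read",  "new image"),                # 4) commit
    ("write", "old image"),                # 5) commit
]

def tally(steps):
    """Return (reads, writes) for a list of (op, description) steps."""
    reads = sum(1 for op, _ in steps if op == "read")
    writes = sum(1 for op, _ in steps if op == "write")
    return reads, writes

print(tally(LVM_STEPS), tally(QCOW2_STEPS))  # (2, 3) (2, 3)
```

Both approaches cost 2 reads and 3 writes per intercepted guest write, which is the point of the comparison.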
+
+Some storage types/formats support internal snapshots using some kind
+of reference counting (rados, sheepdog, dm-thin, qcow2). It would be possible
+to use that for backups, but for now we want to be storage-independent.
+
+Note: It turned out that taking a qcow2 snapshot can take a very long
+time on larger files.
+
+=Make it more efficient=
+
+To be more efficient, we simply need to avoid unnecessary steps. The
+following steps are always required:
+
+1.) read old data before it gets overwritten
+2.) write that data into the backup archive
+3.) write new data (VM write)
+
+As you can see, this involves only one read and two writes.
+
+To make that work, our backup archive needs to be able to store image
+data 'out of order'. It is important to note that this will not work
+with traditional archive formats like tar.
+
+During backup we simply intercept writes, then read the existing data and
+store it directly into the archive. After that we can continue the
+write.
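This copy-before-write cycle can be sketched as a small model (hypothetical Python with illustrative names; the actual implementation hooks qemu's block layer in C):

```python
# Copy-before-write sketch of the scheme described above: before a guest
# write touches a cluster that is not yet in the archive, the old data
# is read and appended to the archive; only then does the write proceed.
# All names and the cluster size are illustrative, not qemu code.

CLUSTER = 4  # unrealistically small, for the example

class BackupJob:
    def __init__(self, image: bytearray):
        self.image = image    # the live VM image
        self.archive = []     # (offset, old_data) records, append-only
        self.saved = set()    # clusters already written to the archive

    def _save_cluster(self, c):
        if c not in self.saved:
            off = c * CLUSTER
            # 1) read old data, 2) write it into the backup archive
            self.archive.append((off, bytes(self.image[off:off + CLUSTER])))
            self.saved.add(c)

    def guest_write(self, offset, data):
        first = offset // CLUSTER
        last = (offset + len(data) - 1) // CLUSTER
        for c in range(first, last + 1):  # intercept: save touched clusters
            self._save_cluster(c)
        self.image[offset:offset + len(data)] = data  # 3) write new data

    def finish(self):
        # background pass: archive every cluster the guest never touched
        for c in range(len(self.image) // CLUSTER):
            self._save_cluster(c)
```

Since touched clusters reach the archive before untouched ones, the records naturally arrive 'out of order' relative to their image offsets.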
+
+==Advantages==
+
+* very good performance (1 read, 2 writes)
+* works on any storage type and image format
+* avoids use of temporary storage
+* we can define a new and simple archive format, which is able to
+  store sparse files efficiently
+
+Note: Storing sparse files is a mess with existing archive
+formats. For example, tar requires information about holes at the
+beginning of the archive.
+
+==Disadvantages==
+
+* we need to define a new archive format
+
+Note: Most existing archive formats are optimized to store small files
+including file attributes. We simply do not need that for VM archives.
+
+* archive contains data 'out of order'
+
+If you want to access image data in sequential order, you need to
+re-order the archive data. It would be possible to do that on the fly,
+using temporary files.
+
+Fortunately, a normal restore/extract works perfectly with 'out of
+order' data, because the target files are seekable.
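Restoring from out-of-order records is simple precisely because the target is seekable; a sketch, assuming each archive record is just an (offset, data) pair:

```python
# Replay (offset, data) records into a seekable target in whatever order
# they appear in the archive. The record layout is assumed here for
# illustration; the real archive format carries more metadata.
import io

def restore(records, target):
    for offset, data in records:
        target.seek(offset)
        target.write(data)

# records deliberately out of image order
records = [(8, b"CCCC"), (0, b"AAAA"), (4, b"BBBB")]
buf = io.BytesIO(bytes(12))
restore(records, buf)
assert buf.getvalue() == b"AAAABBBBCCCC"
```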
+
+* slow backup storage can slow down the VM during backup
+
+It is important to note that we only do sequential writes to the
+backup storage. Furthermore, one can compress the backup stream. IMHO,
+it is better to slow down the VM a bit. All other solutions create
+large amounts of temporary data during backup.
+
+=Archive format requirements=
+
+The basic requirement for such a new format is that we can store image
+data 'out of order'. It is also very likely that we have fewer than 256
+drives/images per VM, and we want to be able to store VM configuration
+files.
+
+We have defined a very simple format with those properties, see:
+
+docs/specs/vma_spec.txt
+
+Please let us know if you know of an existing format which provides the
+same functionality.
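For illustration only, a record header meeting these requirements could be as small as a one-byte device id (enough for fewer than 256 drives) plus a byte offset; this sketch is hypothetical and is not the actual layout from docs/specs/vma_spec.txt:

```python
# Hypothetical archive record: 1-byte device id (< 256 drives/images per
# VM), 8-byte little-endian byte offset, then one cluster of data.
# This sketches the requirements only; see docs/specs/vma_spec.txt for
# the real format.
import struct

HEADER = struct.Struct("<BQ")  # dev_id: uint8, offset: uint64

def pack_record(dev_id, offset, data):
    return HEADER.pack(dev_id, offset) + data

def unpack_record(blob, cluster_size):
    dev_id, offset = HEADER.unpack_from(blob)
    data = blob[HEADER.size:HEADER.size + cluster_size]
    return dev_id, offset, data

rec = pack_record(1, 65536, b"\x00" * 16)
assert unpack_record(rec, 16) == (1, 65536, b"\x00" * 16)
```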
+
+
--
1.7.2.5