[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the balance between
backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the Storage documentation on how to add a storage. A backup storage
must be a file level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can save those backups later to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which generates a cron entry in `/etc/cron.d/vzdump`.

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then running a background Qemu process
to back up the VM data. After the backup is started, the VM resumes
full operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a Proxmox VE live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the Proxmox VE live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: Proxmox VE live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this results in a manifold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode and the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container is suspended to ensure data consistency.
Then, a temporary snapshot of the container's volumes is made, and the
snapshot content is archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points, you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up, as their content is managed outside the {pve} storage library.

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same
directory. The parameter `maxfiles` can be used to specify the
maximum number of backups to keep.

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:

`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as
access to storage can get congested.

To avoid this, you can set bandwidth limits for a backup job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option of the restore CLI commands
to set a restore job specific bandwidth limit. The limit is given in
KiB/s, which means that passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the available storage
bandwidth remains usable for the already running virtual guests, so that
the restore does not impact their operations.

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on the storage.)

Often, the generally available bandwidth of a storage stays the same over
time, so we implemented the possibility to set a default bandwidth limit
per configured storage. This can be done with:

----
# pvesm set STORAGEID --bwlimit KIBs
----


Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
set accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(only excludes tmp directories)

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Backup all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Backup more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Backup all guests excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]