[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the Storage documentation on how to add a storage. A backup storage
must be a file-level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can later save those backups to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which generates a job entry in `/etc/pve/jobs.cfg`. That entry
is in turn parsed and executed by the `pvescheduler` daemon. These jobs
use xref:chapter_calendar_events[calendar events] to define the schedule.

Since scheduled backups miss their execution if the host is offline or
`pvescheduler` is disabled at the scheduled time, it is possible to configure
the behaviour for catching up. By enabling the `Repeat missed` option
(`repeat-missed` in the config), you can tell the scheduler to run
missed jobs as soon as possible.
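
For illustration, a generated job entry in `/etc/pve/jobs.cfg` may look
similar to the following sketch (the job ID, storage name and guest selection
are hypothetical):

----
vzdump: backup-a1b2c3d4-9e0f
	schedule sat 02:00
	enabled 1
	mode snapshot
	storage my_backup_storage
	vmid 101,102
	repeat-missed 1
----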

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM resumes full
operation if it was previously running. Consistency is guaranteed by
using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

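For example, the following enables the guest agent for a VM and then runs a
snapshot-mode backup of it (the VMID and storage name are placeholders):

 # qm set 123 --agent enabled=1
 # vzdump 123 --mode snapshot --storage my_backup_storage
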
A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a manifold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode and the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the {pve} storage library.
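
As a sketch, the following runs a suspend-mode backup with a local temporary
directory, and enables the *Backup* option on a hypothetical volume mount
point so that it is included in future backups (the VMID, paths, volume and
storage names are placeholders):

 # vzdump 777 --mode suspend --tmpdir /var/lib/vz/tmp --storage my_backup_storage
 # pct set 777 --mp0 local-lvm:vm-777-disk-1,mp=/data,backup=1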

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel-Ziv-Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz and zstd, the number of
threads/cores can be adjusted; see the
xref:vzdump_configuration[configuration options] below.

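For example, to create a zstd-compressed backup using four threads (the VMID
is a placeholder; `--zstd 0` would instead use half of the available cores):

 # vzdump 777 --compress zstd --zstd 4
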
The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

[options="header"]
|===
|Extension |Compression algorithm
|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups; see xref:storage_pbs_encryption[the corresponding
section].

[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not consider
backups that are already covered; it only takes older backups into account.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.
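
Alternatively, retention can be configured on the storage itself via the CLI,
for example (the storage ID is a placeholder):

 # pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13,keep-yearly=9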

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and want the interval between stored backups to grow
gradually.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting `keep-last`
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant, as extra
manual backups are already covered by `keep-last`.

`keep-daily=13` - together with `keep-last`, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
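
Put together, these example settings could be passed to a backup run like this
(guest 777 is a placeholder):

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9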

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system, which means that manual
removal of a backup file itself is still possible for anyone with write access
to the underlying backup storage.

NOTE: Protected backups are ignored by pruning and do not count towards the
retention settings.

For filesystem-based storages, the protection is implemented via a sentinel file
`<backup-name>.protected`. For Proxmox Backup Server, it is handled on the
server side (available since Proxmox Backup Server version 2.1).

Use the storage option `max-protected-backups` to control how many protected
backups per guest are allowed on the storage. Use `-1` for unlimited. The
default is unlimited for users with `Datastore.Allocate` privilege and `5` for
other users.
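
For example, to allow up to ten protected backups per guest on a given storage
(the storage ID is a placeholder):

 # pvesm set my_backup_storage --max-protected-backups 10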

[[vzdump_notes]]
Backup Notes
------------

You can add notes to backups using the 'Edit Notes' button in the UI or via the
storage content API.

It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backups. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.

Currently supported are:

* `{{cluster}}` the cluster name, if any
* `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
* `{{vmid}}` the numerical VMID of the guest

When specified via API or CLI, the template needs to be a single line, where
newline and backslash need to be escaped as literal `\n` and `\\` respectively.
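
For example, a manual backup with a two-line note could be created like this
(the VMID is a placeholder):

 # vzdump 777 --notes-template 'Guest: {{guestname}} ({{vmid}})\nNode: {{node}}'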

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring from an archive:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Datastore.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. The limit is given in
KiB/s, which means that passing `10240` will limit the read speed of the
backup archive to 10 MiB/s, ensuring that the rest of the possible storage
bandwidth is available for the already running virtual guests, and thus the
restore does not impact their operations.
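
For example, to restore a VM with the read speed capped at 10 MiB/s (the
archive path and VMID are placeholders):

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240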

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Datastore.Allocate`
permissions on the storage.)

Since a storage's generally available bandwidth usually stays the same over
time, it is also possible to set a default bandwidth limit per configured
storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, during which the guest remains
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
argument of `qmrestore` causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.
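
A sketch of such an invocation (the backup volume ID, VMID and target storage
are placeholders):

 # qmrestore my-pbs:backup/vm/888/2024-01-01T00:00:00Z 601 --live-restore 1 --storage local-lvm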

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
data has to be loaded from the backup server (once loaded, it is immediately
available on the destination storage however, so accessing data twice only
incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
undefined state - that is, not all data might have been copied from the
backup, and it is _most likely_ not possible to keep any data that was written
during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers - once the OS
and necessary services have been started, the VM is operational, while the
background task continues copying seldom used data.

Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used to
open a file browser directly on the data contained in a backup. This feature
is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer shows
contained drive images, which can be opened to reveal a list of supported
storage technologies found on the drive. In the most basic case, this will be an
entry called 'part', representing a partition table, which contains entries for
each partition found on the drive. Note that for VMs, not all data might be
accessible (unsupported guest file systems, storage technologies, etc.).

Files and directories can be downloaded using the 'Download' button; directories
are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overridden on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with the option `--script`. This script is
called at various phases of the backup process, with parameters set
accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
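
As a minimal sketch, a hook script could simply log each phase it is invoked
for; the phase name (e.g. `job-start`, `backup-end`) is passed as the first
argument, while further arguments and environment variables depend on the
phase (see the example script for details). The script path below is
hypothetical:

----
#!/bin/sh
# Append every vzdump phase we are called for to a log file.
echo "vzdump hook: phase $1" >> /var/log/vzdump-hook.log
exit 0
----

 # vzdump 777 --script /usr/local/bin/vzdump-hook.sh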

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]