1 [[chapter_vzdump]]
2 ifdef::manvolnum[]
3 vzdump(1)
4 =========
5 :pve-toplevel:
6
7 NAME
8 ----
9
10 vzdump - Backup Utility for VMs and Containers
11
12
13 SYNOPSIS
14 --------
15
16 include::vzdump.1-synopsis.adoc[]
17
18
19 DESCRIPTION
20 -----------
21 endif::manvolnum[]
22 ifndef::manvolnum[]
23 Backup and Restore
24 ==================
25 :pve-toplevel:
26 endif::manvolnum[]
27
Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off between
the consistency of the backups and the downtime of the guest system.
33
34 {pve} backups are always full backups - containing the VM/CT
35 configuration and all data. Backups can be started via the GUI or via
36 the `vzdump` command-line tool.
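
For example, a manual snapshot-mode backup of a single guest to an already
configured backup storage could be started like this (the guest ID `100` and
the storage name `backup-store` are placeholders):

 # vzdump 100 --storage backup-store --mode snapshot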
37
38 .Backup Storage
39
40 Before a backup can run, a backup storage must be defined. Refer to the
41 xref:chapter_storage[storage documentation] on how to add a storage. It can
42 either be a Proxmox Backup Server storage, where backups are stored as
43 de-duplicated chunks and metadata, or a file-level storage, where backups are
44 stored as regular files. Using Proxmox Backup Server on a dedicated host is
45 recommended, because of its advanced features. Using an NFS server is a good
46 alternative. In both cases, you might want to save those backups later to a tape
47 drive, for off-site archiving.
48
49 .Scheduled Backup
50
51 Backup jobs can be scheduled so that they are executed automatically on specific
52 days and times, for selectable nodes and guest systems. See the
53 xref:vzdump_jobs[Backup Jobs] section for more.
54
55 Backup Modes
56 ------------
57
58 There are several ways to provide consistency (option `mode`),
59 depending on the guest type.
60
61 .Backup modes for VMs:
62
63 `stop` mode::
64
This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background QEMU process to
back up the VM data. After the backup is started, the VM goes to full
operation mode if it was previously running. Consistency is guaranteed
by using the live backup feature.
71
72 `suspend` mode::
73
This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of `snapshot` mode is recommended instead.
78
79 `snapshot` mode::
80
81 This mode provides the lowest operation downtime, at the cost of a
82 small inconsistency risk. It works by performing a {pve} live
83 backup, in which data blocks are copied while the VM is running. If the
84 guest agent is enabled (`agent: 1`) and running, it calls
85 `guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
86 consistency.
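
If the guest agent option is not yet enabled for a VM, it can be turned on,
for example, like this (the VM ID is a placeholder; the agent must also be
installed and running inside the guest):

 # qm set 123 --agent enabled=1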
87
88 A technical overview of the {pve} live backup for QemuServer can
89 be found online
90 https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].
91
92 NOTE: {pve} live backup provides snapshot-like semantics on any
93 storage type. It does not require that the underlying storage supports
94 snapshots. Also please note that since the backups are done via
95 a background QEMU process, a stopped VM will appear as running for a
96 short amount of time while the VM disks are being read by QEMU.
However, the VM itself is not booted; only its disk(s) are read.
98
99 .Backup modes for Containers:
100
101 `stop` mode::
102
103 Stop the container for the duration of the backup. This potentially
104 results in a very long downtime.
105
106 `suspend` mode::
107
108 This mode uses rsync to copy the container data to a temporary
109 location (see option `--tmpdir`). Then the container is suspended and
110 a second rsync copies changed files. After that, the container is
111 started (resumed) again. This results in minimal downtime, but needs
112 additional space to hold the container copy.
113 +
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a manifold performance
improvement. A local `tmpdir` is also required if you want to back up, in
suspend mode, a local container that uses ACLs and the backup storage is an
NFS server; see the example after this list.
120
121 `snapshot` mode::
122
123 This mode uses the snapshotting facilities of the underlying
124 storage. First, the container will be suspended to ensure data consistency.
125 A temporary snapshot of the container's volumes will be made and the
126 snapshot content will be archived in a tar file. Finally, the temporary
127 snapshot is deleted again.
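
For example, to back up a container in suspend mode to an NFS-backed storage
while keeping the temporary copy on a local file system, a command along these
lines could be used (the CT ID, the path and the storage name `nfs-backup` are
placeholders):

 # vzdump 105 --mode suspend --tmpdir /var/tmp --storage nfs-backup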
128
NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus from this requirement).
132
133 // see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points, you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up, as their content is managed outside the {pve} storage library.
138
139 VM Backup Fleecing
140 ~~~~~~~~~~~~~~~~~~
141
142 WARNING: Backup fleecing is still being worked on (also in upstream QEMU) and is
143 currently only a technology preview.
144
When a backup for a VM is started, QEMU will install a "copy-before-write"
filter in its block layer. This filter ensures that upon new guest writes, old
data still needed for the backup is sent to the backup target first. The guest
write blocks until this operation is finished, so guest IO to not-yet-backed-up
sectors will be limited by the speed of the backup target.
150
151 With backup fleecing, such old data is cached in a fleecing image rather than
152 sent directly to the backup target. This can help guest IO performance and even
153 prevent hangs in certain scenarios, at the cost of requiring more storage space.
154 Use e.g. `vzdump 123 --fleecing enabled=1,storage=local-lvm` to enable backup
155 fleecing, with fleecing images created on the storage `local-lvm`.
156
The fleecing storage should be a fast local storage, with thin provisioning and
discard support. Examples are LVM-thin, RBD, ZFS with `sparse 1` set in the
storage configuration, and many file-based storages. Ideally, the fleecing
storage is a dedicated storage, so that it running full does not affect other
guests, but only fails the backup. Parts of the fleecing image that have
already been backed up will be discarded to try to keep the space usage low.
163
164 For file-based storages that do not support discard (e.g. NFS before version
165 4.2), you should set `preallocation off` in the storage configuration. In
166 combination with `qcow2` (used automatically as the format for the fleecing
167 image when the storage supports it), this has the advantage that already
168 allocated parts of the image can be re-used later, which can still help save
169 quite a bit of space.
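
For example, assuming the fleecing images are placed on a file-based storage
named `fleecing-store` that does not support discard, the preallocation
behavior could be changed like this (the storage name is a placeholder):

----
# pvesm set fleecing-store --preallocation off
----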
170
171 WARNING: On a storage that's not thinly provisioned, e.g. LVM or ZFS without the
172 `sparse` option, the full size of the original disk needs to be reserved for the
173 fleecing image up-front. On a thinly provisioned storage, the fleecing image can
174 grow to the same size as the original image only if the guest re-writes a whole
175 disk while the backup is busy with another disk.
176
177 Backup File Names
178 -----------------
179
180 Newer versions of vzdump encode the guest type and the
181 backup time into the filename, for example
182
183 vzdump-lxc-105-2009_10_09-11_04_43.tar
184
That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options; see
the xref:vzdump_retention[Backup Retention] section below.
188
189 Backup File Compression
190 -----------------------
191
The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel–Ziv–Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].
198
199 Currently, Zstandard (zstd) is the fastest of these three algorithms.
200 Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
201 are more widely used and often installed by default.
202
You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz and zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
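
For example, to use zstd compression with four threads for all backups, lines
like the following could be added to `/etc/vzdump.conf` (the thread count is
only an illustration; see the option reference below for the exact semantics):

----
compress: zstd
zstd: 4
----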
208
209 The extension of the backup file name can usually be used to determine which
210 compression algorithm has been used to create the backup.
211
212 |===
213 |.zst | Zstandard (zstd) compression
214 |.gz or .tgz | gzip compression
215 |.lzo | lzo compression
216 |===
217
218 If the backup file name doesn't end with one of the above file extensions, then
219 it was not compressed by vzdump.
220
221 Backup Encryption
222 -----------------
223
224 For Proxmox Backup Server storages, you can optionally set up client-side
225 encryption of backups, see xref:storage_pbs_encryption[the corresponding section.]
226
227 [[vzdump_jobs]]
228 Backup Jobs
229 -----------
230
231 [thumbnail="screenshot/gui-cluster-backup-overview.png"]
232
Besides triggering a backup manually, you can also set up periodic jobs that
back up all, or a selection of, virtual guests to a storage. You can manage the
jobs in the UI under 'Datacenter' -> 'Backup' or via the `/cluster/backup` API
endpoint. Both will generate job entries in `/etc/pve/jobs.cfg`, which are
parsed and executed by the `pvescheduler` daemon.
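
A job entry in `/etc/pve/jobs.cfg` could look similar to the following sketch;
the job ID and all values are purely illustrative:

----
vzdump: backup-weekly-all
        schedule sat 02:00
        all 1
        enabled 1
        mode snapshot
        storage backup-store
----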
238
239 [thumbnail="screenshot/gui-cluster-backup-edit-01-general.png", float="left"]
240
241 A job is either configured for all cluster nodes or a specific node, and is
242 executed according to a given schedule. The format for the schedule is very
243 similar to `systemd` calendar events, see the
244 xref:chapter_calendar_events[calendar events] section for details. The
245 'Schedule' field in the UI can be freely edited, and it contains several
246 examples that can be used as a starting point in its drop-down list.
247
248 You can configure job-specific xref:vzdump_retention[retention options]
249 overriding those from the storage or node configuration, as well as a
250 xref:vzdump_notes[template for notes] for additional information to be saved
251 together with the backup.
252
Scheduled backups are missed if the host is offline, or if the pvescheduler is
disabled, at the scheduled time. Therefore, it is possible to configure how the
scheduler catches up on missed jobs. By enabling the `Repeat missed` option (in
the 'Advanced' tab in the UI, `repeat-missed` in the config), you can tell the
scheduler that it should run missed jobs as soon as possible.
258
259 [thumbnail="screenshot/gui-cluster-backup-edit-04-advanced.png"]
260
There are a few settings for tuning backup performance (some of which are
exposed in the 'Advanced' tab in the UI). The most notable is `bwlimit` for
limiting IO bandwidth. The number of threads used for the compressor can be
controlled with the `pigz` setting (when it replaces `gzip`) and with the
`zstd` setting. Furthermore, there are `ionice` (when the BFQ scheduler is
used) and, as part of the `performance` setting, `max-workers` (affects VM
backups only) and `pbs-entries-max` (affects container backups only). See the
xref:vzdump_configuration[configuration options] for details.
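
For example, a one-off backup with an IO bandwidth limit of 100 MiB/s and more
parallel workers for the VM backup could be started like this (the values are
for illustration only):

 # vzdump 777 --bwlimit 102400 --performance max-workers=8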
269
270 [[vzdump_retention]]
271 Backup Retention
272 ----------------
273
274 With the `prune-backups` option you can specify which backups you want to keep
275 in a flexible manner.
276
277 [thumbnail="screenshot/gui-cluster-backup-edit-02-retention.png"]
278
279 The following retention options are available:
280
281 `keep-all <boolean>` ::
282 Keep all backups. If this is `true`, no other options can be set.
283
284 `keep-last <N>` ::
285 Keep the last `<N>` backups.
286
287 `keep-hourly <N>` ::
288 Keep backups for the last `<N>` hours. If there is more than one
289 backup for a single hour, only the latest is kept.
290
291 `keep-daily <N>` ::
292 Keep backups for the last `<N>` days. If there is more than one
293 backup for a single day, only the latest is kept.
294
295 `keep-weekly <N>` ::
296 Keep backups for the last `<N>` weeks. If there is more than one
297 backup for a single week, only the latest is kept.
298
299 NOTE: Weeks start on Monday and end on Sunday. The software uses the
300 `ISO week date`-system and handles weeks at the end of the year correctly.
301
302 `keep-monthly <N>` ::
303 Keep backups for the last `<N>` months. If there is more than one
304 backup for a single month, only the latest is kept.
305
306 `keep-yearly <N>` ::
307 Keep backups for the last `<N>` years. If there is more than one
308 backup for a single year, only the latest is kept.
309
310 The retention options are processed in the order given above. Each option
311 only covers backups within its time period. The next option does not take care
312 of already covered backups. It will only consider older backups.
313
314 Specify the retention options you want to use as a
315 comma-separated list, for example:
316
317 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9
318
319 While you can pass `prune-backups` directly to `vzdump`, it is often more
320 sensible to configure the setting on the storage level, which can be done via
321 the web interface.
322
323 NOTE: The old `maxfiles` option is deprecated and should be replaced either by
324 `keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
325 `keep-all`.
326
327
328 Prune Simulator
329 ~~~~~~~~~~~~~~~
330
331 You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
332 of the Proxmox Backup Server documentation] to explore the effect of different
333 retention options with various backup schedules.
334
335 Retention Settings Example
336 ~~~~~~~~~~~~~~~~~~~~~~~~~~
337
The backup frequency and retention of old backups may depend on how often data
changes, and how important an older state may be, in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.
342
For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between stored backups gradually grows;
a combined storage-level example is shown after the list.
345
346 `keep-last=3` - even if only daily backups are taken, an admin may want to
347 create an extra one just before or after a big upgrade. Setting keep-last
348 ensures this.
349
`keep-hourly` is not set - for daily backups this is not relevant, as extra
manual backups are already covered by keep-last.
352
353 `keep-daily=13` - together with keep-last, which covers at least one
354 day, this ensures that you have at least two weeks of backups.
355
356 `keep-weekly=8` - ensures that you have at least two full months of
357 weekly backups.
358
359 `keep-monthly=11` - together with the previous keep settings, this
360 ensures that you have at least a year of monthly backups.
361
362 `keep-yearly=9` - this is for the long term archive. As you covered the
363 current year with the previous options, you would set this to nine for the
364 remaining ones, giving you a total of at least 10 years of coverage.
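
Putting these values together, the retention could, for example, be configured
on the storage level like this (the storage name `backup-store` is a
placeholder):

 # pvesm set backup-store --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9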
365
366 We recommend that you use a higher retention period than is minimally required
367 by your environment; you can always reduce it if you find it is unnecessarily
368 high, but you cannot recreate backups once they have been removed.
369
370 [[vzdump_protection]]
371 Backup Protection
372 -----------------
373
You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system; this means that manual removal
of a backup file itself is still possible for anyone with write access to the
underlying backup storage.
379
380 NOTE: Protected backups are ignored by pruning and do not count towards the
381 retention settings.
382
383 For filesystem-based storages, the protection is implemented via a sentinel file
384 `<backup-name>.protected`. For Proxmox Backup Server, it is handled on the
385 server side (available since Proxmox Backup Server version 2.1).
386
387 Use the storage option `max-protected-backups` to control how many protected
388 backups per guest are allowed on the storage. Use `-1` for unlimited. The
389 default is unlimited for users with `Datastore.Allocate` privilege and `5` for
390 other users.
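
For example, to allow up to ten protected backups per guest on a storage, the
option could be set like this (the storage name is a placeholder):

 # pvesm set backup-store --max-protected-backups 10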
391
392 [[vzdump_notes]]
393 Backup Notes
394 ------------
395
396 You can add notes to backups using the 'Edit Notes' button in the UI or via the
397 storage content API.
398
399 [thumbnail="screenshot/gui-cluster-backup-edit-03-template.png"]
400
It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backups. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.
405
406 Currently supported are:
407
408 * `{{cluster}}` the cluster name, if any
409 * `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
411 * `{{vmid}}` the numerical VMID of the guest
412
413 When specified via API or CLI, it needs to be a single line, where newline and
414 backslash need to be escaped as literal `\n` and `\\` respectively.
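
For example, a manual backup with a notes template combining these variables
could look like this (the template text is just an example):

 # vzdump 777 --notes-template "{{guestname}} ({{vmid}}) on {{node}}"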
415
416 [[vzdump_restore]]
417 Restore
418 -------
419
420 A backup archive can be restored through the {pve} web GUI or through the
421 following CLI tools:
422
423
424 `pct restore`:: Container restore utility
425
426 `qmrestore`:: Virtual Machine restore utility
427
428 For details see the corresponding manual pages.
429
430 Bandwidth Limit
431 ~~~~~~~~~~~~~~~
432
433 Restoring one or more big backups may need a lot of resources, especially
434 storage bandwidth for both reading from the backup storage and writing to
435 the target storage. This can negatively affect other virtual guests as access
436 to storage can get congested.
437
To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring an archive:
440
441 * per-restore limit: denotes the maximal amount of bandwidth for
442 reading from a backup archive
443
444 * per-storage write limit: denotes the maximal amount of bandwidth used for
445 writing to a specific storage
446
The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate' permissions on the affected storage.
451
You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore-job-specific bandwidth limit. The limit is given in KiB/s,
which means that passing `10240' will limit the read speed of the backup
archive to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the restore
does not impact their operations.
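
For example, restoring a container with the read speed limited to 10 MiB/s
could look like this (the archive path and the CT ID are placeholders):

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar --bwlimit 10240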
458
459 NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
460 a specific restore job. This can be helpful if you need to restore a very
461 important virtual guest as fast as possible. (Needs `Data.Allocate'
462 permissions on storage)
463
Most of the time, the generally available bandwidth of your storage stays the
same, so we implemented the possibility to set a default bandwidth limit per
configured storage. This can be done with:
467
468 ----
469 # pvesm set STORAGEID --bwlimit restore=KIBs
470 ----
471
472 Live-Restore
473 ~~~~~~~~~~~~
474
475 Restoring a large backup can take a long time, in which a guest is still
476 unavailable. For VM backups stored on a Proxmox Backup Server, this wait
477 time can be mitigated using the live-restore option.
478
479 Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
480 argument of `qmrestore` causes the VM to start as soon as the restore
481 begins. Data is copied in the background, prioritizing chunks that the VM is
482 actively accessing.
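
For example, a live-restore of a VM backup stored on a Proxmox Backup Server
could be started along these lines (the backup volume ID, the target VM ID and
the target storage are placeholders):

----
# qmrestore pbs-store:backup/vm/888/2024-01-01T00:00:00Z 601 --live-restore 1 --storage local-lvm
----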
483
484 Note that this comes with two caveats:
485
486 * During live-restore, the VM will operate with limited disk read speeds, as
487 data has to be loaded from the backup server (once loaded, it is immediately
488 available on the destination storage however, so accessing data twice only
489 incurs the penalty the first time). Write speeds are largely unaffected.
490 * If the live-restore fails for any reason, the VM will be left in an
491 undefined state - that is, not all data might have been copied from the
492 backup, and it is _most likely_ not possible to keep any data that was written
493 during the failed restore operation.
494
495 This mode of operation is especially useful for large VMs, where only a small
496 amount of data is required for initial operation, e.g. web servers - once the OS
497 and necessary services have been started, the VM is operational, while the
498 background task continues copying seldom used data.
499
500 Single File Restore
501 ~~~~~~~~~~~~~~~~~~~
502
503 The 'File Restore' button in the 'Backups' tab of the storage GUI can be used to
504 open a file browser directly on the data contained in a backup. This feature
505 is only available for backups on a Proxmox Backup Server.
506
507 For containers, the first layer of the file tree shows all included 'pxar'
508 archives, which can be opened and browsed freely. For VMs, the first layer shows
509 contained drive images, which can be opened to reveal a list of supported
510 storage technologies found on the drive. In the most basic case, this will be an
511 entry called 'part', representing a partition table, which contains entries for
512 each partition found on the drive. Note that for VMs, not all data might be
513 accessible (unsupported guest file systems, storage technologies, etc...).
514
515 Files and directories can be downloaded using the 'Download' button, the latter
516 being compressed into a zip archive on the fly.
517
518 To enable secure access to VM images, which might contain untrusted data, a
519 temporary VM (not visible as a guest) is started. This does not mean that data
520 downloaded from such an archive is inherently safe, but it avoids exposing the
521 hypervisor system to danger. The VM will stop itself after a timeout. This
522 entire process happens transparently from a user's point of view.
523
NOTE: For troubleshooting purposes, each temporary VM instance generates a log
file in `/var/log/proxmox-backup/file-restore/`. The log file might contain
additional information in case an attempt to restore individual files or to
access file systems contained in a backup archive fails.
528
529 [[vzdump_configuration]]
530 Configuration
531 -------------
532
533 Global configuration is stored in `/etc/vzdump.conf`. The file uses a
534 simple colon separated key/value format. Each line has the following
535 format:
536
537 OPTION: value
538
Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.
543
544 We currently support the following options:
545
546 include::vzdump.conf.5-opts.adoc[]
547
548
549 .Example `vzdump.conf` Configuration
550 ----
551 tmpdir: /mnt/fast_local_disk
552 storage: my_backup_storage
553 mode: snapshot
554 bwlimit: 10000
555 ----
556
557 Hook Scripts
558 ------------
559
560 You can specify a hook script with option `--script`. This script is
561 called at various phases of the backup process, with parameters
562 accordingly set. You can find an example in the documentation
563 directory (`vzdump-hook-script.pl`).
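
For example, a hook script could be used for a single run like this, or
configured globally via the `script` option in `/etc/vzdump.conf` (the script
path is just an example):

 # vzdump 777 --script /usr/local/bin/vzdump-hook-script.pl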
564
565 File Exclusions
566 ---------------
567
NOTE: This option is only available for container backups.
569
570 `vzdump` skips the following files by default (disable with the option
571 `--stdexcludes 0`)
572
573 /tmp/?*
574 /var/tmp/?*
575 /var/run/?*pid
576
577 You can also manually specify (additional) exclude paths, for example:
578
579 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'
580
581 excludes the directory `/tmp/` and any file or directory named `/var/foo`,
582 `/var/foobar`, and so on.
583
584 Paths that do not start with a `/` are not anchored to the container's root,
585 but will match relative to any subdirectory. For example:
586
587 # vzdump 777 --exclude-path bar
588
589 excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
590 so on, but not `/bar2`.
591
592 Configuration files are also stored inside the backup archive
593 (in `./etc/vzdump/`) and will be correctly restored.
594
595 Examples
596 --------
597
598 Simply dump guest 777 - no snapshot, just archive the guest private area and
599 configuration files to the default dump directory (usually
600 `/var/lib/vz/dump/`).
601
602 # vzdump 777
603
604 Use rsync and suspend/resume to create a snapshot (minimal downtime).
605
606 # vzdump 777 --mode suspend
607
Back up all guest systems and send notification mails to root and admin.
609 Due to `mailto` being set and `notification-mode` being set to `auto` by
610 default, the notification mails are sent via the system's `sendmail`
611 command instead of the notification system.
612
613 # vzdump --all --mode suspend --mailto root --mailto admin
614
615 Use snapshot mode (no downtime) and non-default dump directory.
616
617 # vzdump 777 --dumpdir /mnt/backup --mode snapshot
618
Back up more than one guest (selectively)
620
621 # vzdump 101 102 103 --mailto root
622
Back up all guests excluding 101 and 102
624
625 # vzdump --mode suspend --exclude 101,102
626
627 Restore a container to a new CT 600
628
629 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar
630
631 Restore a QemuServer VM to VM 601
632
633 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601
634
635 Clone an existing container 101 to a new container 300 with a 4GB root
636 file system, using pipes
637
638 # vzdump 101 --stdout | pct restore --rootfs 4 300 -
639
640
641 ifdef::manvolnum[]
642 include::pve-copyright.adoc[]
643 endif::manvolnum[]