[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to use the `mode` option to fine-tune the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups, containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to the
xref:chapter_storage[storage documentation] on how to add a storage. It can
either be a Proxmox Backup Server storage, where backups are stored as
de-duplicated chunks and metadata, or a file-level storage, where backups are
stored as regular files. Using Proxmox Backup Server on a dedicated host is
recommended because of its advanced features. Using an NFS server is a good
alternative. In both cases, you might want to save those backups later to a
tape drive, for off-site archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically on specific
days and times, for selectable nodes and guest systems. See the
xref:vzdump_jobs[Backup Jobs] section for more.

Backup Modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime of the VM. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. Once the backup has started, the VM resumes full
operation if it was previously running. Consistency is guaranteed by
using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also note that, since the backups are done via a background
Qemu process, a stopped VM will appear as running for a short amount of
time while the VM disks are being read by Qemu. However, the VM itself
is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this results in a many-fold performance
improvement. A local `tmpdir` is also required if you want to back up a
local container using ACLs in suspend mode and the backup storage is an
NFS server.
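+
For example, to back up container 777 in suspend mode while keeping the
temporary copy on a fast local disk (the path is just an example):
+
 # vzdump 777 --mode suspend --tmpdir /mnt/fast_local_disk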

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual
volumes can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the {pve} storage library.

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel–Ziv–Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip,
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. However, lzo
and gzip are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

|===
|File extension |Compression algorithm

|`.zst` |Zstandard (zstd)
|`.gz` or `.tgz` |gzip
|`.lzo` |lzo
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups, see xref:storage_pbs_encryption[the corresponding
section].

[[vzdump_jobs]]
Backup Jobs
-----------

Besides triggering a backup manually, you can also set up periodic jobs that
back up all, or a selection of, virtual guests to a storage. You can manage the
jobs in the UI under 'Datacenter' -> 'Backup' or via the `/cluster/backup` API
endpoint. Both will generate job entries in `/etc/pve/jobs.cfg`, which are
parsed and executed by the `pvescheduler` daemon.

A job is either configured for all cluster nodes or a specific node, and is
executed according to a given schedule. The format for the schedule is very
similar to `systemd` calendar events, see the
xref:chapter_calendar_events[calendar events] section for details. The
'Schedule' field in the UI can be freely edited, and it contains several
examples that can be used as a starting point in its drop-down list.

You can configure job-specific xref:vzdump_retention[retention options],
overriding those from the storage or node configuration, as well as a
xref:vzdump_notes[template for notes] for additional information to be saved
together with the backup.

Since scheduled backups are not executed when the host was offline or the
pvescheduler was disabled during the scheduled time, it is possible to configure
how missed jobs should be caught up. By enabling the `Repeat missed` option
(`repeat-missed` in the config), you can tell the scheduler that it should run
missed jobs as soon as possible.
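
A job entry in `/etc/pve/jobs.cfg` might look like the following sketch; the
job ID and all values are examples:

----
vzdump: backup-lab-daily
        schedule mon..fri 02:30
        storage my_backup_storage
        mode snapshot
        all 1
        repeat-missed 1
----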

There are a few settings for tuning backup performance that are not exposed in
the UI. The most notable is `bwlimit` for limiting IO bandwidth. The number of
threads used for the compressor can be controlled with the `pigz` (replacing
`gzip`) and `zstd` settings respectively. Furthermore, there is `ionice`. See
the xref:vzdump_configuration[configuration options] for details.
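
For example, to cap IO bandwidth and let zstd use four threads, the following
lines could be set in `/etc/vzdump.conf`; the values are purely illustrative:

----
bwlimit: 10000
ionice: 5
zstd: 4
----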

[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not consider
already covered backups; it only considers older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.
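
Alternatively, the same retention options can be set on a storage with the
`pvesm` CLI; the storage name here is just an example:

 # pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13,keep-yearly=9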

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and the retention of old backups may depend on how often
data changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the interval between stored backups gradually
grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting keep-last
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant, as extra
manual backups are already covered by keep-last.

`keep-daily=13` - together with keep-last, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
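
Put together, the retention settings from this example form the following
option string:

 keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9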

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system; this means that manually
removing a backup file itself is still possible for anyone with write access to
the underlying backup storage.

NOTE: Protected backups are ignored by pruning and do not count towards the
retention settings.

For filesystem-based storages, the protection is implemented via a sentinel file
`<backup-name>.protected`. For Proxmox Backup Server, it is handled on the
server side (available since Proxmox Backup Server version 2.1).

Use the storage option `max-protected-backups` to control how many protected
backups per guest are allowed on the storage. Use `-1` for unlimited. The
default is unlimited for users with `Datastore.Allocate` privilege and `5` for
other users.
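
For example, to allow up to ten protected backups per guest (the storage name
is an example):

 # pvesm set my_backup_storage --max-protected-backups 10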

[[vzdump_notes]]
Backup Notes
------------

You can add notes to backups using the 'Edit Notes' button in the UI or via the
storage content API.

It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backups. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.

Currently supported are:

* `{{cluster}}` the cluster name, if any
* `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
* `{{vmid}}` the numerical VMID of the guest

When specified via API or CLI, it needs to be a single line, where newline and
backslash need to be escaped as literal `\n` and `\\` respectively.
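
For example, the following sets a two-line notes template on a manual backup;
the template text itself is just an example:

 # vzdump 777 --notes-template 'Guest: {{guestname}} ({{vmid}})\nNode: {{node}}'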

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. The limit is given in
KiB/s, which means passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the restore
does not impact their operations.
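
For example, to restore a VM backup while reading with at most 10 MiB/s
(archive path and VMID are examples):

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240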

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (This needs `Data.Allocate`
permissions on the storage.)

Most of the time, the generally available bandwidth of your storage stays the
same, so it is also possible to set a default bandwidth limit per configured
storage. This can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, during which the guest is still
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
argument of `qmrestore` causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.
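
For example, the following starts a live-restore from a Proxmox Backup Server
storage; the storage name, snapshot ID and VMID are examples:

 # qmrestore my_pbs_storage:backup/vm/101/2023-01-01T00:00:00Z 101 --live-restore 1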

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
data has to be loaded from the backup server (once loaded, it is immediately
available on the destination storage however, so accessing data twice only
incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
undefined state, that is, not all data might have been copied from the
backup, and it is _most likely_ not possible to keep any data that was written
during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers: once the OS
and necessary services have been started, the VM is operational, while the
background task continues copying seldom-used data.

Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used to
open a file browser directly on the data contained in a backup. This feature
is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer shows
contained drive images, which can be opened to reveal a list of supported
storage technologies found on the drive. In the most basic case, this will be an
entry called 'part', representing a partition table, which contains entries for
each partition found on the drive. Note that for VMs, not all data might be
accessible (unsupported guest file systems, storage technologies, etc.).

Files and directories can be downloaded using the 'Download' button; directories
are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters set
accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
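
A minimal hook script might look like the following sketch. It assumes the
calling convention used by the shipped example script: the phase name is passed
as the first argument, followed by mode and guest ID for the per-guest phases.

----
#!/bin/bash
# Minimal vzdump hook sketch: log every phase to syslog.
# $1 = phase (e.g. job-start, backup-start, backup-end, job-end)
# $2 = mode and $3 = vmid are only set for the per-guest phases.
phase="$1"
mode="$2"
vmid="$3"

logger -t vzdump-hook "phase=${phase} mode=${mode:--} vmid=${vmid:--}"
----

The script can also be set globally with the `script` option in
`/etc/vzdump.conf`.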

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 with no snapshot, just archiving the guest private area
and configuration files to the default dump directory (usually
`/var/lib/vz/dump/`):

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime):

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin:

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory:

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively):

 # vzdump 101 102 103 --mailto root

Back up all guests excluding 101 and 102:

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600:

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601:

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes:

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
