[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off between
backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to the
xref:chapter_storage[storage documentation] on how to add a storage. It can
either be a Proxmox Backup Server storage, where backups are stored as
de-duplicated chunks and metadata, or a file-level storage, where backups are
stored as regular files. Using Proxmox Backup Server on a dedicated host is
recommended, because of its advanced features. Using an NFS server is a good
alternative. In both cases, you might want to save those backups later to a tape
drive, for off-site archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Scheduled backups are configured at the Datacenter level in the GUI,
which generates a job entry in `/etc/pve/jobs.cfg`. This entry is in
turn parsed and executed by the `pvescheduler` daemon. These jobs use
the xref:chapter_calendar_events[calendar events] for defining the
schedule.

Since scheduled backups miss their execution if the host was offline or the
pvescheduler was disabled during the scheduled time, it is possible to configure
the behaviour for catching up. By enabling the `Repeat missed` option
(`repeat-missed` in the config), you can tell the scheduler that it should run
missed jobs as soon as possible.
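
The following is a rough sketch of what a generated entry in `/etc/pve/jobs.cfg`
can look like, using the usual {pve} section-config format. The job ID and all
values here are made up for illustration; the exact set of keys depends on the
job:

----
vzdump: backup-04a53f02-fd8a
    schedule mon..fri 02:00
    storage my_backup_storage
    mode snapshot
    all 1
    enabled 1
    repeat-missed 1
----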

Backup Modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup has started, the VM returns to
full operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Note also that, since the backups are done by a background
Qemu process, a stopped VM will appear as running for a short amount of
time while the VM disks are being read by Qemu. However, the VM itself
is not booted; only its disk(s) are read.
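
If you want `snapshot` mode backups to use the freeze/thaw cycle described
above, make sure the guest agent option is enabled for the VM and that the
agent is installed and running inside the guest. For example (using VMID 777,
as in the examples below):

----
# qm set 777 --agent 1
----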

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this results in a significant performance
improvement. A local `tmpdir` is also required if you want to back up,
in suspend mode, a local container that uses ACLs and the backup storage
is an NFS server (see the example after this list).

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed-up volumes are on a storage
that supports snapshots. Individual volumes can be excluded from the backup
(and thus this requirement) by using the `backup=no` mount point option.

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point
are not included in backups. For volume mount points, you can set the
*Backup* option to include the mount point in the backup. Device and bind
mounts are never backed up, as their content is managed outside the {pve}
storage library.
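
For example, to run a `suspend` mode container backup with the temporary copy
placed on a local file system, as recommended above (the path here is just an
illustration and must exist on the node):

----
# vzdump 777 --mode suspend --tmpdir /var/lib/vz/tmp
----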

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You
can limit the number of backups that are kept with various retention options;
see the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel–Ziv–Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
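
For example, to create a zstd-compressed backup of guest 777 on the command
line (the compression algorithm can likewise be set via the `compress` option
in `vzdump.conf` or in a backup job):

----
# vzdump 777 --compress zstd
----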

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

|===
|File extension |Compression algorithm

|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups; see xref:storage_pbs_encryption[the corresponding
section].

Backup Jobs
-----------

Besides triggering a backup manually, you can also set up periodic jobs that
back up all, or a selection of, virtual guests to a storage.

// TODO: extend, link to retention below, ... di & document perf max-worker settings

[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not take care
of already covered backups. It will only consider older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.
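
Alternatively, the same retention settings can be applied to a storage on the
command line; a sketch using `pvesm` with a made-up storage ID:

----
# pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13,keep-yearly=9
----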

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between stored backups gradually grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting `keep-last`
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant; extra
manual backups are already covered by `keep-last`.

`keep-daily=13` - together with `keep-last`, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long-term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
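
Put together, the settings from this example correspond to the following
retention specification:

----
keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9
----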

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system, which means that manually
removing a backup file itself is still possible for anyone with write access
to the underlying backup storage.

NOTE: Protected backups are ignored by pruning and do not count towards the
retention settings.

For file-system-based storages, the protection is implemented via a sentinel
file `<backup-name>.protected`. For Proxmox Backup Server, it is handled on
the server side (available since Proxmox Backup Server version 2.1).

Use the storage option `max-protected-backups` to control how many protected
backups per guest are allowed on the storage. Use `-1` for unlimited. The
default is unlimited for users with the `Datastore.Allocate` privilege and `5`
for other users.
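
As `max-protected-backups` is a storage option, it can be set with `pvesm`; a
sketch with made-up values:

----
# pvesm set my_backup_storage --max-protected-backups 10
----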

[[vzdump_notes]]
Backup Notes
------------

You can add notes to backups using the 'Edit Notes' button in the UI or via
the storage content API.

It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backups. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.

Currently supported are:

* `{{cluster}}` the cluster name, if any
* `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
* `{{vmid}}` the numerical VMID of the guest

When specified via API or CLI, the template needs to be a single line, where
newline and backslash need to be escaped as literal `\n` and `\\` respectively.
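
For example, a manual backup with a note template might look like the
following sketch (assuming the `notes-template` option is also available as a
`vzdump` CLI flag):

----
# vzdump 777 --notes-template "{{guestname}} on {{node}}, cluster {{cluster}}"
----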

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring an archive:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option of the restore CLI commands to
set up a restore-job-specific bandwidth limit. The limit is given in KiB/s,
which means that passing `10240` will limit the read speed of the backup
archive to 10 MiB/s, ensuring that the rest of the available storage bandwidth
remains usable for the already running virtual guests, so that the restore
does not impact their operations.
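
For example, to restore the VM archive from the Examples section below with a
read limit of 10 MiB/s:

----
# qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240
----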

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on the storage.)

Most of the time, the generally available bandwidth of your storage stays the
same, so we implemented the possibility to set a default bandwidth limit per
configured storage. This can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, in which a guest is still
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
argument of `qmrestore` causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.
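
A sketch of a live-restore on the command line; the Proxmox Backup Server
volume ID here is made up and has to be replaced with an existing backup (see
the caveats below before relying on this):

----
# qmrestore my_pbs_storage:backup/vm/777/2024-01-15T02:00:00Z 777 --live-restore 1
----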

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
data has to be loaded from the backup server (once loaded, it is immediately
available on the destination storage however, so accessing data twice only
incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
undefined state - that is, not all data might have been copied from the
backup, and it is _most likely_ not possible to keep any data that was written
during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers - once the
OS and necessary services have been started, the VM is operational, while the
background task continues copying seldom-used data.

Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used
to open a file browser directly on the data contained in a backup. This
feature is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer
shows contained drive images, which can be opened to reveal a list of
supported storage technologies found on the drive. In the most basic case,
this will be an entry called 'part', representing a partition table, which
contains entries for each partition found on the drive. Note that for VMs,
not all data might be accessible (unsupported guest file systems, storage
technologies, etc.).

Files and directories can be downloaded using the 'Download' button;
directories are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
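
As a rough sketch of the mechanics: the script is invoked with the current
phase as its first argument, with further details passed as additional
arguments and environment variables. The following minimal shell script
assumes the phase names and argument order used by the shipped Perl example;
refer to `vzdump-hook-script.pl` for the authoritative list:

----
#!/bin/sh
# Minimal vzdump hook sketch: log phase transitions.
# For guest-specific phases, the backup mode and VMID are
# assumed to be passed as the second and third argument.
phase="$1"; mode="$2"; vmid="$3"

case "$phase" in
    backup-start)
        echo "starting $mode backup of guest $vmid" ;;
    backup-end)
        echo "finished backup of guest $vmid" ;;
    backup-abort)
        echo "backup of guest $vmid aborted" ;;
esac

exit 0
----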

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding guests 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]