[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to use the `mode` option to fine-tune the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the Storage documentation on how to add a storage. A backup storage
must be a file level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can later save those backups to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI. This generates a job entry in `/etc/pve/jobs.cfg`, which is
then parsed and executed by the `pvescheduler` daemon. These jobs use
xref:chapter_calendar_events[calendar events] for defining the schedule.

Since scheduled backups miss their execution if the host is offline or the
pvescheduler is disabled at the scheduled time, it is possible to configure
the behaviour for catching up. By enabling the `Repeat missed` option
(`repeat-missed` in the config), you can tell the scheduler that it should
run missed jobs as soon as possible.
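
For reference, a generated job entry in `/etc/pve/jobs.cfg` may look similar
to the following sketch; the job identifier and the exact set of keys depend
on the configured job:

----
vzdump: backup-1a2b3c4d-5e6f
	schedule sat 02:00
	all 1
	enabled 1
	mode snapshot
	storage my_backup_storage
	repeat-missed 1
----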

Backup Modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM resumes full
operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.
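
For example, assuming a VM with VMID 777, the agent option can be enabled via
the CLI with:

 # qm set 777 --agent 1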

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this results in a considerable performance
improvement. Use of a local `tmpdir` is also required to back up a local
container using ACLs in suspend mode when the backup storage is an NFS
server.
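+
For example, a suspend mode backup using a temporary directory on a fast
local disk could look like this (the path is illustrative):
+
 # vzdump 777 --mode suspend --tmpdir /mnt/fast_local_disk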

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage
that supports snapshots. With the `backup=no` mount point option, individual
volumes can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point
are not included in backups. For volume mount points you can set the *Backup*
option to include the mount point in the backup. Device and bind mounts are
never backed up as their content is managed outside the {pve} storage library.
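
On the CLI, the *Backup* option corresponds to the `backup` flag of a mount
point definition and could, for example, be enabled like this (the volume
name and mount path are illustrative):

 # pct set 777 -mp0 local-lvm:vm-777-disk-1,mp=/mnt/data,backup=1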

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You
can limit the number of backups that are kept with various retention options,
see the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel-Ziv-Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
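
For example, the algorithm for a single backup run can be selected with the
`compress` option of `vzdump`:

 # vzdump 777 --compress zstd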

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

[options="header"]
|===
|File extension |Compression algorithm
|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups, see xref:storage_pbs_encryption[the corresponding
section].

Backup Jobs
-----------

Besides triggering a backup manually, you can also set up periodic jobs that
back up all, or a selection of, virtual guests to a storage.

// TODO: extend, link to retention below, ... di & document perf max-worker settings

[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not take care
of already covered backups. It will only consider older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.
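
Alternatively, the storage-level setting can be configured on the CLI with
`pvesm`; for example (the storage name is illustrative):

 # pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13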

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes, and how important an older state may be, in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between stored backups gradually grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting keep-last
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant. Extra
manual backups are already covered by keep-last.

`keep-daily=13` - together with keep-last, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
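
Put together, this example corresponds to the following retention settings:

 keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9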

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system; this means that a manual
removal of a backup file itself is still possible for anyone with write access
to the underlying backup storage.

NOTE: Protected backups are ignored by pruning and do not count towards the
retention settings.

For filesystem-based storages, the protection is implemented via a sentinel
file `<backup-name>.protected`. For Proxmox Backup Server, it is handled on
the server side (available since Proxmox Backup Server version 2.1).

Use the storage option `max-protected-backups` to control how many protected
backups per guest are allowed on the storage. Use `-1` for unlimited. The
default is unlimited for users with `Datastore.Allocate` privilege and `5` for
other users.
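
The option can be set via the web interface or, for example, on the CLI (the
storage name is illustrative):

 # pvesm set my_backup_storage --max-protected-backups 10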

[[vzdump_notes]]
Backup Notes
------------

You can add notes to backups using the 'Edit Notes' button in the UI or via
the storage content API.

It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backups. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.

Currently supported are:

* `{{cluster}}` the cluster name, if any
* `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
* `{{vmid}}` the numerical VMID of the guest

When specified via API or CLI, the template needs to be a single line, where
newline and backslash need to be escaped as literal `\n` and `\\` respectively.
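
A sketch of a manual backup using such a template on the CLI, assuming the
`notes-template` option of `vzdump`:

 # vzdump 777 --notes-template '{{guestname}} ({{vmid}}) on {{node}}'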

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests as access
to storage can get congested.

To avoid this, you can set bandwidth limits for restore jobs. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Datastore.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. The limit uses KiB/s as
unit, which means that passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the restore
does not impact their operations.
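
For example, to limit the read speed of a single VM restore to 10 MiB/s:

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240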

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Datastore.Allocate`
permissions on the storage.)

In most cases, your storage's generally available bandwidth stays the same
over time. That is why we implemented the possibility to set a default
bandwidth limit per configured storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, during which the guest is
still unavailable. For VM backups stored on a Proxmox Backup Server, this
wait time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the
`--live-restore` argument of `qmrestore` causes the VM to start as soon as
the restore begins. Data is copied in the background, prioritizing chunks
that the VM is actively accessing.

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
data has to be loaded from the backup server (once loaded, it is immediately
available on the destination storage however, so accessing data twice only
incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
undefined state - that is, not all data might have been copied from the
backup, and it is _most likely_ not possible to keep any data that was written
during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers. Once the
OS and necessary services have been started, the VM is operational, while the
background task continues copying seldom-used data.
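
A sketch of a live-restore invocation on the CLI; the backup volume ID is a
placeholder for an actual backup on a Proxmox Backup Server storage:

 # qmrestore my-pbs:backup/vm/888/2024-01-01T00:00:00Z 601 --live-restore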

Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used
to open a file browser directly on the data contained in a backup. This
feature is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer
shows contained drive images, which can be opened to reveal a list of
supported storage technologies found on the drive. In the most basic case,
this will be an entry called 'part', representing a partition table, which
contains entries for each partition found on the drive. Note that for VMs,
not all data might be accessible (unsupported guest file systems, storage
technologies, etc.).

Files and directories can be downloaded using the 'Download' button;
directories are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overridden on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
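
For example, a backup run with a custom hook script could look like this (the
script path is illustrative):

 # vzdump 777 --script /usr/local/bin/vzdump-hook-script.pl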

File Exclusions
---------------

NOTE: this option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
