[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to the
xref:chapter_storage[storage documentation] on how to add a storage. It can
either be a Proxmox Backup Server storage, where backups are stored as
de-duplicated chunks and metadata, or a file-level storage, where backups are
stored as regular files. Using Proxmox Backup Server on a dedicated host is
recommended, because of its advanced features. Using an NFS server is a good
alternative. In both cases, you might want to save those backups later to a tape
drive, for off-site archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically on specific
days and times, for selectable nodes and guest systems. See the
xref:vzdump_jobs[Backup Jobs] section for more.

Backup Modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM resumes full
operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted, only its disk(s) are read.

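The mode can be selected per invocation. A minimal sketch, assuming a
configured backup storage named `my_backup_storage`:

 # vzdump 101 --mode stop --storage my_backup_storage

If `--mode` is not given, vzdump defaults to `snapshot`.
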
.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a many-fold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode if the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the {pve} storage library.

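For example, the *Backup* option for a volume mount point can also be set via
the CLI; a sketch with placeholder volume name and path:

 # pct set 101 -mp0 local-lvm:vm-101-disk-1,mp=/mnt/data,backup=1
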
Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel–Ziv–Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.

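For example, to select zstd for a single run, and to let it use half of the
available cores by default (a sketch; `zstd: 0` selects half of the available
cores):

 # vzdump 777 --compress zstd

.In `/etc/vzdump.conf`
----
zstd: 0
----
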
The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

[options="header"]
|===
|File extension |Algorithm
|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups, see xref:storage_pbs_encryption[the corresponding section].

[[vzdump_jobs]]
Backup Jobs
-----------

Besides triggering a backup manually, you can also set up periodic jobs that
back up all, or a selection of, virtual guests to a storage. You can manage the
jobs in the UI under 'Datacenter' -> 'Backup' or via the `/cluster/backup` API
endpoint. Both will generate job entries in `/etc/pve/jobs.cfg`, which are
parsed and executed by the `pvescheduler` daemon.

A job is either configured for all cluster nodes or a specific node, and is
executed according to a given schedule. The format for the schedule is very
similar to `systemd` calendar events, see the
xref:chapter_calendar_events[calendar events] section for details. The
'Schedule' field in the UI can be freely edited, and it contains several
examples that can be used as a starting point in its drop-down list.

You can configure job-specific xref:vzdump_retention[retention options]
overriding those from the storage or node configuration, as well as a
xref:vzdump_notes[template for notes] for additional information to be saved
together with the backup.

Since scheduled backups miss their execution when the host was offline or the
pvescheduler was disabled during the scheduled time, it is possible to configure
the behaviour for catching up. By enabling the `Repeat missed` option
(`repeat-missed` in the config), you can tell the scheduler that it should run
missed jobs as soon as possible.

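A sketch of creating such a job via the API, backing up two guests every
Saturday night to a placeholder storage:

----
# pvesh create /cluster/backup --schedule 'sat 02:00' \
    --storage my_backup_storage --vmid 101,102 --repeat-missed 1
----
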
There are a few settings for tuning backup performance that are not exposed in
the UI. The most notable is `bwlimit` for limiting IO bandwidth. The number of
threads used for the compressor can be controlled with the `pigz` (when
replacing `gzip`) and `zstd` settings. Furthermore, there are `ionice` and, as
part of the `performance` setting, `max-workers` (affects VM backups only). See
the xref:vzdump_configuration[configuration options] for details.

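A minimal sketch of such tuning in `/etc/vzdump.conf`; the values are
illustrative, not recommendations:

----
bwlimit: 100000
ionice: 7
zstd: 4
performance: max-workers=8
----
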
[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not take care
of already covered backups. It will only consider older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.

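The same settings can also be applied to a storage on the CLI; a sketch with a
placeholder storage name:

 # pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13,keep-yearly=9
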
NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between stored backups gradually grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
 create an extra one just before or after a big upgrade. Setting keep-last
 ensures this.

`keep-hourly` is not set - for daily backups this is not relevant. You cover
 extra manual backups already, with keep-last.

`keep-daily=13` - together with keep-last, which covers at least one
 day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
 weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
 ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long term archive. As you covered the
 current year with the previous options, you would set this to nine for the
 remaining ones, giving you a total of at least 10 years of coverage.

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

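Putting the above together as a single retention setting, again with a
placeholder storage name:

 # pvesm set my_backup_storage --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9
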
[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI, CLI or API will fail. However, this
is enforced by {pve} and not by the file system; this means that manually
removing a backup file itself is still possible for anyone with write access
to the underlying backup storage.

NOTE: Protected backups are ignored by pruning and do not count towards the
retention settings.

For filesystem-based storages, the protection is implemented via a sentinel file
`<backup-name>.protected`. For Proxmox Backup Server, it is handled on the
server side (available since Proxmox Backup Server version 2.1).

Use the storage option `max-protected-backups` to control how many protected
backups per guest are allowed on the storage. Use `-1` for unlimited. The
default is unlimited for users with `Datastore.Allocate` privilege and `5` for
other users.

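As a sketch, protection can also be toggled via the API, and the limit raised
per storage (node, storage and volume ID are placeholders):

----
# pvesh set /nodes/<node>/storage/<storage>/content/<volume-id> --protected 1
# pvesm set my_backup_storage --max-protected-backups 10
----
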
[[vzdump_notes]]
Backup Notes
------------

You can add notes to backups using the 'Edit Notes' button in the UI or via the
storage content API.

It is also possible to specify a template for generating notes dynamically for
a backup job and for manual backup. The template string can contain variables,
surrounded by two curly braces, which will be replaced by the corresponding
value when the backup is executed.

Currently supported are:

* `{{cluster}}` the cluster name, if any
* `{{guestname}}` the virtual guest's assigned name
* `{{node}}` the host name of the node on which the backup is being created
* `{{vmid}}` the numerical VMID of the guest

When specified via API or CLI, it needs to be a single line, where newline and
backslash need to be escaped as literal `\n` and `\\` respectively.

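For example, a sketch of a two-line note template passed on the CLI:

 # vzdump 777 --notes-template 'Backup of {{guestname}} ({{vmid}})\non node {{node}}'
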
[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:

`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a backup job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
 reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
 writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. KiB/s is used as the unit
for the limit, which means passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, so that the restore
does not impact their operations.

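For example, a sketch restoring a VM archive with a 10 MiB/s read limit:

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240
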
NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on the storage.)

In most cases, your storage's generally available bandwidth stays the same
over time, which is why we implemented the possibility to set a default
bandwidth limit per configured storage. This can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, during which the guest remains
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
argument of `qmrestore` causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
 data has to be loaded from the backup server (once loaded, it is immediately
 available on the destination storage however, so accessing data twice only
 incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
 undefined state - that is, not all data might have been copied from the
 backup, and it is _most likely_ not possible to keep any data that was written
 during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers - once the OS
and necessary services have been started, the VM is operational, while the
background task continues copying seldom used data.

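A sketch of a live-restore from a Proxmox Backup Server storage; storage names
and the volume ID are placeholders:

 # qmrestore my-pbs:backup/vm/888/2024-05-01T02:00:00Z 601 --live-restore 1 --storage local-lvm
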
Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used to
open a file browser directly on the data contained in a backup. This feature
is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer shows
contained drive images, which can be opened to reveal a list of supported
storage technologies found on the drive. In the most basic case, this will be an
entry called 'part', representing a partition table, which contains entries for
each partition found on the drive. Note that for VMs, not all data might be
accessible (unsupported guest file systems, storage technologies, etc.).

Files and directories can be downloaded using the 'Download' button; directories
are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).

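As a minimal sketch, a hook script receives the current phase as its first
argument and further details via environment variables; the phase name and
variables used here follow the shipped example script:

----
#!/bin/bash
# called by vzdump with the current phase as the first argument
phase="$1"

if [ "$phase" = "backup-end" ]; then
    # $VMID and $TARGET are set by vzdump for this phase
    echo "backup of guest $VMID written to $TARGET"
fi

exit 0
----

It can be enabled globally via `script: /path/to/hook.sh` in
`/etc/vzdump.conf`, or per invocation with `--script`.
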
File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]