+[[chapter_vzdump]]
ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+vzdump(1)
+=========
+:pve-toplevel:
NAME
----
vzdump - Backup Utility for VMs and Containers
-SYNOPSYS
+SYNOPSIS
--------
include::vzdump.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
-
ifndef::manvolnum[]
Backup and Restore
==================
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]
-Backups are a requirements for any sensible IT deployment, and {pve}
+Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune via the `mode` option between consistency
`stop` mode::
This mode provides the highest consistency of the backup, at the cost
-of a downtime in the VM operation. It works by executing an orderly
-shutdown of the VM, and then runs a background Qemu process to backup
-the VM data. After the backup is complete, the Qemu process resumes
-the VM to full operation mode if it was previously running.
+of a short downtime in the VM operation. It works by executing an
+orderly shutdown of the VM, and then runs a background Qemu process to
+backup the VM data. After the backup is started, the VM goes to full
+operation mode if it was previously running. Consistency is guaranteed
+by using the live backup feature.
`suspend` mode::
small inconsistency risk. It works by performing a Proxmox VE live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
-'guest-fsfreeze-freeze' and 'guest-fsfreeze-thaw' to improve
+`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.
A technical overview of the Proxmox VE live backup for QemuServer can
be found online
-https://git.proxmox.com/?p=pve-qemu-kvm.git;a=blob;f=backup.txt[here].
+https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].
NOTE: Proxmox VE live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
-snapshots.
+snapshots. Also please note that since the backups are done via
+a background Qemu process, a stopped VM will appear as running for a
+short amount of time while the VM disks are being read by Qemu.
+However, the VM itself is not booted; only its disk(s) are read.
.Backup modes for Containers:
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
-When the container is on a local filesystem and the target storage of
-the backup is an NFS server, you should set `--tmpdir` to reside on a
-local filesystem too, as this will result in a many fold performance
+When the container is on a local file system and the target storage of
+the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
+local file system too, as this will result in a many-fold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode when the backup
storage is an NFS server.
snapshot is deleted again.
NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
-supports snapshots. Using the `backup=no` mountpoint option individual volumes
+supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus this requirement).
-NOTE: bind and device mountpoints are skipped during backup operations, like
-volume mountpoints with the backup option disabled.
-
+// see PVE::VZDump::LXC::prepare()
+NOTE: By default, additional mount points besides the Root Disk mount point are
+not included in backups. For volume mount points you can set the *Backup* option
+to include the mount point in the backup. Device and bind mounts are never
+backed up as their content is managed outside the {pve} storage library.
Backup File Names
-----------------
directory. The parameter `maxfiles` can be used to specify the
maximum number of backups to keep.
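For example, the retention limit could be set per backup job on the command line (the guest ID below is illustrative):

```shell
# keep at most 3 backup archives for guest 777; once the limit is
# exceeded, the oldest archive is removed
vzdump 777 --maxfiles 3
```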
+[[vzdump_restore]]
Restore
-------
-The resulting archive files can be restored with the following programs.
+A backup archive can be restored through the {pve} web GUI or through the
+following CLI tools:
`pct restore`:: Container restore utility
-`qmrestore`:: QemuServer restore utility
+`qmrestore`:: Virtual Machine restore utility
For details see the corresponding manual pages.
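For instance, restoring from the default dump directory could look like this (archive names, guest IDs, and the storage name are illustrative):

```shell
# restore a container archive as CT 777
pct restore 777 /var/lib/vz/dump/vzdump-lxc-777-2019_01_01-00_00_00.tar.lzo

# restore a VM archive as VM 100, placing its disks on storage local-lvm
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2019_01_01-00_00_00.vma.lzo 100 --storage local-lvm
```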
+Bandwidth Limit
+~~~~~~~~~~~~~~~
+
+Restoring one or more big backups may need a lot of resources, especially
+storage bandwidth for both reading from the backup storage and writing to
+the target storage. This can negatively affect other virtual guests, as
+access to the storage can become congested.
+
+To avoid this you can set bandwidth limits for a backup job. {pve}
+implements two kinds of limits for restoring an archive:
+
+* per-restore limit: denotes the maximal amount of bandwidth for
+ reading from a backup archive
+
+* per-storage write limit: denotes the maximal amount of bandwidth used for
+ writing to a specific storage
+
+The read limit indirectly affects the write limit, as we cannot write more
+than we read. A smaller per-job limit will override a bigger per-storage
+limit. A bigger per-job limit will only override the per-storage limit if
+you have `Data.Allocate` permissions on the affected storage.
+
+You can use the `--bwlimit <integer>` option of the restore CLI commands
+to set a restore-job-specific bandwidth limit. The limit is given in
+KiB/s, which means that passing `10240` will limit the read speed of the
+backup archive to 10 MiB/s, ensuring that the rest of the available
+storage bandwidth remains free for the already running virtual guests,
+so the restore does not impact their operation.
+
+NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
+a specific restore job. This can be helpful if you need to restore a very
+important virtual guest as fast as possible. (Needs `Data.Allocate`
+permissions on the storage.)
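Put together, a bandwidth-limited restore could look like this (archive names and guest IDs are illustrative):

```shell
# restore VM 100, limiting reads from the archive to 10 MiB/s (10240 KiB/s)
qmrestore /var/lib/vz/dump/vzdump-qemu-100-2019_01_01-00_00_00.vma.lzo 100 --bwlimit 10240

# restore CT 777 as fast as possible (requires Data.Allocate on the storage)
pct restore 777 /var/lib/vz/dump/vzdump-lxc-777-2019_01_01-00_00_00.tar.lzo --bwlimit 0
```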
+
+Most of the time, the generally available bandwidth of a storage stays
+the same, so we implemented the possibility to set a default bandwidth
+limit per configured storage. This can be done with:
+
+----
+# pvesm set STORAGEID --bwlimit KIBs
+----
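For example, to set a default limit of 40 MiB/s (40960 KiB/s) on a hypothetical storage named `local-lvm`:

```shell
pvesm set local-lvm --bwlimit 40960
```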
+
+
Configuration
-------------
-Global configuration is stored in '/etc/vzdump.conf'. The file uses a
+Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:
OPTION: value
-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as default, and can be overwritten on the command
line.
include::vzdump.conf.5-opts.adoc[]
-.Example 'vzdump.conf' Configuration
+.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
-directory ('vzdump-hook-script.pl').
+directory (`vzdump-hook-script.pl`).
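As a rough illustration of the interface, a minimal hook script could react to the phase name, which vzdump passes as the first argument; the exact arguments and environment variables for each phase are documented in the example script mentioned above. The sketch below assumes the per-guest phases additionally receive the mode and VMID:

```shell
#!/bin/sh
# Minimal vzdump hook sketch (assumption: phase is $1, followed by
# mode and VMID for the per-guest backup phases).
hook() {
    phase="$1"; mode="$2"; vmid="$3"
    case "$phase" in
        backup-start) echo "starting backup of guest $vmid (mode: $mode)" ;;
        backup-end)   echo "finished backup of guest $vmid" ;;
        *)            : ;;  # ignore all other phases (job-start, log-end, ...)
    esac
}
hook "$@"
```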
File Exclusions
---------------
NOTE: this option is only available for container backups.
-'vzdump' skips the following files by default (disable with the option
+`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`)
/tmp/?*
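Additional exclusions can be added per job with `--exclude-path` (the patterns and guest ID below are illustrative):

```shell
# also skip log files and a custom cache directory inside CT 777
vzdump 777 --exclude-path /var/log/?* --exclude-path /custom/cache/?*
```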
Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
-'/var/lib/vz/dump/').
+`/var/lib/vz/dump/`).
# vzdump 777