X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=vzdump.adoc;h=fb1ac3d0c3c9853a51aa0ce9ffb9e0ebed3b01a2;hp=231b49b8d08eadd61aecbf023335d44fcded92c1;hb=dd1aa0e01624f5927fb65143c9a070672ccbeb92;hpb=956afd0a818ab406143cfaf236143365ecb76e2c
diff --git a/vzdump.adoc b/vzdump.adoc
index 231b49b..fb1ac3d 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -1,7 +1,8 @@
+[[chapter_vzdump]]
ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+vzdump(1)
+=========
+:pve-toplevel:

NAME
----
@@ -9,7 +10,7 @@ NAME

vzdump - Backup Utility for VMs and Containers

-SYNOPSYS
+SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]

@@ -18,79 +19,126 @@ include::vzdump.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
-
ifndef::manvolnum[]
Backup and Restore
==================
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]

-'vzdump' is a utility to make consistent backups of running guest
-systems. It basically creates an archive of the guest private area,
-which also includes the guest configuration files. 'vzdump' currently
-supports LXC containers and QemuServer VMs. There are several ways to
-provide consistency (option `mode`), depending on the guest type.
+Backups are a requirement for any sensible IT deployment, and {pve}
+provides a fully integrated solution, using the capabilities of each
+storage and each guest system type. This allows the system
+administrator to fine-tune, via the `mode` option, the trade-off
+between backup consistency and downtime of the guest system.
+
+{pve} backups are always full backups - containing the VM/CT
+configuration and all data. Backups can be started via the GUI or via
+the `vzdump` command line tool.
+
+.Backup Storage
+
+Before a backup can run, a backup storage must be defined. Refer to
+the Storage documentation on how to add a storage. A backup storage
+must be a file-level storage, as backups are stored as regular files.
+In most situations, using an NFS server is a good way to store backups.
+You can save those backups later to a tape drive, for off-site
+archiving.

-.Backup `mode` for VMs:
+.Scheduled Backup
+
+Backup jobs can be scheduled so that they are executed automatically
+on specific days and times, for selectable nodes and guest systems.
+Configuration of scheduled backups is done at the Datacenter level in
+the GUI, which will generate a cron entry in `/etc/cron.d/vzdump`. A
+sketch of such an entry is shown below.
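+
+The following is only an illustrative sketch of such a generated entry; the
+actual line is written by the GUI and the exact options depend on the job
+definition (the schedule, VMID, mail address and storage name below are
+placeholders):
+
+ 0 2 * * 6 root vzdump 777 --mode snapshot --mailto root --storage my_backup_storage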
+
+Backup modes
+------------
+
+There are several ways to provide consistency (option `mode`),
+depending on the guest type.
+
+.Backup modes for VMs:

`stop` mode::

-This first performns a clean shutdown of the VM to make sure it is
-stopped. It then starts the VM in suspended mode and uses the qemu
-backup feature to dump all data. If the VM was running, we start
-(resume) it immediately after starting the qemu backup task. This
-keeps the downtime as low as possible.
+This mode provides the highest consistency of the backup, at the cost
+of a short downtime in the VM operation. It works by executing an
+orderly shutdown of the VM, and then runs a background Qemu process to
+back up the VM data. After the backup has started, the VM returns to
+full operation if it was previously running. Consistency is guaranteed
+by using the live backup feature.

`suspend` mode::

-This mode does not really make sense for qemu. Please use snapshot
-mode instead.
+This mode is provided for compatibility reasons, and suspends the VM
+before calling the `snapshot` mode. Since suspending the VM results in
+a longer downtime and does not necessarily improve the data
+consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

-This mode simply starts a qemu live backup task. If the guest agent
-is enabled (`agent: 1`) and running, it calls 'guest-fsfreeze-freeze'
-and 'guest-fsfreeze-thaw' to improve consistency.
+This mode provides the lowest operation downtime, at the cost of a
+small inconsistency risk. It works by performing a Proxmox VE live
+backup, in which data blocks are copied while the VM is running. If the
+guest agent is enabled (`agent: 1`) and running, it calls
+`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
+consistency.

A technical overview of the Proxmox VE live backup for QemuServer can
be found online
-https://git.proxmox.com/?p=pve-qemu-kvm.git;a=blob;f=backup.txt[here].
-
-NOTE: Qemu backup provides snapshots on any storage type. It does
-not require that the underlying storage supports snapshots.
+https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

+NOTE: Proxmox VE live backup provides snapshot-like semantics on any
+storage type. It does not require that the underlying storage supports
+snapshots. Also note that, since the backups are done via a background
+Qemu process, a stopped VM will appear as running for a short amount
+of time while the VM disks are being read by Qemu. However, the VM
+itself is not booted; only its disk(s) are read.

-.Backup `mode` for Containers:
+.Backup modes for Containers:

`stop` mode::

-Stop the guest during backup. This results in a very long downtime.
+Stop the container for the duration of the backup. This potentially
+results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
-location (see option `--tmpdir`). Then the container is suspended and a second
-rsync copies changed files. After that, the container is started (resumed)
-again. This results in minimal downtime, but needs additional space
-to hold the container copy.
+location (see option `--tmpdir`). Then the container is suspended and
+a second rsync copies changed files. After that, the container is
+started (resumed) again. This results in minimal downtime, but needs
+additional space to hold the container copy.
+
-When the container is on a local filesystem and the target storage of the backup
-is an NFS server, you should set `--tmpdir` to reside on a local filesystem too,
-as this will result in a many fold performance improvement.
-Use of a local `tmpdir` is also required if you want to backup in `suspend`
-mode a local container using ACLs to an NFS server.
+When the container is on a local file system and the target storage of
+the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
+local file system too, as this will result in a significant performance
+improvement (see the sketch at the end of this section). Use of a local
+`tmpdir` is also required if you want to back up, in suspend mode, a
+local container that uses ACLs when the backup storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
-storage. A snapshot will be made of the container volume, and the
-snapshot content will be archived in a tar file.
+storage. First, the container will be suspended to ensure data consistency.
+A temporary snapshot of the container's volumes will be made and the
+snapshot content will be archived in a tar file. Finally, the temporary
+snapshot is deleted again.
+
+NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
+supports snapshots. Using the `backup=no` mount point option, individual
+volumes can be excluded from the backup (and thus this requirement).
+
+// see PVE::VZDump::LXC::prepare()
+NOTE: By default, additional mount points besides the Root Disk mount point
+are not included in backups. For volume mount points you can set the *Backup*
+option to include the mount point in the backup. Device and bind mounts are
+never backed up as their content is managed outside the {pve} storage library.
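+
+The following is only an illustrative sketch of the `--tmpdir` recommendation
+above, backing up a local container in `suspend` mode to an NFS backup storage
+(the storage name `nfs_backup` and the temporary directory are placeholders):
+
+ # vzdump 777 --mode suspend --storage nfs_backup --tmpdir /var/tmp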

Backup File Names
-----------------

-Newer versions of vzdump encode the virtual machine type and the
+Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

@@ -99,28 +147,73 @@ That way it is possible to store several backup in the same
directory. The parameter `maxfiles` can be used to specify the
maximum number of backups to keep.

+[[vzdump_restore]]
Restore
-------

-The resulting archive files can be restored with the following programs.
+A backup archive can be restored through the {pve} web GUI or through the
+following CLI tools:

`pct restore`:: Container restore utility

-`qmrestore`:: QemuServer restore utility
+`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

+Bandwidth Limit
+~~~~~~~~~~~~~~~
+
+Restoring one or more big backups may need a lot of resources, especially
+storage bandwidth for both reading from the backup storage and writing to
+the target storage. This can negatively affect other virtual guests, as
+access to storage can become congested.
+
+To avoid this, you can set bandwidth limits for a restore job. {pve}
+implements two kinds of limits for restoring an archive:
+
+* per-restore limit: denotes the maximal amount of bandwidth for
+  reading from a backup archive
+
+* per-storage write limit: denotes the maximal amount of bandwidth used for
+  writing to a specific storage
+
+The read limit indirectly affects the write limit, as we cannot write more
+than we read. A smaller per-job limit will override a bigger per-storage
+limit. A bigger per-job limit will only override the per-storage limit if
+you have `Data.Allocate` permissions on the affected storage.
+
+You can use the `--bwlimit` option of the restore CLI commands to set up a
+bandwidth limit for a specific restore job. The limit is specified in KiB/s,
+which means that passing `10240` will limit the read speed of the backup to
+10 MiB/s, ensuring that the rest of the possible storage bandwidth stays
+available for the already running virtual guests, and thus the restore does
+not impact their operation.
+
+NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
+a specific restore job. This can be helpful if you need to restore a very
+important virtual guest as fast as possible. (This requires `Data.Allocate`
+permissions on the storage.)
+
+Most of the time, the generally available bandwidth of a storage stays the
+same, so it is also possible to set a default bandwidth limit per configured
+storage. This can be done with:
+
+----
+# pvesm set STORAGEID --bwlimit KIBs
+----
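+
+As an illustrative sketch only (the archive path, the VMID and the storage
+name `local` are placeholders, not defaults), a job specific limit and a
+storage default limit could be set like this:
+
+----
+# pct restore 777 /var/lib/vz/dump/vzdump-lxc-777-2009_10_09-11_04_43.tar --bwlimit 10240
+# pvesm set local --bwlimit 20480
+----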
+
+
Configuration
-------------

-Global configuration is stored in '/etc/vzdump.conf'. The file uses a
+Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon separated key/value format. Each line has the following
format:

 OPTION: value

-Blank lines in the file are ignored, and lines starting with a '#'
+Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as default, and can be overwritten on the command
line.
@@ -130,7 +223,7 @@ We currently support the following options:

include::vzdump.conf.5-opts.adoc[]

-.Example 'vzdump.conf' Configuration
+.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
@@ -144,34 +237,35 @@ Hook Scripts

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
-directory ('vzdump-hook-script.pl').
+directory (`vzdump-hook-script.pl`).

File Exclusions
---------------

-First, this option is only available for container backups. 'vzdump'
-skips the following files with option `--stdexcludes`
+NOTE: This option is only available for container backups.
+
+`vzdump` skips the following files by default (disable with the option
+`--stdexcludes 0`):

 /var/log/?*
 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

-Or you can manually specify exclude paths, for example:
+You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(only excludes tmp directories)

Configuration files are also stored inside the backup archive
-(`/etc/vzdump/`), and will be correctly restored.
+(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private
area and configuration files to the default dump directory (usually
-'/var/liv//vz/dump/').
+`/var/lib/vz/dump/`).

 # vzdump 777

@@ -183,7 +277,7 @@ Backup all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

-Use snapshot mode (no downtime).
+Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot
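+
+Use a hook script and an explicitly selected backup storage. This is only an
+illustrative sketch: the storage name and the script path are placeholders,
+not defaults shipped with {pve}.
+
+ # vzdump 777 --storage my_backup_storage --script /usr/local/bin/vzdump-hook-script.pl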