[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the Storage documentation on how to add a storage. A backup storage
must be a file-level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can save those backups later to a tape drive, for off-site
archiving.
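
For example, a hypothetical NFS backup storage could be added with
`pvesm` (the storage ID, server address and export path below are just
placeholders):

----
# pvesm add nfs nfs-backup --server 192.168.0.10 \
    --export /export/backup --content backup
----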

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which will generate a cron entry in `/etc/cron.d/vzdump`.
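
Such an entry follows the usual cron syntax; a hypothetical job running
every Saturday at 02:00 might look like this (the exact command line
depends on the configured job settings):

----
0 2 * * 6           root vzdump 101 102 --quiet 1 --mode snapshot --mailto root
----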

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM returns to
full operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.
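+
For example, the guest agent option can be enabled for a VM via the
`qm` tool (the agent must also be installed and running inside the
guest):
+
----
# qm set 777 --agent 1
----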

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also note that, since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a many-fold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode when the backup
storage is an NFS server.
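+
For example, a suspend mode backup using a fast local directory as
temporary storage (the path is only an illustration):
+
----
# vzdump 777 --mode suspend --tmpdir /var/lib/vz/tmp
----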

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up, as their content is managed outside the {pve} storage library.
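
For example, a volume mount point line in a container configuration
might look like this (a sketch; the volume name is a placeholder, and
`backup=0` excludes the mount point from backups):

----
mp0: local-lvm:vm-101-disk-1,mp=/mnt/data,backup=0
----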

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example:

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same
directory. The parameter `maxfiles` can be used to specify the
maximum number of backups to keep.
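
For example, to keep at most three backups per guest on the target
storage (the storage ID is taken from the example configuration below):

----
# vzdump 777 --maxfiles 3 --storage my_backup_storage
----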

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel-Ziv-Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.
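
The algorithm is selected with the `compress` option, for example:

----
# vzdump 777 --compress zstd
----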

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
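
For example, thread counts for both could be set in `/etc/vzdump.conf`
(a sketch using the `pigz` and `zstd` options described below):

----
# use 4 threads for zstd compression
zstd: 4
# replace gzip with pigz, using 4 threads
pigz: 4
----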

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

[options="header"]
|===
|File extension |Compression algorithm
|`.zst` |Zstandard (zstd) compression
|`.gz` or `.tgz` |gzip compression
|`.lzo` |lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.


[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:

`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. KiB/s is used as the unit
for the limit; this means passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the restore
does not impact their operations.
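
For example, to restore a container archive with a 10 MiB/s read limit:

----
# pct restore 600 /mnt/backup/vzdump-lxc-777.tar --bwlimit 10240
----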

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on storage)

Most times, the generally available bandwidth of a storage stays the same
over time, so {pve} lets you set a default bandwidth limit per configured
storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----
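
For example, to limit restore reads from a hypothetical storage
`nfs-backup` to 50 MiB/s:

----
# pvesm set nfs-backup --bwlimit restore=51200
----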

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overridden on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with the parameters
set accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
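
For example, a hook could be attached to a single backup run like this
(the script path is just a placeholder):

----
# vzdump 777 --script /usr/local/bin/backup-hook.pl
----

Alternatively, the `script` option can be set in `/etc/vzdump.conf` to
apply the hook to all backup jobs.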

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(only excludes tmp directories)

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]