[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage
Before a backup can run, a backup storage must be defined. Refer to
the storage documentation on how to add a storage. A backup storage
must be a file-level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can save those backups later to a tape drive, for off-site
archiving.

.Scheduled Backup
Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which generates a cron entry in `/etc/cron.d/vzdump`.
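
Such a generated entry might look like the following; the schedule,
guest IDs and options shown here are purely illustrative and depend on
what you configure in the GUI:

----
# illustrative /etc/cron.d/vzdump entry: back up guests 101 and 102
# every Saturday at 01:00 (schedule and options are examples only)
0 1 * * 6           root vzdump 101 102 --quiet 1 --mode snapshot --mailto root
----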

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup has started, the VM resumes full
operation if it was previously running. Consistency is guaranteed by
using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.
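
If the guest agent is installed inside the VM, the corresponding
option can be enabled for a guest like this (the VMID 777 is just an
example):

----
# qm set 777 --agent 1
----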

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also note that since the backups are done via a background
Qemu process, a stopped VM will appear as running for a short amount
of time while the VM disks are being read by Qemu. However, the VM
itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this results in a manifold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode and the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual volumes
can be excluded from the backup (and thus this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points, you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up, as their content is managed outside the {pve} storage library.
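
Tying the suspend-mode advice above together, a container backup using
a local temporary directory might look like this (the container ID and
path are examples):

----
# vzdump 777 --mode suspend --tmpdir /var/tmp
----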

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel-Ziv-Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip,
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz and zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
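
As a sketch, assuming the `pigz` and `zstd` settings described in the
configuration options, the thread count could be set in
`/etc/vzdump.conf` like this:

----
# example /etc/vzdump.conf snippet: use 4 threads for zstd;
# a non-zero pigz value enables pigz when gzip compression is selected
zstd: 4
pigz: 4
----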

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

|===
|File extension |Algorithm

|`.zst` |Zstandard (zstd) compression
|`.gz` or `.tgz` |gzip compression
|`.lzo` |lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.


[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not
consider backups that are already covered; it only considers older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and want the period between stored backups to grow
gradually.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting `keep-last`
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant. Extra
manual backups are already covered by `keep-last`.

`keep-daily=13` - together with `keep-last`, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long-term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
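
Taken together, the example above corresponds to the following
invocation (the guest ID 777 is illustrative):

----
# vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9
----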

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option of the restore CLI commands
to set up a restore-job-specific bandwidth limit. The limit is given in
KiB/s, which means that passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the available storage bandwidth
remains free for the already running virtual guests, and thus the restore
does not impact their operations.
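
For example, to cap the read speed at 10 MiB/s while restoring a VM
archive (the archive path and target VMID are illustrative):

----
# qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240
----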

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on storage)

In most cases, the generally available bandwidth of a storage stays the same
over time, so we implemented the possibility to set a default bandwidth limit
per configured storage. This can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with the option `--script`. This script is
called at various phases of the backup process, with parameters
set accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
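
As a minimal sketch of the mechanism (the shipped Perl example is the
authoritative reference for the exact phases and parameters), a hook
script could simply report what it was called with:

----
#!/bin/sh
# minimal illustrative hook: print the phase (first argument) and any
# further arguments passed for that phase; the output ends up in the
# vzdump task log
echo "vzdump hook called: phase=$1 args=$*"
----

The script must be executable; the path used with `--script` (for
example `/usr/local/bin/my-hook.sh`) is hypothetical.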

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

(only excludes tmp directories)

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]