[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the storage documentation on how to add a storage. A backup storage
must be a file-level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can later save those backups to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Scheduled backups are configured at the Datacenter level in the GUI.
This generates a job entry in `/etc/pve/jobs.cfg`, which is parsed and
executed by the `pvescheduler` daemon. Job schedules are defined using
xref:chapter_calendar_events[calendar events].
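
For reference, a generated job entry may look similar to the following
sketch (the job identifier and the exact set of keys depend on the
configured job):

----
vzdump: backup-a1b2c3d4-e5f6
    schedule sat 02:00
    storage my_backup_storage
    mode snapshot
    all 1
    enabled 1
----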

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM returns to
full operation if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.
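+
For example, to enable the guest agent option for an existing VM with the
ID 777 (the agent must also be installed and running inside the guest for
the freeze/thaw hooks to take effect):
+
 # qm set 777 --agent 1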

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also note that, since the backups are done via a background
Qemu process, a stopped VM will appear as running for a short amount of
time while the VM disks are being read by Qemu. However, the VM itself
is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on
a local file system too, as this will result in a significant performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode and the backup
storage is an NFS server.
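+
For example, to back up container 777 in suspend mode with a local
temporary directory (the path is illustrative):
+
 # vzdump 777 --mode suspend --tmpdir /var/lib/vz/tmp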

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Individual volumes can be excluded from the backup (and
thus from this requirement) by using the `backup=no` mount point option.

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points, you can set the *Backup*
option to include the mount point in the backup. Device and bind mounts are
never backed up, as their content is managed outside the {pve} storage library.
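
For example, assuming container 777 has a volume mount point `mp0`, the
following sketch would enable its *Backup* option (the volume name and
mount path are illustrative):

 # pct set 777 -mp0 local:777/vm-777-disk-1.raw,mp=/mnt/data,backup=1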

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You
can limit the number of backups that are kept with various retention options,
see the xref:vzdump_retention[Backup Retention] section below.

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel–Ziv–Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.
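
For example, to make multi-threaded zstd the default compression for all
backups, you could set the following in `/etc/vzdump.conf` (a sketch; the
thread count of 4 is arbitrary):

----
compress: zstd
zstd: 4
----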

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

[options="header"]
|===
|File extension |Compression algorithm
|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.

Backup Encryption
-----------------

For Proxmox Backup Server storages, you can optionally set up client-side
encryption of backups; see xref:storage_pbs_encryption[the corresponding section].

[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
`ISO week date`-system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not consider
backups that are already covered; it only considers older backups.

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between the backups that are kept
gradually grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting `keep-last`
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant. Extra
manual backups are already covered by `keep-last`.

`keep-daily=13` - together with `keep-last`, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long-term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.
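
Put together, this example corresponds to the following retention settings:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-weekly=8,keep-monthly=11,keep-yearly=9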

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_protection]]
Backup Protection
-----------------

You can mark a backup as `protected` to prevent its removal. Attempting to
remove a protected backup via {pve}'s UI or API will fail. However, manual
removal of a backup file via CLI is still possible. Protected backups are
ignored by pruning and do not count towards the retention settings.

For filesystem-based storages, the protection is implemented via a sentinel file
`<backup-name>.protected`. For Proxmox Backup Server, it is handled on the
server side.
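
As a sketch, protection could also be toggled via the API using `pvesh`
(the node name, storage ID, and volume ID below are illustrative):

----
# pvesh set /nodes/mynode/storage/my_backup_storage/content/my_backup_storage:backup/vzdump-lxc-777-2024_01_01-00_00_00.tar.zst --protected 1
----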

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:

`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests, as access
to storage can get congested.

To avoid this, you can set bandwidth limits for a restore job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate' permissions on the affected storage.

You can use the `--bwlimit <integer>` option from the restore CLI commands
to set up a restore job specific bandwidth limit. The limit is given in
KiB/s, which means that passing `10240' will limit the read speed of the
backup archive to 10 MiB/s, ensuring that the rest of the available storage
bandwidth remains usable for the already running virtual guests, and thus
the restore does not impact their operations.
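
For example, to restore a VM archive while limiting the read speed to
10 MiB/s (the archive path and VM ID are illustrative):

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601 --bwlimit 10240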

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate'
permissions on storage)

In most cases, the generally available bandwidth of a storage stays the same
over time. Therefore, we implemented the possibility to set a default
bandwidth limit per configured storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----
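
For example, to set a default restore bandwidth limit of 50 MiB/s
(51200 KiB/s) for a storage named `my_backup_storage` (the storage ID
is illustrative):

----
# pvesm set my_backup_storage --bwlimit restore=51200
----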

Live-Restore
~~~~~~~~~~~~

Restoring a large backup can take a long time, during which the guest remains
unavailable. For VM backups stored on a Proxmox Backup Server, this wait
time can be mitigated using the live-restore option.

Enabling live-restore via either the checkbox in the GUI or the `--live-restore`
argument of `qmrestore` causes the VM to start as soon as the restore
begins. Data is copied in the background, prioritizing chunks that the VM is
actively accessing.
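
For example (the backup volume ID and VM ID are illustrative):

 # qmrestore my-pbs:backup/vm/888/2024-01-01T00:00:00Z 601 --live-restore 1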

Note that this comes with two caveats:

* During live-restore, the VM will operate with limited disk read speeds, as
data has to be loaded from the backup server (once loaded, however, it is
immediately available on the destination storage, so accessing data twice
only incurs the penalty the first time). Write speeds are largely unaffected.
* If the live-restore fails for any reason, the VM will be left in an
undefined state - that is, not all data might have been copied from the
backup, and it is _most likely_ not possible to keep any data that was written
during the failed restore operation.

This mode of operation is especially useful for large VMs, where only a small
amount of data is required for initial operation, e.g. web servers - once the
OS and necessary services have been started, the VM is operational, while the
background task continues copying seldom-used data.

Single File Restore
~~~~~~~~~~~~~~~~~~~

The 'File Restore' button in the 'Backups' tab of the storage GUI can be used to
open a file browser directly on the data contained in a backup. This feature
is only available for backups on a Proxmox Backup Server.

For containers, the first layer of the file tree shows all included 'pxar'
archives, which can be opened and browsed freely. For VMs, the first layer shows
contained drive images, which can be opened to reveal a list of supported
storage technologies found on the drive. In the most basic case, this will be an
entry called 'part', representing a partition table, which contains entries for
each partition found on the drive. Note that for VMs, not all data might be
accessible (unsupported guest file systems, storage technologies, etc.).

Files and directories can be downloaded using the 'Download' button; directories
are compressed into a zip archive on the fly.

To enable secure access to VM images, which might contain untrusted data, a
temporary VM (not visible as a guest) is started. This does not mean that data
downloaded from such an archive is inherently safe, but it avoids exposing the
hypervisor system to danger. The VM will stop itself after a timeout. This
entire process happens transparently from a user's point of view.

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters
accordingly set. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
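
As a minimal sketch of the interface (phase names and arguments follow the
shipped example script; consult `vzdump-hook-script.pl` for the full list of
phases and environment variables):

----
#!/usr/bin/perl
# Minimal vzdump hook script sketch: report finished guest backups.
use strict;
use warnings;

# The first argument is the phase, e.g. 'job-start', 'backup-start',
# 'backup-end', 'log-end' or 'job-end'.
my ($phase, @args) = @ARGV;

if ($phase eq 'backup-end') {
    # For per-guest phases, vzdump passes the mode and the VM ID.
    my ($mode, $vmid) = @args;
    print "finished backup of guest $vmid (mode: $mode)\n";
}

exit(0);
----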

File Exclusions
---------------

NOTE: This option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]