[[chapter_vzdump]]
ifdef::manvolnum[]
vzdump(1)
=========
:pve-toplevel:

NAME
----

vzdump - Backup Utility for VMs and Containers


SYNOPSIS
--------

include::vzdump.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
Backup and Restore
==================
:pve-toplevel:
endif::manvolnum[]

Backups are a requirement for any sensible IT deployment, and {pve}
provides a fully integrated solution, using the capabilities of each
storage and each guest system type. This allows the system
administrator to fine-tune, via the `mode` option, the trade-off
between backup consistency and guest system downtime.

{pve} backups are always full backups - containing the VM/CT
configuration and all data. Backups can be started via the GUI or via
the `vzdump` command line tool.

.Backup Storage

Before a backup can run, a backup storage must be defined. Refer to
the Storage documentation on how to add a storage. A backup storage
must be a file-level storage, as backups are stored as regular files.
In most situations, using an NFS server is a good way to store backups.
You can save those backups later to a tape drive, for off-site
archiving.

.Scheduled Backup

Backup jobs can be scheduled so that they are executed automatically
on specific days and times, for selectable nodes and guest systems.
Configuration of scheduled backups is done at the Datacenter level in
the GUI, which will generate a cron entry in `/etc/cron.d/vzdump`.

Backup modes
------------

There are several ways to provide consistency (option `mode`),
depending on the guest type.

.Backup modes for VMs:

`stop` mode::

This mode provides the highest consistency of the backup, at the cost
of a short downtime in the VM operation. It works by executing an
orderly shutdown of the VM, and then runs a background Qemu process to
back up the VM data. After the backup is started, the VM goes to full
operation mode if it was previously running. Consistency is guaranteed
by using the live backup feature.

`suspend` mode::

This mode is provided for compatibility reasons, and suspends the VM
before calling the `snapshot` mode. Since suspending the VM results in
a longer downtime and does not necessarily improve the data
consistency, the use of the `snapshot` mode is recommended instead.

`snapshot` mode::

This mode provides the lowest operation downtime, at the cost of a
small inconsistency risk. It works by performing a {pve} live
backup, in which data blocks are copied while the VM is running. If the
guest agent is enabled (`agent: 1`) and running, it calls
`guest-fsfreeze-freeze` and `guest-fsfreeze-thaw` to improve
consistency.

A technical overview of the {pve} live backup for QemuServer can
be found online
https://git.proxmox.com/?p=pve-qemu.git;a=blob_plain;f=backup.txt[here].

NOTE: {pve} live backup provides snapshot-like semantics on any
storage type. It does not require that the underlying storage supports
snapshots. Also please note that since the backups are done via
a background Qemu process, a stopped VM will appear as running for a
short amount of time while the VM disks are being read by Qemu.
However, the VM itself is not booted; only its disk(s) are read.

.Backup modes for Containers:

`stop` mode::

Stop the container for the duration of the backup. This potentially
results in a very long downtime.

`suspend` mode::

This mode uses rsync to copy the container data to a temporary
location (see option `--tmpdir`). Then the container is suspended and
a second rsync copies changed files. After that, the container is
started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a manifold performance
improvement. Use of a local `tmpdir` is also required if you want to
back up a local container using ACLs in suspend mode if the backup
storage is an NFS server.

`snapshot` mode::

This mode uses the snapshotting facilities of the underlying
storage. First, the container will be suspended to ensure data consistency.
A temporary snapshot of the container's volumes will be made and the
snapshot content will be archived in a tar file. Finally, the temporary
snapshot is deleted again.

NOTE: `snapshot` mode requires that all backed up volumes are on a storage that
supports snapshots. Using the `backup=no` mount point option, individual
volumes can be excluded from the backup (and thus from this requirement).

// see PVE::VZDump::LXC::prepare()
NOTE: By default, additional mount points besides the Root Disk mount point are
not included in backups. For volume mount points you can set the *Backup* option
to include the mount point in the backup. Device and bind mounts are never
backed up as their content is managed outside the {pve} storage library.

Backup File Names
-----------------

Newer versions of vzdump encode the guest type and the
backup time into the filename, for example

 vzdump-lxc-105-2009_10_09-11_04_43.tar

That way it is possible to store several backups in the same directory. You can
limit the number of backups that are kept with various retention options, see
the xref:vzdump_retention[Backup Retention] section below.
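
The naming scheme shown above can be illustrated with a short parsing sketch; `parse_backup_name` is a hypothetical helper written for this example, not a function shipped with vzdump:

```python
import re

# Hypothetical helper illustrating the vzdump naming scheme
# shown above; vzdump itself does not ship this function.
def parse_backup_name(filename):
    """Split a vzdump file name into guest type, VMID and timestamp."""
    m = re.match(
        r"vzdump-(?P<type>qemu|lxc|openvz)-(?P<vmid>\d+)-"
        r"(?P<date>\d{4}_\d{2}_\d{2})-(?P<time>\d{2}_\d{2}_\d{2})\.",
        filename,
    )
    if m is None:
        return None  # not a vzdump-style file name
    return {
        "type": m.group("type"),
        "vmid": int(m.group("vmid")),
        "timestamp": m.group("date") + "-" + m.group("time"),
    }

info = parse_backup_name("vzdump-lxc-105-2009_10_09-11_04_43.tar")
# info["type"] == "lxc", info["vmid"] == 105
```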

Backup File Compression
-----------------------

The backup file can be compressed with one of the following algorithms: `lzo`
footnote:[Lempel-Ziv-Oberhumer, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Lempel-Ziv-Oberhumer], `gzip` footnote:[gzip -
based on the DEFLATE algorithm https://en.wikipedia.org/wiki/Gzip] or `zstd`
footnote:[Zstandard, a lossless data compression algorithm
https://en.wikipedia.org/wiki/Zstandard].

Currently, Zstandard (zstd) is the fastest of these three algorithms.
Multi-threading is another advantage of zstd over lzo and gzip. Lzo and gzip
are more widely used and often installed by default.

You can install pigz footnote:[pigz - parallel implementation of gzip
https://zlib.net/pigz/] as a drop-in replacement for gzip to provide better
performance due to multi-threading. For pigz & zstd, the number of
threads/cores can be adjusted. See the
xref:vzdump_configuration[configuration options] below.

The extension of the backup file name can usually be used to determine which
compression algorithm has been used to create the backup.

|===
|.zst | Zstandard (zstd) compression
|.gz or .tgz | gzip compression
|.lzo | lzo compression
|===

If the backup file name doesn't end with one of the above file extensions, then
it was not compressed by vzdump.
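
As a sketch, the extension-to-algorithm mapping from the table above could look like this; `detect_compression` is an illustrative helper, not part of vzdump, and covers only the extensions the table documents:

```python
# Extension-to-algorithm mapping from the table above
# (illustration only; limited to the documented extensions).
COMPRESSION_BY_EXTENSION = {
    ".zst": "zstd",
    ".gz": "gzip",
    ".tgz": "gzip",
    ".lzo": "lzo",
}

def detect_compression(filename):
    """Return the compression algorithm implied by the file extension,
    or None if the backup was not compressed by vzdump."""
    for ext, algo in COMPRESSION_BY_EXTENSION.items():
        if filename.endswith(ext):
            return algo
    return None

detect_compression("vzdump-qemu-100-2024_01_01-00_00_00.vma.zst")  # "zstd"
```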


[[vzdump_retention]]
Backup Retention
----------------

With the `prune-backups` option you can specify which backups you want to keep
in a flexible manner. The following retention options are available:

`keep-all <boolean>` ::
Keep all backups. If this is `true`, no other options can be set.

`keep-last <N>` ::
Keep the last `<N>` backups.

`keep-hourly <N>` ::
Keep backups for the last `<N>` hours. If there is more than one
backup for a single hour, only the latest is kept.

`keep-daily <N>` ::
Keep backups for the last `<N>` days. If there is more than one
backup for a single day, only the latest is kept.

`keep-weekly <N>` ::
Keep backups for the last `<N>` weeks. If there is more than one
backup for a single week, only the latest is kept.

NOTE: Weeks start on Monday and end on Sunday. The software uses the
ISO week date system and handles weeks at the end of the year correctly.

`keep-monthly <N>` ::
Keep backups for the last `<N>` months. If there is more than one
backup for a single month, only the latest is kept.

`keep-yearly <N>` ::
Keep backups for the last `<N>` years. If there is more than one
backup for a single year, only the latest is kept.

The retention options are processed in the order given above. Each option
only covers backups within its time period. The next option does not consider
backups that are already covered; it will only take older backups into account.
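
This processing order can be sketched for the two simplest options; the code below is a simplified illustration of the semantics as described above, not the actual vzdump/PBS pruning implementation:

```python
from datetime import datetime

# Simplified sketch of the assumed selection order for the two most
# common options; an illustration, not the actual pruning code.
def prune(times, keep_last=0, keep_daily=0):
    """Return which backup times are kept, processing keep-last first."""
    remaining = sorted(times, reverse=True)  # newest first
    kept = remaining[:keep_last]             # keep-last: the N newest
    remaining = remaining[keep_last:]        # later options see only older ones

    # keep-daily: the latest backup of each of the next N distinct days
    seen_days = []
    for t in remaining:
        if t.date() not in seen_days:
            seen_days.append(t.date())
            if len(seen_days) <= keep_daily:
                kept.append(t)
    return kept

# two backups per day over three days
backups = [datetime(2024, 1, d, h) for d in (1, 2, 3) for h in (6, 18)]
keep = prune(backups, keep_last=2, keep_daily=2)
# keeps both Jan 3 backups (keep-last), plus the latest backup
# of Jan 2 and of Jan 1 (keep-daily)
```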

Specify the retention options you want to use as a
comma-separated list, for example:

 # vzdump 777 --prune-backups keep-last=3,keep-daily=13,keep-yearly=9

While you can pass `prune-backups` directly to `vzdump`, it is often more
sensible to configure the setting on the storage level, which can be done via
the web interface.

NOTE: The old `maxfiles` option is deprecated and should be replaced either by
`keep-last` or, in case `maxfiles` was `0` for unlimited retention, by
`keep-all`.


Prune Simulator
~~~~~~~~~~~~~~~

You can use the https://pbs.proxmox.com/docs/prune-simulator[prune simulator
of the Proxmox Backup Server documentation] to explore the effect of different
retention options with various backup schedules.

Retention Settings Example
~~~~~~~~~~~~~~~~~~~~~~~~~~

The backup frequency and retention of old backups may depend on how often data
changes and on how important an older state may be in a specific workload.
When backups act as a company's document archive, there may also be legal
requirements for how long backups must be kept.

For this example, we assume that you are doing daily backups, have a retention
period of 10 years, and that the period between stored backups gradually grows.

`keep-last=3` - even if only daily backups are taken, an admin may want to
create an extra one just before or after a big upgrade. Setting keep-last
ensures this.

`keep-hourly` is not set - for daily backups this is not relevant. You cover
extra manual backups already, with keep-last.

`keep-daily=13` - together with keep-last, which covers at least one
day, this ensures that you have at least two weeks of backups.

`keep-weekly=8` - ensures that you have at least two full months of
weekly backups.

`keep-monthly=11` - together with the previous keep settings, this
ensures that you have at least a year of monthly backups.

`keep-yearly=9` - this is for the long term archive. As you covered the
current year with the previous options, you would set this to nine for the
remaining ones, giving you a total of at least 10 years of coverage.

We recommend that you use a higher retention period than is minimally required
by your environment; you can always reduce it if you find it is unnecessarily
high, but you cannot recreate backups once they have been removed.

[[vzdump_restore]]
Restore
-------

A backup archive can be restored through the {pve} web GUI or through the
following CLI tools:


`pct restore`:: Container restore utility

`qmrestore`:: Virtual Machine restore utility

For details see the corresponding manual pages.

Bandwidth Limit
~~~~~~~~~~~~~~~

Restoring one or more big backups may need a lot of resources, especially
storage bandwidth for both reading from the backup storage and writing to
the target storage. This can negatively affect other virtual guests as access
to storage can get congested.

To avoid this you can set bandwidth limits for a backup job. {pve}
implements two kinds of limits for restoring archives:

* per-restore limit: denotes the maximal amount of bandwidth for
reading from a backup archive

* per-storage write limit: denotes the maximal amount of bandwidth used for
writing to a specific storage

The read limit indirectly affects the write limit, as we cannot write more
than we read. A smaller per-job limit will override a bigger per-storage
limit. A bigger per-job limit will only override the per-storage limit if
you have `Data.Allocate` permissions on the affected storage.

You can use the `--bwlimit <integer>` option of the restore CLI commands
to set a restore-job-specific bandwidth limit. The limit is given in KiB/s,
which means that passing `10240` will limit the read speed of the
backup to 10 MiB/s, ensuring that the rest of the possible storage bandwidth
is available for the already running virtual guests, and thus the backup
does not impact their operations.
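
The unit conversion quoted above can be checked with a line of arithmetic:

```python
# --bwlimit is given in KiB/s; sanity-check the 10240 -> 10 MiB/s
# figure quoted above (1 MiB = 1024 KiB).
limit_kib_s = 10240
limit_mib_s = limit_kib_s / 1024
# limit_mib_s == 10.0
```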

NOTE: You can use `0` for the `bwlimit` parameter to disable all limits for
a specific restore job. This can be helpful if you need to restore a very
important virtual guest as fast as possible. (Needs `Data.Allocate`
permissions on the storage.)

In most cases the bandwidth available from your storage stays the same over
time, so we implemented the possibility to set a default bandwidth limit per
configured storage; this can be done with:

----
# pvesm set STORAGEID --bwlimit restore=KIBs
----

[[vzdump_configuration]]
Configuration
-------------

Global configuration is stored in `/etc/vzdump.conf`. The file uses a
simple colon-separated key/value format. Each line has the following
format:

 OPTION: value

Blank lines in the file are ignored, and lines starting with a `#`
character are treated as comments and are also ignored. Values from
this file are used as defaults, and can be overwritten on the command
line.

We currently support the following options:

include::vzdump.conf.5-opts.adoc[]


.Example `vzdump.conf` Configuration
----
tmpdir: /mnt/fast_local_disk
storage: my_backup_storage
mode: snapshot
bwlimit: 10000
----
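
The format rules above (colon-separated pairs, blank lines and `#` comments ignored) can be sketched in a few lines; this is an illustration only, as vzdump's own parser is part of its Perl code base:

```python
# Minimal sketch of the colon-separated key/value format described
# above (illustration only; vzdump's actual parser is Perl).
def parse_vzdump_conf(text):
    options = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are ignored
        key, _, value = line.partition(":")
        options[key.strip()] = value.strip()
    return options

conf = parse_vzdump_conf("""
# default backup settings
tmpdir: /mnt/fast_local_disk
mode: snapshot
bwlimit: 10000
""")
# conf == {"tmpdir": "/mnt/fast_local_disk",
#          "mode": "snapshot", "bwlimit": "10000"}
```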

Hook Scripts
------------

You can specify a hook script with option `--script`. This script is
called at various phases of the backup process, with parameters set
accordingly. You can find an example in the documentation
directory (`vzdump-hook-script.pl`).
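
A hook script can be any executable. The sketch below assumes the calling convention used by the shipped Perl example (phase name as the first argument, mode and guest ID for per-guest phases, job details in environment variables such as `DUMPDIR`); consult `vzdump-hook-script.pl` for the authoritative interface:

```python
#!/usr/bin/env python3
# Hypothetical hook script sketch. Phase names and environment
# variables are assumed from the shipped vzdump-hook-script.pl
# example; consult that file for the authoritative interface.
import os
import sys

def main(argv):
    phase = argv[1]  # e.g. "job-start", "backup-start", "backup-end"
    if phase in ("job-start", "job-end", "job-abort"):
        # job-wide phases: details arrive via environment variables
        print(f"{phase}: dumpdir={os.environ.get('DUMPDIR')}")
    else:
        # per-guest phases are additionally passed mode and guest ID
        mode, vmid = argv[2], argv[3]
        print(f"{phase}: mode={mode} vmid={vmid}")
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv))
```

Make the script executable and pass it to a backup run, for example with `vzdump 777 --script /path/to/hook.py`.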

File Exclusions
---------------

NOTE: this option is only available for container backups.

`vzdump` skips the following files by default (disable with the option
`--stdexcludes 0`):

 /tmp/?*
 /var/tmp/?*
 /var/run/?*pid

You can also manually specify (additional) exclude paths, for example:

 # vzdump 777 --exclude-path /tmp/ --exclude-path '/var/foo*'

excludes the directory `/tmp/` and any file or directory named `/var/foo`,
`/var/foobar`, and so on.

Paths that do not start with a `/` are not anchored to the container's root,
but will match relative to any subdirectory. For example:

 # vzdump 777 --exclude-path bar

excludes any file or directory named `/bar`, `/var/bar`, `/var/foo/bar`, and
so on, but not `/bar2`.
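
The anchored vs. relative semantics can be sketched as follows; `is_excluded` is a simplified illustration of the matching rules as described above, not vzdump's actual matcher:

```python
import fnmatch

# Simplified sketch of the assumed matching rules described above;
# is_excluded is an illustrative helper, not vzdump's actual matcher.
def is_excluded(path, pattern):
    """Check an absolute container path against one --exclude-path pattern."""
    anchored = pattern.startswith("/")
    pat = pattern.strip("/")
    parts = path.strip("/").split("/")
    # anchored patterns match only from the container root;
    # relative ones may start at any path component
    starts = [0] if anchored else range(len(parts))
    for i in starts:
        tail = "/".join(parts[i:])
        # the pattern matches the path itself, or a parent directory of it
        if fnmatch.fnmatch(tail, pat) or fnmatch.fnmatch(tail, pat + "/*"):
            return True
    return False
```

With this sketch, `bar` matches `/bar`, `/var/bar` and `/var/foo/bar` but not `/bar2`, and `/var/foo*` matches `/var/foobar`, mirroring the examples above.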

Configuration files are also stored inside the backup archive
(in `./etc/vzdump/`) and will be correctly restored.

Examples
--------

Simply dump guest 777 - no snapshot, just archive the guest private area and
configuration files to the default dump directory (usually
`/var/lib/vz/dump/`).

 # vzdump 777

Use rsync and suspend/resume to create a snapshot (minimal downtime).

 # vzdump 777 --mode suspend

Back up all guest systems and send notification mails to root and admin.

 # vzdump --all --mode suspend --mailto root --mailto admin

Use snapshot mode (no downtime) and a non-default dump directory.

 # vzdump 777 --dumpdir /mnt/backup --mode snapshot

Back up more than one guest (selectively).

 # vzdump 101 102 103 --mailto root

Back up all guests, excluding 101 and 102.

 # vzdump --mode suspend --exclude 101,102

Restore a container to a new CT 600.

 # pct restore 600 /mnt/backup/vzdump-lxc-777.tar

Restore a QemuServer VM to VM 601.

 # qmrestore /mnt/backup/vzdump-qemu-888.vma 601

Clone an existing container 101 to a new container 300 with a 4GB root
file system, using pipes.

 # vzdump 101 --stdout | pct restore --rootfs 4 300 -


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
447