X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=pvesr.adoc;h=2bcc4d9486705324f3ea13471e29707ccef39dce;hp=69148ac16a84bf51b382fa88635fe7a821db9404;hb=a45c999b4586734621bbc968d67f87390739b270;hpb=45c218cfb93603e0afb7afc6c0a98c456b7066d8

diff --git a/pvesr.adoc b/pvesr.adoc
index 69148ac..2bcc4d9 100644
--- a/pvesr.adoc
+++ b/pvesr.adoc
@@ -1,3 +1,4 @@
+[[chapter_pvesr]]
 ifdef::manvolnum[]
 pvesr(1)
 ========
@@ -25,28 +26,52 @@ endif::manvolnum[]
 
 The `pvesr` command line tool manages the {PVE} storage replication
 framework. Storage replication brings redundancy for guests using
-local storage's and reduces migration time when you migrate a guest.
+local storage and reduces migration time.
 
-It replicates virtual disks to another node so that all data is
-available without using shared storage. Replication uses storage
-snapshots to minimize traffic sent over the network. New data is sent
-incrementally after the initial sync. In the case of a node failure,
-your guest data is still available on the replicated node.
+It replicates guest volumes to another node so that all data is available
+without using shared storage. Replication uses snapshots to minimize the
+traffic sent over the network: after an initial full sync, only new data is
+sent incrementally. In the case of a node failure, your guest data is
+still available on the replicated node.
 
-The minimal replication interval are 1 minute and the maximal interval is once a week.
-Interval schedule format is a subset of `systemd` calendar events.
-Every interval time your guest vdisk data will be synchronized,
-but only the new data will replicated. This reduce the amount of data to a minimum.
-New data are data what are written to the vdisk after the last replication.
+Replication runs automatically at configurable intervals. The minimum
+replication interval is one minute, the maximum interval is once a week.
+The format used to specify those intervals is a subset of `systemd`
+calendar events, see the xref:pvesr_schedule_time_format[Schedule Format]
+section.
 
-Every guest can replicate to many target nodes, but only one replication job per target node is allowed.
+Every guest can be replicated to multiple target nodes, but a guest cannot
+be replicated twice to the same target node.
 
-The migration of guests, where storage replication is activated, is currently only offline possible.
-When the guest will migrate to the target of the replication, only the delta of the data must migrated and
-the replication direction will switched automatically in the opposite direction.
-If you migrate to a node where you do not replicate, it will send the whole vdisk data to the new node and after the migration it continuous the replication job as usually.
+The bandwidth of each replication job can be limited, to avoid overloading
+a storage or server.
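+
+For example, a replication job for guest 100 could be created with `pvesr`;
+the target node name `pve1`, the 15 minute schedule, and the rate limit of
+10 MB/s used below are only placeholder values:
+
+----
+# pvesr create-local-job 100-0 pve1 --schedule "*/15" --rate 10
+----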
 
-WARNING: High-Availability is possible with Storage Replication but this can lead to lose data. So be aware of this problem before you use this combination.
+Virtual guests with active replication cannot currently be migrated online.
+Offline migration is supported in general. If you migrate to a node where
+the guest's data is already replicated, only the changes since the last
+synchronisation (the so-called `delta`) must be sent, which reduces the
+required time significantly. In this case, the replication direction is
+also switched automatically after the migration has finished.
+
+For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
+After you migrate it to `nodeB`, it is automatically replicated back from
+`nodeB` to `nodeA`.
+
+If you migrate to a node where the guest is not replicated, the whole disk
+data must be sent over. After the migration, the replication job continues
+to replicate this guest to the configured nodes.
+
+[IMPORTANT]
+====
+High-Availability is allowed in combination with storage replication, but it
+has the following implications:
+
+* redistributing services after a more preferred node comes online will lead
+  to errors.
+
+* recovery works, but there may be some data loss between the last synced
+  time and the time a node failed.
+====
 
 Supported Storage Types
 -----------------------
@@ -58,130 +83,207 @@ Supported Storage Types
 |ZFS (local) |zfspool |yes |yes
 |============================================
 
-Schedule
---------
-
-Proxmox VE has a very flexible replication scheduler with will explained in detail here.
+[[pvesr_schedule_time_format]]
+Schedule Format
+---------------
 
-A schedule string has following format.
+{pve} has a very flexible replication scheduler. It is based on the systemd
+time calendar event format.footnote:[see `man 7 systemd.time` for more information]
+Calendar events may be used to refer to one or more points in time in a
+single expression.
 
-[day of the week]
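+
+A few illustrative calendar events (the values are chosen purely as
+examples):
+
+----
+*/15                   every 15 minutes
+mon,tue,wed,thu,fri    every weekday at 0:00
+sat 5:00               every Saturday at 05:00
+----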