X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pvesr.adoc;h=a1a366c7bcbfd58f6f1043091c01d41c9a91ddac;hb=00271f41dbff6bfcce1389bd904337c823dc2d64;hp=9acb8aeace875cd2c8850d9458e20a5f6f40118f;hpb=236bec37f419463417e41d563ec91c341cd234a1;p=pve-docs.git

diff --git a/pvesr.adoc b/pvesr.adoc
index 9acb8ae..a1a366c 100644
--- a/pvesr.adoc
+++ b/pvesr.adoc
@@ -1,3 +1,4 @@
+[[chapter_pvesr]]
 ifdef::manvolnum[]
 pvesr(1)
 ========
@@ -23,28 +24,55 @@ Storage Replication
 :pve-toplevel:
 endif::manvolnum[]
 
-The {PVE} storage replication tool (`pvesr`) manage the Proxmox VE Storage Based Replication.
-Storage Replication bring guest redundancy for local storage's,
-reduce the migration time and will only replicate new data.
+The `pvesr` command line tool manages the {PVE} storage replication
+framework. Storage replication brings redundancy for guests using
+local storage and reduces migration time.
 
-It will replicate the vdisk of guest to an other node this make that data available
-without using shared/distributed storage. So in case of a node failure your guest data still available
-on the replicated node.
+It replicates guest volumes to another node so that all data is available
+without using shared storage. Replication uses snapshots to minimize traffic
+sent over the network. Therefore, new data is sent only incrementally after
+the initial full sync. In the case of a node failure, your guest data is
+still available on the replicated node.
 
-The minimal replication interval are 1 minute and the maximal interval is once a week.
-Interval schedule format is a subset of `systemd` calendar events.
-Every interval time your guest vdisk data will be synchronized,
-but only the new data will replicated. This reduce the amount of data to a minimum.
-New data are data what are written to the vdisk after the last replication.
+The replication is done automatically at configurable intervals.
+The minimum replication interval is one minute, and the maximum
+interval is once a week.
+The format used to specify those intervals is a subset of
+`systemd` calendar events, see the
+xref:pvesr_schedule_time_format[Schedule Format] section.
 
-Every guest can replicate to many target nodes, but only one replication job per target node is allowed.
+It is possible to replicate a guest to multiple target nodes,
+but not twice to the same target node.
 
-The migration of guests, where storage replication is activated, is currently only offline possible.
-When the guest will migrate to the target of the replication, only the delta of the data must migrated and
-the replication direction will switched automatically in the opposite direction.
-If you migrate to a node where you do not replicate, it will send the whole vdisk data to the new node and after the migration it continuous the replication job as usually.
+Each replication's bandwidth can be limited to avoid overloading a storage
+or server.
 
-WARNING: High-Availability is possible with Storage Replication but this can lead to lose data. So be aware of this problem before you use this combination.
+Guests with replication enabled can currently only be migrated offline.
+Only changes since the last replication (so-called `deltas`) need to be
+transferred if the guest is migrated to a node to which it is already
+replicated. This reduces the time needed significantly. The replication
+direction automatically switches if you migrate a guest to the replication
+target node.
+
+For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
+You migrate it to `nodeB`, so now it gets automatically replicated back from
+`nodeB` to `nodeA`.
+
+If you migrate to a node where the guest is not replicated, the whole disk
+data must be sent over. After the migration, the replication job continues to
+replicate this guest to the configured nodes.
+
+[IMPORTANT]
+====
+High-Availability is allowed in combination with storage replication, but it
+has the following implications:
+
+* as live-migrations are currently not possible, redistributing services after
+  a more preferred node comes online does not work. Keep that in mind when
+  configuring your HA groups and their priorities for replicated guests.
+
+* recovery works, but there may be some data loss between the last synced
+  time and the time a node failed.
+====
 
 Supported Storage Types
 -----------------------
@@ -56,130 +84,207 @@ Supported Storage Types
 |ZFS (local) |zfspool |yes |yes
 |============================================
 
-Schedule
---------
-
-Proxmox VE has a very flexible replication scheduler with will explained in detail here.
+[[pvesr_schedule_time_format]]
+Schedule Format
+---------------
 
-A schedule string has following format.
+{pve} has a very flexible replication scheduler. It is based on the systemd
+time calendar event format.footnote:[see `man 7 systemd.time` for more information]
+Calendar events may be used to refer to one or more points in time in a
+single expression.
 
-[day of the week]
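The patch above documents the schedule format and the per-job bandwidth limit only in prose, so a short command sketch may help reviewers. This is a hedch, not part of the patch: the job ID `100-0`, the node name `nodeB`, and the numeric values are hypothetical examples, assuming the `pvesr create-local-job` subcommand with its `--schedule` and `--rate` options.

```shell
# Hypothetical example: replicate guest 100 to node "nodeB" every 15 minutes,
# limiting replication bandwidth to 10 MB/s (job ID, node, and values made up).
pvesr create-local-job 100-0 nodeB --schedule "*/15" --rate 10

# Inspect the configured replication jobs and their current status.
pvesr list
pvesr status
```

The schedule string `*/15` is an instance of the systemd calendar event subset referenced in the patch, meaning the job runs at every fifteenth minute.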