[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command-line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.
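
If you want to see the snapshots that the replication framework keeps on a
ZFS-backed guest volume, you can list the snapshots of the corresponding
dataset. This is only an illustrative sketch; the dataset name
`rpool/data/vm-100-disk-0` is a placeholder for your actual volume:

----
# zfs list -t snapshot -o name,used rpool/data/vm-100-disk-0
----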

The replication is done automatically at configurable intervals.
The minimum replication interval is one minute, and the maximum interval is
once a week. The format used to specify those intervals is a subset of
`systemd` calendar events, see the
xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited to avoid overloading a storage
or server.
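
For example, assuming an existing job with the ID `100-0`, its bandwidth could
be capped at 10 MBps (megabytes per second) like this (see also the
command-line examples at the end of this chapter):

----
# pvesr update 100-0 --rate 10
----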

Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.
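
Continuing this example, the migration itself is an ordinary migration
command; only the deltas since the last replication run need to be
transferred. A minimal sketch, using the VM and node names from above:

----
# qm migrate 100 nodeB
----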

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|=============================================
|Description |Plugin type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|=============================================
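
For reference, a ZFS-backed storage usable for replication is defined in
`/etc/pve/storage.cfg` with the `zfspool` plugin type. The entry below is only
an example; the storage ID `local-zfs` and the pool `rpool/data` are
placeholders, and a storage with the same storage ID must also exist on the
replication target node:

----
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
----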

[[pvesr_schedule_time_format]]
Schedule Format
---------------
Replication uses xref:chapter_calendar_events[calendar events] for
configuring the schedule.
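
For illustration, assuming an existing job `100-0`, the following commands
would switch it to a daily run at 22:30 or a weekly run every Sunday at 02:00
(see the calendar events chapter for the full syntax):

----
# pvesr update 100-0 --schedule '22:30'
# pvesr update 100-0 --schedule 'sun 02:00'
----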

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended temporarily.
The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.
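
To check whether a job is currently in the error state from the command line,
the `pvesr` status overview on the node that holds the guest can be used (the
web GUI replication panel shows the same information):

----
# pvesr status
----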

Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are listed below. Depending on your setup,
there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot come back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the web UI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
 node operable again. Only if this is not possible at the moment, you may
 use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: If `expected votes` is set, avoid at all costs any changes which
affect the cluster (for example adding/removing nodes, storages, or virtual
guests). Only use it to get vital guests up and running again or to resolve
the quorum issue itself.

- move both guest configuration files from the original node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. The panels differ in which jobs are shown: all,
node-specific, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already selected,
as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of every
15 minutes is not desired. You may impose a rate-limit on a replication
job. The rate limit can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID and a job number, for example `100-0`.
This ID only needs to be specified manually when the CLI tool is used.


Command-line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a bandwidth limit of
10 MBps (megabytes per second), for the guest with ID 100, replicating to the
node `pve1`.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]