[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

Replication is done automatically at configurable intervals. The minimum
replication interval is one minute, and the maximum is once a week. The
format used to specify those intervals is a subset of `systemd` calendar
events, see the xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited, to avoid overloading a storage
or server.

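For example, assuming an existing job with the ID `100-0`, a limit of
10 MB/s could be set from the CLI:

----
# pvesr update 100-0 --rate 10
----
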
Guests with replication enabled can currently only be migrated offline.
Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.

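For example, such a migration can be triggered from the CLI as follows
(`100` and `nodeB` are placeholder values; the guest has to be powered off,
since replicated guests can currently only be migrated offline):

----
# qm migrate 100 nodeB
----
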
[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|============================================

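As a point of reference, a local ZFS storage entry in `/etc/pve/storage.cfg`
could look like the following (illustrative values; installations using ZFS as
the root file system typically ship a similar `local-zfs` entry by default):

----
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
----
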
[[pvesr_schedule_time_format]]
Schedule Format
---------------
Replication uses xref:chapter_calendar_events[calendar events] for
configuring the schedule.

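For illustration, a few schedules in this format (see the calendar events
chapter for the full syntax): `*/30` runs every 30 minutes and
`mon..fri 22:30` runs every weekday at 22:30. Such a value can be set on a
job, for example, via the CLI:

----
# pvesr update 100-0 --schedule 'mon..fri 22:30'
----
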
Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.

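The current state of the configured replication jobs can be inspected from
the CLI, for example with:

----
# pvesr status
----
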
Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are in the following list. Depending on your
setup, there may be another cause.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot get back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the WebUI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
 node operable again. Only if this is not possible at the moment, you may
 use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: When `expected votes` is set, avoid changes which affect the cluster
(for example adding/removing nodes, storages, or virtual guests) at all costs.
Only use it to get vital guests up and running again or to resolve the quorum
issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. These panels differ in which jobs are shown:
all, node-specific, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already selected,
as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
15 minutes` is not desired. You may impose a rate-limit on a replication
job. The rate limit can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.


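For example, all configured replication jobs, together with their IDs, can be
listed with:

----
# pvesr list
----
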
Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a limited bandwidth
of 10 MB/s (megabytes per second), for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]