[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
an initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

Replication is done automatically at configurable intervals. The minimum
replication interval is one minute, and the maximal interval is once a week.
The format used to specify those intervals is a subset of `systemd` calendar
events, see the xref:pvesr_schedule_time_format[Schedule Format] section.

Every guest can be replicated to multiple target nodes, but a guest cannot
be replicated twice to the same target node.

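For example, replicating guest 100 to two different target nodes means
creating one job per target node. A minimal CLI sketch (the node names
`nodeB` and `nodeC` are placeholders for your own nodes):

----
# pvesr create-local-job 100-0 nodeB --schedule '*/15'
# pvesr create-local-job 100-1 nodeC --schedule '*/15'
----
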
Each replication job's bandwidth can be limited, to avoid overloading a
storage or server.

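As a sketch, assuming a job `100-0` already exists, its bandwidth could be
capped to 10 MB/s with the `--rate` option (also used in the CLI examples
below):

----
# pvesr update 100-0 --rate 10
----
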
Virtual guests with active replication cannot currently use online migration.
Offline migration is supported in general. If you migrate to a node where
the guest's data is already replicated, only the changes since the last
synchronization (the so-called `delta`) must be sent, which reduces the
required time significantly. In this case, the replication direction is also
switched automatically after the migration has finished.

For example: VM 100 is currently on `nodeA` and is replicated to `nodeB`.
If you migrate it to `nodeB`, it is then automatically replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues
to replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but it
has the following implications:

* as live migration is currently not possible, redistributing services after
  a more preferred node comes online does not work. Keep that in mind when
  configuring your HA groups and their priorities for replicated guests.

* recovery works, but there may be some data loss between the last synced
  time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots |Stable
|ZFS (local) |zfspool  |yes       |yes
|============================================

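For reference, such a replication-capable ZFS storage is defined as a
`zfspool` entry in `/etc/pve/storage.cfg`. A minimal sketch, where the
storage ID `local-zfs` and the pool `rpool/data` are only common defaults
and may differ on your setup:

----
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
----
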
[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This allows you to configure a set of days on which the job should run.
You can also set one or more start times; these tell the replication
scheduler the moments in time when a job should start.
With this information, we could create a job which runs every workday at
10 PM: `'mon,tue,wed,thu,fri 22'`, which could be abbreviated to
`'mon..fri 22'`. Most reasonable schedules can be written quite intuitively
this way.

NOTE: Hours are set in 24-hour format.

To allow easier and shorter configuration, one or more repetition times can
be set. They indicate that replications are done on the start-time(s)
themselves and on every multiple of the repetition value added to them. If
you want to start replication at 8 AM and repeat it every 15 minutes until
9 AM, you would use: `'8:00/15'`

Here you can also see that if no hour separator (`:`) is used, the value is
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s) and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

To get additional ideas, look at the
xref:pvesr_schedule_format_examples[examples below].

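As a quick sketch, such a schedule string is simply passed via the
`--schedule` option when creating or updating a job (here assuming an
existing job `100-0`):

----
# pvesr update 100-0 --schedule 'mon..fri 22'
----
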
Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. Those formats can also be
mixed. If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hours and minutes can be lists
and ranges of values, using the same format as days.
Hours come first, then minutes. Hours can be omitted if not needed; in this
case `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples:
~~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String |Alternative |Meaning
|mon,tue,wed,thu,fri |mon..fri |Every working day at 0:00
|sat,sun |sat..sun |Only on weekends at 0:00
|mon,wed,fri |-- |Only on Monday, Wednesday and Friday at 0:00
|12:05 |12:05 |Every day at 12:05 PM
|*/5 |0/5 |Every five minutes
|mon..wed 30/10 |mon,tue,wed 30/10 |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |-- |Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20 |fri 12,13:5/20 |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5 |12/2:5 |Every day starting at 12:05 until 22:05, every 2 hours
|* |*/1 |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried in 30-minute intervals;
once this succeeds, the original schedule gets activated again.

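To check whether any job on a node is currently in the error state, you can,
for example, look at the job status from the CLI:

----
# pvesr status
----
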
Possible issues
~~~~~~~~~~~~~~~

These are only the most common issues; depending on your setup, there may
also be other causes.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to get hints about a problem's
cause.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicating to node B.
Node A failed and cannot get back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the WebUI

- check if the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise you to fix this first and to make
 the node operable again. Only if this is not possible at the moment, you may
 use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: If expected votes are set, avoid changes which affect the cluster
(for example adding/removing nodes, storages, virtual guests) at all costs.
Only use it to get vital guests up and running again or to resolve the quorum
issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

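To double-check that the guests are up on node B, you could, for example,
list them there:

----
# qm list
# pct list
----
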
Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown: all jobs, only
node-specific jobs, or only guest-specific jobs.

When adding a new job, you need to specify the virtual guest (if not already
selected) and the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `every
15 minutes` is not desired. You may also impose a rate limit on a
replication job; this can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.

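For example, `100-0` would be the first replication job of the guest with
VMID 100. The configured jobs and their IDs can be listed from the CLI, for
example with:

----
# pvesr list
----
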
Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a bandwidth limit
of 10 MB/s (megabytes per second), for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]