[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

The replication is done automatically at configurable intervals. The minimum
replication interval is one minute, and the maximum is once a week. The
format used to specify those intervals is a subset of `systemd` calendar
events, see the xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited, to avoid overloading a
storage or server.

Guests with replication enabled can currently only be migrated offline.
Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM 100 is currently on `nodeA` and gets replicated to `nodeB`.
If you migrate it to `nodeB`, it automatically gets replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues
to replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but it
has the following implications:

* as live-migrations are currently not possible, redistributing services after
  a more preferred node comes online does not work. Keep that in mind when
  configuring your HA groups and their priorities for replicated guests.

* recovery works, but there may be some data loss between the last synced
  time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This format allows you to configure a set of days on which the job should run.
You can also set one or more start times, which tell the replication scheduler
the moments in time when a job should start.
With this information, we can create a job which runs every workday at 10 PM:
`'mon,tue,wed,thu,fri 22'`, which can be abbreviated to `'mon..fri 22'`.
Most reasonable schedules can be written quite intuitively this way.

NOTE: Hours are given in 24-hour format.

To allow a convenient and shorter configuration, one or more repeat times per
guest can be set. They indicate that replications are done on the start-time(s)
themselves and on the start-time(s) plus all multiples of the repetition value.
If you want to start replication at 8 AM and repeat it every 15 minutes until
9 AM, you would use: `'8:00/15'`

Here you can see that if no hour separator (`:`) is used, the value gets
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s), and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

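Such a schedule can, for instance, be applied to an existing replication job
with `pvesr update` (the job ID `100-0` here is a placeholder; replication
job IDs are explained further below):

----
# pvesr update 100-0 --schedule 'mon..fri 22'
----
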
To get additional ideas, look at the
xref:pvesr_schedule_format_examples[examples below].

Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. These formats can be mixed.
If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hours and minutes can be lists
and ranges of values, using the same format as days.
First come hours, then minutes. Hours can be omitted if not needed; in this
case, `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples:
~~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String |Alternative |Meaning
|mon,tue,wed,thu,fri |mon..fri |Every working day at 0:00
|sat,sun |sat..sun |Only on weekends at 0:00
|mon,wed,fri |-- |Only on Monday, Wednesday and Friday at 0:00
|12:05 |12:05 |Every day at 12:05 PM
|*/5 |0/5 |Every five minutes
|mon..wed 30/10 |mon,tue,wed 30/10 |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |-- |Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20 |fri 12,13:5/20 |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5 |12/2:5 |Every day starting at 12:05 until 22:05, every 2 hours
|* |*/1 |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.

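The state of the replication jobs on a node, including their failure count,
can also be checked on the command line:

----
# pvesr status
----
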
Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are listed below. Depending on your setup,
there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.

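Besides the web GUI, the replication log can usually also be read directly on
the source node; on current installations it is kept under
`/var/log/pve/replicate/` in a file named after the job ID (the exact path
may differ between versions):

----
# cat /var/log/pve/replicate/100-0
----
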
Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A,
which are replicated to node B. Node A failed and cannot come back online.
Now you have to migrate the guests to node B manually.

- Connect to node B over SSH or open its shell via the WebUI.

- Check that the cluster is quorate:
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
  node operable again. Only if this is not currently possible, you may use
  the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: Avoid changes which affect the cluster while `expected votes` is set
(for example adding/removing nodes, storages, or virtual guests) at all costs.
Only use it to get vital guests up and running again, or to resolve the
quorum issue itself.

- Move both guest configuration files from the original node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs are shown:
all, node-specific, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already
selected, as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
15 minutes` is not desired. You may also impose a rate limit on a replication
job; this can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID and a job number.
This ID only needs to be specified manually if the CLI tool is used.

Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes with a limited bandwidth
of 10 MB/s (megabytes per second) for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

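List all replication jobs configured for guests on the local node:

----
# pvesr list
----
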
Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule of the job with ID `100-0` to once per hour, on the hour.

----
# pvesr update 100-0 --schedule '*:00'
----

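Remove the job with ID `100-0` from the replication configuration:

----
# pvesr delete 100-0
----
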
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]