[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]
The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

The replication is done automatically at configurable intervals. The minimum
replication interval is one minute, and the maximal interval is once a week.
The format used to specify those intervals is a subset of `systemd` calendar
events; see the xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited, to avoid overloading a
storage or server.
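
For instance, a job's bandwidth limit can be adjusted with the `--rate`
option of `pvesr update` (a sketch; the job ID `100-0` and the limit of
20 MB/s below are assumed example values):

----
# pvesr update 100-0 --rate 20
----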

Guests with replication enabled can currently only be migrated offline.
Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.
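
On the CLI, such an offline migration could be triggered as follows (a
sketch using the VMID `100` and target `nodeB` from the example above):

----
# qm migrate 100 nodeB
----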

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description    |PVE type  |Snapshots|Stable
|ZFS (local)    |zfspool   |yes      |yes
|============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This format allows you to configure a set of days on which the job should run.
You can also set one or more start times. It tells the replication scheduler
the moments in time when a job should start.
With this information, we can create a job which runs every workday at 10
PM: `'mon,tue,wed,thu,fri 22'`, which could be abbreviated to: `'mon..fri
22'`. Most reasonable schedules can be written quite intuitively this way.
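
Applied to an existing replication job, such a schedule could be set like
this (the job ID `100-0` is an assumed example value):

----
# pvesr update 100-0 --schedule 'mon..fri 22'
----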

NOTE: Hours are formatted in 24-hour format.

To allow a convenient and shorter configuration, one or more repeat times per
guest can be set. They indicate that replications are done at the
start-time(s) themselves and at the start-time(s) plus all multiples of the
repetition value. If you want to start replication at 8 AM and repeat it
every 15 minutes until 9 AM, you would use: `'8:00/15'`, that is, runs at
8:00, 8:15, 8:30 and 8:45.

Here you can see that if no hour separator (`:`) is used, the value gets
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s), and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

To get additional ideas, look at the
xref:pvesr_schedule_format_examples[examples below].

Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. These formats can be mixed.
If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hours and minutes can contain
lists and ranges of values, using the same format as days.
First come the hours, then the minutes. Hours can be omitted if not needed;
in this case, `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples:
~~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String        |Alternative        |Meaning
|mon,tue,wed,thu,fri    |mon..fri           |Every working day at 0:00
|sat,sun                |sat..sun           |Only on weekends at 0:00
|mon,wed,fri            |--                 |Only on Monday, Wednesday and Friday at 0:00
|12:05                  |12:05              |Every day at 12:05 PM
|*/5                    |0/5                |Every five minutes
|mon..wed 30/10         |mon,tue,wed 30/10  |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |--                 |Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20        |fri 12,13:5/20     |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5    |12/2:5             |Every day starting at 12:05 until 22:05, every 2 hours
|*                      |*/1                |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.
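
You can check the current state of the replication jobs on a node with the
`pvesr status` command, for example:

----
# pvesr status
----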

Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are in the following list. Depending on your
setup, there may be another cause.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.
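
On the CLI, the log of a job can usually be found below
`/var/log/pve/replicate/` on the source node (a sketch; the job ID `100-0`
is an assumed example value):

----
# cat /var/log/pve/replicate/100-0
----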

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot come back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the WebUI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
node operable again. Only if this is not possible at the moment, you may
use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: Avoid changes which affect the cluster if `expected votes` are set
(for example adding/removing nodes, storages, virtual guests) at all costs.
Only use it to get vital guests up and running again or to resolve the quorum
issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown:
all, node- or guest-specific jobs.
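
On the CLI, the configured replication jobs can be listed with:

----
# pvesr list
----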
242
243 When adding a new job, you need to specify the guest if not already selected
244 as well as the target node. The replication
245 xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
246 15 minutes` is not desired. You may impose a rate-limit on a replication
247 job. The rate limit can help to keep the load on the storage acceptable.
248
249 A replication job is identified by a cluster-wide unique ID. This ID is
250 composed of the VMID in addition to a job number.
251 This ID must only be specified manually if the CLI tool is used.

Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a limited bandwidth
of 10 MB/s (megabytes per second), for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour
(minute 0 of every hour).

----
# pvesr update 100-0 --schedule '*:00'
----

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]