[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
an initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

Replication runs automatically at configurable intervals. The minimum
replication interval is one minute and the maximum is once a week. The
format used to specify those intervals is a subset of `systemd` calendar
events, see the xref:pvesr_schedule_time_format[Schedule Format] section.

Every guest can be replicated to multiple target nodes, but a guest cannot
be replicated twice to the same target node.

The bandwidth of each replication job can be limited, to avoid overloading
a storage or server.

Virtual guests with active replication cannot currently use online
migration. Offline migration is supported in general. If you migrate to a
node where the guest's data is already replicated, only the changes since
the last synchronization (the so-called `delta`) must be sent, which
reduces the required time significantly. In this case, the replication
direction automatically switches nodes after the migration has finished.

For example: VM100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues
to replicate this guest to the configured nodes.

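For instance, such an offline migration can be started from the CLI; the
guest ID `100` and the target node name `nodeB` here follow the example
above (`qm migrate` is for VMs, containers use `pct migrate`):

----
# qm migrate 100 nodeB
----
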
[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but it
has the following implications:

* redistributing services after a more preferred node comes online will lead
to errors.

* recovery works, but there may be some data loss between the last synced
time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This allows you to configure a set of days on which the job should run.
You can also set one or more start times, which tell the replication
scheduler the moments in time when a job should start.
With this information, we could create a job which runs every workday at
10 PM: `'mon,tue,wed,thu,fri 22'`, which can be abbreviated to
`'mon..fri 22'`. Most reasonable schedules can be written quite intuitively
this way.

NOTE: Hours are set in 24h format.

To allow easier and shorter configuration, one or more repetition times can
be set. They indicate that replications are done at the start-time(s)
themselves, and at the start-time(s) plus all multiples of the repetition
value. If you want to start replication at 8 AM and repeat it every 15
minutes until 9 AM, you would use: `'8:00/15'`

Here you can also see that if no hour separator (`:`) is used, the value is
interpreted as minutes. If such a separator is used, the value on the left
denotes the hour(s) and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

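As a plain-shell illustration of these semantics (not a {pve} command), the
start times matched by `'8:00/15'` can be enumerated: minute 0 of hour 8,
plus every further multiple of 15 minutes within that hour.

----
# print the start times produced by the schedule '8:00/15'
for m in $(seq 0 15 45); do
    printf '8:%02d\n' "$m"
done
----

This prints `8:00`, `8:15`, `8:30` and `8:45`.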
To get additional ideas, look at the
xref:pvesr_schedule_format_examples[examples below].

Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. These formats can also be
mixed. If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hour and minute can be lists
and ranges of values, using the same format as days.
First come hours, then minutes. Hours can be omitted if not needed, in
which case `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples:
~~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String |Alternative |Meaning
|mon,tue,wed,thu,fri |mon..fri |Every working day at 0:00
|sat,sun |sat..sun |Only on weekends at 0:00
|mon,wed,fri |-- |Only on Monday, Wednesday and Friday at 0:00
|12:05 |12:05 |Every day at 12:05 PM
|*/5 |0/5 |Every five minutes
|mon..wed 30/10 |mon,tue,wed 30/10 |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |-- |Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20 |fri 12,13:5/20 |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5 |12/2:5 |Every day starting at 12:05 until 22:05, every 2 hours
|* |*/1 |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried in a 30 minute
interval. Once this succeeds, the original schedule gets activated again.

Possible issues
~~~~~~~~~~~~~~~

This list covers only the most common issues; depending on your setup,
there may also be another cause.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to get hints about a
problem's cause.

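From the CLI, the current state of all replication jobs on a node, including
their fail count and next scheduled sync, can be checked with:

----
# pvesr status
----
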
Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicating to node B.
Node A failed and cannot be brought back online. Now you have to migrate
the guests to node B manually.

- connect to node B over ssh or open its shell via the WebUI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making
the node operable again. Only if this is not possible at the moment may
you use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: If expected votes are set, avoid changes which affect the cluster
(for example adding/removing nodes, storages, virtual guests) at all costs.
Only use it to get vital guests up and running again or to resolve the
quorum issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown: all,
node-specific only, or guest-specific only.

When adding a new job, you need to specify the virtual guest (if not
already selected) and the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of
`all 15 minutes` is not desired. You may also impose rate limiting on a
replication job; this can help to keep the storage load acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID must only be specified manually if the CLI tool is used.

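Behind the scenes, replication jobs are stored in the cluster-wide file
`/etc/pve/replication.cfg`. An entry for a job `100-0` might look roughly
as follows; the target node name `pve2` and the property values are
illustrative:

----
local: 100-0
        target pve2
        schedule */15
        rate 10
----
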
Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a limited
bandwidth of 10 MB/s (megabytes per second), for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----

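Existing jobs can also be listed and removed again via the CLI:

----
# pvesr list
# pvesr delete 100-0
----
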
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]