[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.

The replication is done automatically at configurable intervals. The minimum
replication interval is one minute, and the maximum interval is once a week.
The format used to specify those intervals is a subset of `systemd` calendar
events, see the xref:pvesr_schedule_time_format[Schedule Format] section.

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication job's bandwidth can be limited to avoid overloading a storage
or server.

Guests with replication enabled can currently only be migrated offline.
Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM 100 is currently on `nodeA` and gets replicated to `nodeB`.
If you migrate it to `nodeB`, it automatically gets replicated back from
`nodeB` to `nodeA`.

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time the node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|============================================
|Description |PVE type |Snapshots |Stable
|ZFS (local) |zfspool |yes |yes
|============================================

[[pvesr_schedule_time_format]]
Schedule Format
---------------

{pve} has a very flexible replication scheduler. It is based on the systemd
time calendar event format.footnote:[see `man 7 systemd.time` for more information]
Calendar events may be used to refer to one or more points in time in a
single expression.

Such a calendar event uses the following format:

----
[day(s)] [[start-time(s)][/repetition-time(s)]]
----

This format allows you to configure a set of days on which the job should run.
You can also set one or more start times. It tells the replication scheduler
the moments in time when a job should start.
With this information, we can create a job which runs every workday at 10
PM: `'mon,tue,wed,thu,fri 22'`, which could be abbreviated to: `'mon..fri
22'`. Most reasonable schedules can be written quite intuitively this way.

NOTE: Hours are formatted in 24-hour format.

To allow a convenient and shorter configuration, one or more repeat times per
guest can be set. They indicate that replications are done at the start-time(s)
themselves and at the start-time(s) plus all multiples of the repetition value.
If you want to start replication at 8 AM and repeat it every 15 minutes until
9 AM, you would use: `'8:00/15'`

Here you can see that if no hour separator (`:`) is used, the value gets
interpreted as minute(s). If such a separator is used, the value on the left
denotes the hour(s), and the value on the right denotes the minute(s).
Further, you can use `*` to match all possible values.

To get additional ideas, look at the
xref:pvesr_schedule_format_examples[Examples below].

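If you want to try out a schedule string right away, you can apply it to an
existing replication job on the command line. This is a minimal sketch,
assuming a job with the ID `100-0` already exists; the `pvesr update` command
is shown in more detail in the CLI examples below:

----
# pvesr update 100-0 --schedule 'mon..fri 15:30'
----

This sets the job to run on every workday at 3:30 PM.
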
Detailed Specification
~~~~~~~~~~~~~~~~~~~~~~

days:: Days are specified with an abbreviated English version: `sun, mon,
tue, wed, thu, fri and sat`. You may use multiple days as a comma-separated
list. A range of days can also be set by specifying the start and end day
separated by ``..'', for example `mon..fri`. These formats can be mixed.
If omitted, `'*'` is assumed.

time-format:: A time format consists of hours and minutes interval lists.
Hours and minutes are separated by `':'`. Both hours and minutes can be lists
and ranges of values, using the same format as days.
First are hours, then minutes. Hours can be omitted if not needed. In this
case, `'*'` is assumed for the value of hours.
The valid range for values is `0-23` for hours and `0-59` for minutes.

[[pvesr_schedule_format_examples]]
Examples
~~~~~~~~

.Schedule Examples
[width="100%",options="header"]
|==============================================================================
|Schedule String |Alternative |Meaning
|mon,tue,wed,thu,fri |mon..fri |Every working day at 0:00
|sat,sun |sat..sun |Only on weekends at 0:00
|mon,wed,fri |-- |Only on Monday, Wednesday and Friday at 0:00
|12:05 |12:05 |Every day at 12:05 PM
|*/5 |0/5 |Every five minutes
|mon..wed 30/10 |mon,tue,wed 30/10 |Monday, Tuesday, Wednesday 30, 40 and 50 minutes after every full hour
|mon..fri 8..17,22:0/15 |-- |Every working day every 15 minutes between 8 AM and 6 PM and between 10 PM and 11 PM
|fri 12..13:5/20 |fri 12,13:5/20 |Friday at 12:05, 12:25, 12:45, 13:05, 13:25 and 13:45
|12,14,16,18,20,22:5 |12/2:5 |Every day starting at 12:05 until 22:05, every 2 hours
|* |*/1 |Every minute (minimum interval)
|==============================================================================

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication intervals get suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.

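You can inspect the current state of the local replication jobs, including
those in the error state, on the command line. A minimal sketch, assuming the
affected guest has the ID 100 (the `--guest` filter is optional):

----
# pvesr status --guest 100
----
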
Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are in the following list. Depending on your
setup, there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* No storage with the same storage ID is available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot come back online. Now you have to migrate the guests
to node B manually.

- connect to node B over ssh or open its shell via the WebUI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
  node operable again. Only if this is not possible at the moment should you
  use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: At all costs, avoid changes which affect the cluster if `expected
votes` is set (for example, adding/removing nodes, storages, or virtual
guests). Only use it to get vital guests up and running again or to resolve
the quorum issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs get shown:
all, node-specific, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already selected,
as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of `all
15 minutes` is not desired. You may also impose a rate limit on a replication
job; the rate limit can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID and a job number.
This ID only needs to be specified manually if the CLI tool is used.

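Apart from the GUI, the configured jobs can also be listed and inspected on
the CLI. A short sketch, assuming at least one configured job and the example
job ID `100-0`:

----
# pvesr list
# pvesr read 100-0
----

Here, `pvesr list` prints the configured replication jobs, while `pvesr read`
shows the configuration of a single job.
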
Command Line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes, with a limited bandwidth
of 10 MBps (megabytes per second), for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----
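
A job can also be removed again. The following sketch deletes the job with ID
`100-0`; the job gets marked for removal, which is then carried out by the
replication scheduler:

----
# pvesr delete 100-0
----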
ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]