[[chapter_pvesr]]
ifdef::manvolnum[]
pvesr(1)
========
:pve-toplevel:

NAME
----

pvesr - Proxmox VE Storage Replication

SYNOPSIS
--------

include::pvesr.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
Storage Replication
===================
:pve-toplevel:
endif::manvolnum[]

The `pvesr` command-line tool manages the {PVE} storage replication
framework. Storage replication brings redundancy for guests using
local storage and reduces migration time.

It replicates guest volumes to another node so that all data is available
without using shared storage. Replication uses snapshots to minimize traffic
sent over the network. Therefore, new data is sent only incrementally after
the initial full sync. In the case of a node failure, your guest data is
still available on the replicated node.
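
On a ZFS-backed storage, you can inspect the snapshots that replication keeps
with the usual ZFS tools. A minimal sketch, assuming the default pool path
`rpool/data` (adjust this to your actual pool and dataset layout):

----
# zfs list -t snapshot -r rpool/data
----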

The replication is done automatically at configurable intervals.
The minimum replication interval is one minute, and the maximum interval is
once a week. The format used to specify those intervals is a subset of
`systemd` calendar events; see the
xref:pvesr_schedule_time_format[Schedule Format] section.
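
For illustration, a few schedule strings in that format (see the linked
section for the full syntax; these values are examples, not recommendations):

----
*/15           (every 15 minutes, the default)
*/5            (every 5 minutes)
sat 18:15      (every Saturday at 18:15)
mon..fri 0:00  (every night from Monday to Friday at midnight)
----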

It is possible to replicate a guest to multiple target nodes,
but not twice to the same target node.

Each replication's bandwidth can be limited to avoid overloading a storage
or server.
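
As a sketch of both, the following CLI calls (the target node names `nodeB`
and `nodeC` are placeholders) create two jobs for the same guest, each
replicating to a different target node, with the second one rate-limited to
20 megabytes per second:

----
# pvesr create-local-job 100-0 nodeB --schedule "*/15"
# pvesr create-local-job 100-1 nodeC --schedule "*/15" --rate 20
----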

Only changes since the last replication (so-called `deltas`) need to be
transferred if the guest is migrated to a node to which it is already
replicated. This reduces the time needed significantly. The replication
direction automatically switches if you migrate a guest to the replication
target node.

For example: VM 100 is currently on `nodeA` and gets replicated to `nodeB`.
You migrate it to `nodeB`, so now it gets automatically replicated back from
`nodeB` to `nodeA`.
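
Such a migration could be triggered like this (a minimal sketch using the VM
ID and node names from the example above; use `pct migrate` for containers):

----
# qm migrate 100 nodeB
----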

If you migrate to a node where the guest is not replicated, the whole disk
data must be sent over. After the migration, the replication job continues to
replicate this guest to the configured nodes.

[IMPORTANT]
====
High-Availability is allowed in combination with storage replication, but there
may be some data loss between the last synced time and the time a node failed.
====

Supported Storage Types
-----------------------

.Storage Types
[width="100%",options="header"]
|=============================================
|Description |Plugin type |Snapshots|Stable
|ZFS (local) |zfspool |yes |yes
|=============================================
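
The plugin type corresponds to the storage definition in
`/etc/pve/storage.cfg`. As an illustration, a local ZFS storage usable for
replication could be defined as follows (the storage ID `local-zfs` and the
pool `rpool/data` are example values; a storage with the same storage ID must
be available on every node involved):

----
zfspool: local-zfs
        pool rpool/data
        content images,rootdir
----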

[[pvesr_schedule_time_format]]
Schedule Format
---------------
Replication uses xref:chapter_calendar_events[calendar events] for
configuring the schedule.

Error Handling
--------------

If a replication job encounters problems, it is placed in an error state.
In this state, the configured replication schedule is suspended
temporarily. The failed replication is then retried at 30-minute intervals.
Once this succeeds, the original schedule gets activated again.
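
To check whether a job is currently in an error state and when it last ran,
you can look at the replication panel in the GUI or query the status on the
CLI, for example:

----
# pvesr status
----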

Possible issues
~~~~~~~~~~~~~~~

Some of the most common issues are listed below. Depending on your
setup, there may be other causes.

* Network is not working.

* No free space left on the replication target storage.

* Storage with the same storage ID is not available on the target node.

NOTE: You can always use the replication log to find out what is causing the problem.

Migrating a guest in case of Error
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
// FIXME: move this to better fitting chapter (sysadmin ?) and only link to
// it here

In the case of a grave error, a virtual guest may get stuck on a failed
node. You then need to move it manually to a working node again.

Example
~~~~~~~

Let's assume that you have two guests (VM 100 and CT 200) running on node A
and replicated to node B.
Node A failed and cannot come back online. Now you have to migrate the guests
to node B manually.

- connect to node B over SSH or open its shell via the web UI

- check that the cluster is quorate
+
----
# pvecm status
----

- If you have no quorum, we strongly advise fixing this first and making the
node operable again. Only if this is not possible at the moment should you
use the following command to enforce quorum on the current node:
+
----
# pvecm expected 1
----

WARNING: Avoid changes which affect the cluster (for example adding/removing
nodes, storages, or virtual guests) at all costs while `expected votes` is set.
Only use it to get vital guests up and running again, or to resolve the quorum
issue itself.

- move both guest configuration files from the origin node A to node B:
+
----
# mv /etc/pve/nodes/A/qemu-server/100.conf /etc/pve/nodes/B/qemu-server/100.conf
# mv /etc/pve/nodes/A/lxc/200.conf /etc/pve/nodes/B/lxc/200.conf
----

- Now you can start the guests again:
+
----
# qm start 100
# pct start 200
----

Remember to replace the VMIDs and node names with your respective values.

Managing Jobs
-------------

[thumbnail="screenshot/gui-qemu-add-replication-job.png"]

You can use the web GUI to create, modify, and remove replication jobs
easily. Additionally, the command-line interface (CLI) tool `pvesr` can be
used to do this.

You can find the replication panel on all levels (datacenter, node, virtual
guest) in the web GUI. They differ in which jobs are shown:
all, node-specific, or guest-specific jobs.

When adding a new job, you need to specify the guest, if not already selected,
as well as the target node. The replication
xref:pvesr_schedule_time_format[schedule] can be set if the default of every
15 minutes is not desired. You may impose a rate-limit on a replication
job. The rate limit can help to keep the load on the storage acceptable.

A replication job is identified by a cluster-wide unique ID. This ID is
composed of the VMID in addition to a job number.
This ID only needs to be specified manually if the CLI tool is used.
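
To see the IDs of all configured jobs, you can list them on the CLI, for
example:

----
# pvesr list
----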


Command-line Interface Examples
-------------------------------

Create a replication job which runs every 5 minutes with a limited bandwidth
of 10 MB/s (megabytes per second) for the guest with ID 100.

----
# pvesr create-local-job 100-0 pve1 --schedule "*/5" --rate 10
----

Disable an active job with ID `100-0`.

----
# pvesr disable 100-0
----

Enable a deactivated job with ID `100-0`.

----
# pvesr enable 100-0
----

Change the schedule interval of the job with ID `100-0` to once per hour.

----
# pvesr update 100-0 --schedule '*/00'
----
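
If a job is no longer needed, it can be removed again, for example the job
with ID `100-0`:

----
# pvesr delete 100-0
----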

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]