[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images can
either be stored on one or more local storages, or on shared storage
like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured
(POSIX) file system. In general, they are more flexible than block
level storage (see below), and allow you to store content of any type.
ZFS is probably the most advanced system, and it has full support for
snapshots and clones.

Block level storage::

This type allows you to store large 'raw' images. It is usually not
possible to store other files (ISO images, backups, ...) on such
storage types. Most modern block level storage implementations support
snapshots and clones. RADOS and GlusterFS are distributed systems,
replicating storage data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|Proxmox Backup |pbs         |both  |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
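
For illustration, such a setup could look like the following
`/etc/pve/storage.cfg` excerpt. This is only a sketch: the portal
address, target name, volume group and base volume are hypothetical
placeholders, and the actual base volume name depends on the LUN your
SAN presents.

----
# iSCSI target used purely as the physical layer, so it holds no content
iscsi: san
        portal 10.0.0.1
        target iqn.2003-01.org.example:storage.san1
        content none

# LVM volume group created on top of the iSCSI LUN, marked as shared
lvm: san-lvm
        vgname vg_san
        base san:0.0.0.scsi-<LUN_WWID>
        shared 1
        content images,rootdir
----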


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM
contains 3 GB of data. In that case only 3 GB are written to the
storage, even if the guest VM sees a 32 GB hard drive. In this way
thin provisioning allows you to create disk images which are larger
than the currently available storage blocks. You can create large disk
images for your VMs, and when the need arises, add more disks to your
storage without resizing the VMs' file systems.
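
As a quick illustration, you can observe thin provisioning with the
standard `qemu-img` tool (a sketch outside of {pve} proper; the file
name is made up for the example):

----
# create a 32 GB qcow2 image; almost nothing is written initially
qemu-img create -f qcow2 test-disk.qcow2 32G

# 'virtual size' reports 32 GiB, while 'disk size' is only a few hundred KiB
qemu-img info test-disk.qcow2
----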

All storage types which have the ``Snapshots'' feature also support
thin provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor free
space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But
it is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties require a
value. Some have reasonable defaults, in which case you can omit the
value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is usable/accessible.
One can use this property to restrict storage access to a limited set
of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-QEMU VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of
backup files per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

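To illustrate how these properties combine, here is a sketch of a
hypothetical NFS pool definition (server address, export path, node
names and retention values are invented for the example):

----
nfs: backup-nfs
        server 10.0.0.10
        export /mnt/tank/pve
        path /mnt/pve/backup-nfs
        content backup,iso
        nodes node1,node2
        prune-backups keep-last=3,keep-weekly=4
----
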
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A
volume is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>`, use:

 pvesm path <VOLUME_ID>
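
For example, on the default `local` directory storage, the ISO volume
from above resolves to a path below `/var/lib/vz` (output shown for
illustration):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso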


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
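
For example, you can list all volumes owned by a given guest with
`pvesm list` before removal. The output below is illustrative; the
exact columns and values depend on your setup:

 # pvesm list local --vmid 230
 Volid                        Format  Type    Size         VMID
 local:230/example-image.raw  raw     images  34359738368  230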


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low-level operations on the command line.
Normally, allocation and removal of volumes is done by the VM and
Container management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume on the `local` storage. The name is
auto-generated if you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status
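
The output looks similar to the following (values are illustrative;
the size columns are raw numbers in kibibytes):

 # pvesm status
 Name             Type     Status           Total            Used       Available        %
 local             dir     active        98497780        12051064        81396924   12.23%
 local-lvm     lvmthin     active       147934208        15728640       132205568   10.63%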

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show the file system path for a volume

 pvesm path <VOLUME_ID>

Export the volume `local:103/vm-103-disk-0.qcow2` to the file
`target`. This is mostly used internally with `pvesm import`. The
stream format `qcow2+size` is different from the plain `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
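
A matching `pvesm import` command exists on the receiving side. As a
sketch of how the pair is typically used for migration (assuming `-`
selects stdout/stdin as the stream endpoint, and `target-node` is a
hypothetical host name):

----
pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size - --with-snapshots 1 \
  | ssh root@target-node pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size - --with-snapshots 1
----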

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]