[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any Block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |both^1^|no    |yes      |yes
|Directory      |dir         |file   |no    |no^2^    |yes
|BTRFS          |btrfs       |file   |no    |yes      |technology preview
|NFS            |nfs         |file   |yes   |no^2^    |yes
|CIFS           |cifs        |file   |yes   |no^2^    |yes
|Proxmox Backup |pbs         |both   |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file   |yes   |no^2^    |yes
|CephFS         |cephfs      |file   |yes   |yes      |yes
|LVM            |lvm         |block  |no^3^ |no       |yes
|LVM-thin       |lvmthin     |block  |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block  |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block  |yes   |no       |yes
|Ceph/RBD       |rbd         |block  |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block  |yes   |yes      |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2' format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

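The sparse-allocation behavior described above can be reproduced with any
file system that supports sparse files. The following Python sketch is
purely illustrative (it uses a plain sparse file rather than any {pve}
tooling, and scales the sizes down to MiB):

```python
import os
import tempfile

# Create a "disk image" with a logical size of 32 MiB, but only write
# 3 MiB of actual data into it. On file systems with sparse-file
# support, only the written blocks get allocated -- the same principle
# as thin provisioning.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.truncate(32 * 1024 * 1024)       # logical size: 32 MiB
    f.write(b"x" * (3 * 1024 * 1024))  # real data: 3 MiB

st = os.stat(path)
logical = st.st_size            # what the "guest" would see
allocated = st.st_blocks * 512  # what the storage actually uses
print(f"logical={logical}, allocated={allocated}")
os.remove(path)
```

On file systems without sparse-file support the allocated size may simply
equal the logical size; the point is only that the two can differ.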
CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions.



Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

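The format is simple enough to parse mechanically. The following Python
sketch shows how pool definitions like the ones above map to structured
data; `parse_storage_cfg` is a hypothetical helper invented for this
example (the real parser is the Perl code in `libpve-storage-perl`):

```python
def parse_storage_cfg(text):
    """Parse pool definitions: an unindented '<type>: <STORAGE_ID>'
    header starts a pool, and indented '<property> [<value>]' lines
    belong to it. Properties without a value ('sparse') become True."""
    pools = {}
    current = None
    for line in text.splitlines():
        if not line.strip() or line.lstrip().startswith("#"):
            continue  # skip blank lines and comments
        if not line[0].isspace():
            stype, storage_id = line.split(":", 1)
            current = {"type": stype.strip()}
            pools[storage_id.strip()] = current
        else:
            parts = line.split(None, 1)
            current[parts[0]] = parts[1].strip() if len(parts) > 1 else True
    return pools

sample = """\
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
"""
cfg = parse_storage_cfg(sample)
print(cfg["local-zfs"])
```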
CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects each disk image to be referenced by exactly one unique
volume ID. Choosing different content types for _aliased_ storage
configurations can be fine, but it is not recommended.

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

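In code, a `<VOLUME_ID>` can simply be split at the first colon; everything
after the colon is the storage-type dependent volume name and is best
treated as opaque. A small illustrative sketch (the helper name is invented
for this example, it is not part of any {pve} API):

```python
def parse_volume_id(volid):
    """Split '<STORAGE_ID>:<volume name>' at the first colon. The
    name part is storage-type dependent and treated as opaque here."""
    storage_id, sep, name = volid.partition(":")
    if not sep or not storage_id or not name:
        raise ValueError(f"not a valid volume ID: {volid!r}")
    return storage_id, name

print(parse_volume_id("local:230/example-image.raw"))
# ('local', '230/example-image.raw')
```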
To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

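For directory-style volume names, the ownership encoding is directly
visible: the path component before the slash is the owning VMID. The
helper below is hypothetical and only illustrates this naming convention;
each real storage backend implements its own ownership logic:

```python
import re

def owner_vmid(volume_name):
    """Return the owning VMID for a directory-style volume name such
    as '230/example-image.raw', or None for unowned content like ISO
    images or templates. Illustrative only, not a {pve} API."""
    match = re.match(r"(\d+)/", volume_name)
    return int(match.group(1)) if match else None

print(owner_vmid("230/example-image.raw"))             # 230
print(owner_vmid("iso/debian-501-amd64-netinst.iso"))  # None
```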

Using the Command-line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command-line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]