pvesm - Proxmox VE Storage Manager

include::pvesm.1-synopsis.adoc[]
The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be extended to include further storage types in the future.
There are basically two different classes of storage types:
File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.
Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.
.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |both^1^|no     |yes       |yes
|Directory      |dir         |file   |no     |no^2^     |yes
|BTRFS          |btrfs       |file   |no     |yes       |technology preview
|NFS            |nfs         |file   |yes    |no^2^     |yes
|CIFS           |cifs        |file   |yes    |no^2^     |yes
|Proxmox Backup |pbs         |both   |yes    |n/a       |yes
|GlusterFS      |glusterfs   |file   |yes    |no^2^     |yes
|CephFS         |cephfs      |file   |yes    |yes       |yes
|LVM            |lvm         |block  |no^3^  |no        |yes
|LVM-thin       |lvmthin     |block  |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block  |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block  |yes    |no        |yes
|Ceph/RBD       |rbd         |block  |yes    |yes       |yes
|ZFS over iSCSI |zfs         |block  |yes    |yes       |yes
|===========================================================
^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2' format.
^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
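
As a rough sketch of how footnote 3 can look in practice, an iSCSI storage is
defined as the base device and an LVM storage is layered on top of it via its
`base` property. The storage IDs, portal address, target name and LUN below are
made up for illustration only:

----
iscsi: san
        portal 10.0.0.1
        target iqn.2003-01.org.example:storage.lun1
        content none

lvm: san-lvm
        vgname vg_san
        base san:0.0.0.scsi-example-lun
        shared 1
        content images,rootdir
----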
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest operating system, the root file system of the VM
contains 3 GB of data. In that case only 3 GB are written to the storage,
even if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
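
To see this in action on a file based storage, you can compare a qcow2 image's
virtual size with the space it actually occupies using `qemu-img info`; the
file name and the numbers below are purely illustrative:

----
# qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
image: /var/lib/vz/images/100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 32 GiB (34359738368 bytes)
disk size: 3.2 GiB
----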
All storage types which have the ``Snapshots'' feature also support thin
provisioning.
CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.
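
One simple way to keep an eye on free space is `pvesm status`, which lists all
configured storages together with their usage. The output below is shortened
and the numbers are only illustrative:

----
# pvesm status
Name       Type     Status    Total        Used        Available   %
local      dir      active    98559220     12940804    85618416    13.13%
local-lvm  lvmthin  active    147934208    4438026     143496182    3.00%
----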
Storage Configuration
---------------------
All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case, such local storage
is available on all nodes, but it is physically different and can have
totally different content.
Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:
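
----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----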
The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.
.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvm-thin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----
CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects each disk image to be referenced by exactly one unique
volume ID. Choosing different content types for _aliased_ storage
configurations can be fine, but is not recommended.
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.
nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes (see the example configuration after
this list).
content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allow to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.
shared::

Indicate that this is a single storage with the same contents on all nodes (or
all listed in the 'nodes' option). It will not make the contents of a local
storage automatically accessible to other nodes, it just marks an already shared
storage as such!
disable::

You can use this flag to disable the storage completely.
maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.
prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].
format::

Default image format (`raw|qcow2|vmdk`).
preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.
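
As an illustration of several of these properties together, the following
sketch of an NFS storage entry (all names and addresses are made up) restricts
the storage to two nodes, limits it to ISO images and backups, and configures a
simple backup retention:

----
nfs: backup-and-iso
        server 10.0.0.10
        export /mnt/tank/pve
        path /mnt/pve/backup-and-iso
        content iso,backup
        nodes node1,node2
        prune-backups keep-last=3,keep-weekly=2
----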
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.
Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by colon. A valid `<VOLUME_ID>` looks
like:
 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
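
For example, on the default directory storage `local`, the ISO volume from the
list above resolves to a path below `/var/lib/vz` (output shown for
illustration):

----
# pvesm path local:iso/debian-501-amd64-netinst.iso
/var/lib/vz/template/iso/debian-501-amd64-netinst.iso
----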
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
Using the Command-line Interface
--------------------------------
It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.
Nevertheless, there is a command-line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.
Examples
~~~~~~~~

Add storage pools

----
 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
----
Disable storage pools

----
 pvesm set <STORAGE_ID> --disable 1
----

Enable storage pools

----
 pvesm set <STORAGE_ID> --disable 0
----
Change/set storage options

----
 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso
----
Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

----
 pvesm remove <STORAGE_ID>
----
Allocate volumes

----
 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
----
Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

----
 pvesm alloc local <VMID> '' 4G
----
Free volumes

----
 pvesm free <VOLUME_ID>
----

WARNING: This really destroys all volume data.
List storage contents

----
 pvesm list <STORAGE_ID> [--vmid <VMID>]
----
List volumes allocated by VMID

----
 pvesm list <STORAGE_ID> --vmid <VMID>
----
List iso images

----
 pvesm list <STORAGE_ID> --content iso
----
List container templates

----
 pvesm list <STORAGE_ID> --content vztmpl
----
Show file system path for a volume

----
 pvesm path <VOLUME_ID>
----
Export the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the plain `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

----
 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
----
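
On the receiving side, the stream written to `target` could then be imported
into a new volume with `pvesm import`. The command below is only a sketch with
a hypothetical target volume ID; check `pvesm help import` for the exact
options supported by your version:

----
 pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
----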
See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]

include::pve-copyright.adoc[]