pvesm - {pve} Storage Manager

include::pvesm.1-synopsis.adoc[]

The {pve} storage model is very flexible. Virtual machine images can
be stored either on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

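As a minimal illustration, assuming a VM with ID 100 and a target node
named `node2` (both placeholders), such a live migration can be
triggered from the command line:

 qm migrate 100 node2 --online
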
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.

Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.

.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|Proxmox Backup |pbs         |both  |yes   |n/a      |beta
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.

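One possible way to set this up, sketched with placeholder names
(`san1`, `vg_san`, the portal address and target IQN are all examples),
is to add the iSCSI storage first, create an LVM volume group on the
exposed LUN, and then register that volume group as a shared LVM
storage:

 pvesm add iscsi san1 --portal 10.0.0.1 --target iqn.2003-01.org.example:storage
 # create a volume group on the LUN device provided by the iSCSI storage
 vgcreate vg_san /dev/disk/by-id/scsi-<LUN>
 pvesm add lvm san1-lvm --vgname vg_san --shared 1 --content images,rootdir
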
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

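The effect is easy to observe with a standalone `qcow2` image (a small
sketch; the file name is arbitrary):

 qemu-img create -f qcow2 example-disk.qcow2 32G
 qemu-img info example-disk.qcow2   # reports a virtual size of 32 GiB
 du -h example-disk.qcow2           # the file itself occupies only a few hundred KiB
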
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor free
space to avoid such conditions.

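Free space can be checked at any time with `pvesm status`, which
reports total, used and available space per storage pool (the output
below is abridged and purely illustrative):

 pvesm status
 Name        Type     Status     Total        Used         Available    %
 local       dir      active     98559220     12345678     86213542     12.53%
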
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

 <type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
 dir: local
        path /var/lib/vz
        content iso,vztmpl,backup
 # default image store on LVM based installation
 lvm-thin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
 # default image store on ZFS based installation
 zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes (see the example after this list).

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

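As an example of these properties in use (the storage name
`backup-nfs` and the node names are hypothetical), the following
restricts a storage to two nodes and limits it to backup content:

 pvesm set backup-nfs --nodes node1,node2
 pvesm set backup-nfs --content backup
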
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

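For example, on a default installation the `local` directory storage
maps ISO volumes to files below `/var/lib/vz` (output shown for
illustration):

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
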
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

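For instance, destroying the example VM from above through the
management tools also deletes its owned disk volumes (VM 230 is
hypothetical here):

 qm destroy 230   # removes the VM and owned volumes such as local:230/example-image.raw
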
Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

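A concrete invocation might look like this (server address, export
path and storage name are placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates \
     --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl
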
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

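For example, allocating a volume for VM 100 on the `local` storage
prints the identifier of the new volume (output abridged; the exact
disk name depends on existing disks and the {pve} version):

 pvesm alloc local 100 '' 4G
 successfully created 'local:100/vm-100-disk-0.raw'
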
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format qcow2+size is different from the plain qcow2 format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

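On the destination side, the exported file can be restored with the
matching `pvesm import` call (a sketch; it assumes the file `target`
has already been transferred to that node):

 pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
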
See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-copyright.adoc[]