pvesm - Proxmox VE Storage Manager

include::pvesm.1-synopsis.adoc[]
The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
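
For example, a running guest can be moved to another node with `qm
migrate` (the VM ID `100` and target node `node2` below are
placeholders):

----
# live-migrate a running VM to another cluster node; with shared
# storage no disk data needs to be copied
qm migrate 100 node2 --online
----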
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.
Storage Types
-------------

There are basically two different classes of storage types:
Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.
File level storage::

They allow access to a fully featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.
.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================
^1^: On file based storages, snapshots are possible with the 'qcow2' format.
^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
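
A minimal sketch of such a setup in `/etc/pve/storage.cfg` could look
like this (the portal address, target name, volume group, and `base`
volume below are all placeholders, not a tested configuration):

----
iscsi: san
    portal 10.0.0.1
    target iqn.2003-01.org.example:storage
    content none

# LVM layered on top of the iSCSI LUN; marking it 'shared' makes the
# volume group usable from all nodes
lvm: san-lvm
    vgname vg-san
    base san:0.0.0.scsi-example
    shared
    content images,rootdir
----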
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
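
As a rough illustration outside of {pve}, a freshly created `qcow2`
image only occupies space for its metadata (the file name and size are
arbitrary):

----
# create a 32G qcow2 image; only qcow2 metadata is written initially
qemu-img create -f qcow2 example.qcow2 32G

# the apparent size is 32G, but actual disk usage is only a few
# hundred KiB until the guest writes data
qemu-img info example.qcow2
du -h example.qcow2
----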
All storage types which have the ``Snapshots'' feature also support thin
provisioning.
CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions.
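
One way to keep an eye on free space is the `pvesm status` command,
which prints type, status, and used/available space for each
configured storage:

----
# list all storage pools together with their space usage
pvesm status
----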
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:
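
----
<type>: <STORAGE_ID>
    <property> <value>
    ...
----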
The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.
.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir
----
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.
nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.
content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for. The available
content types are:
images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.
shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
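
For example, a hypothetical NFS pool that only stores backups, keeps at
most three backup files per guest, and is restricted to two nodes could
be configured like this (all names and addresses are placeholders):

----
nfs: backup-nfs
    path /mnt/pve/backup-nfs
    server 10.0.0.10
    export /export/backup
    content backup
    maxfiles 3
    nodes node1,node2
----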
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.
Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:
 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
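
For instance, on the default `local` directory storage the ISO volume
from the examples above resolves to a path under `/var/lib/vz`:

----
# prints /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
pvesm path local:iso/debian-501-amd64-netinst.iso
----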
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
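
For example, destroying VM 230 (the VM ID from the example above) would
also remove its owned volume `local:230/example-image.raw`:

----
# removing the guest also removes all volumes owned by it
qm destroy 230
----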
Using the Command Line Interface
--------------------------------
It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.
Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Examples
~~~~~~~~

Add storage pools
 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0
Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso
Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>
Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.
List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>
List iso images

 pvesm list <STORAGE_ID> --iso
List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>
See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-zfs.adoc[]


include::pve-copyright.adoc[]