[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager

SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images can
either be stored on one or several local storages, or on shared storage
like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.

Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows storing large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.

.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

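A rough sketch of such a setup, assuming an iSCSI portal that is reachable
from all nodes (the portal address, target name, device path and storage
IDs below are illustrative placeholders):

----
# make the iSCSI LUNs visible on every node, without using them directly
pvesm add iscsi san1 --portal 10.0.0.20:3260 --target iqn.2016-01.com.example:storage --content none

# on one node, create an LVM volume group on the LUN device
vgcreate vgsan1 /dev/disk/by-id/scsi-example

# add an LVM storage on top of that volume group, marked as shared
pvesm add lvm san1-lvm --vgname vgsan1 --content images --shared 1
----
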
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

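A quick way to observe this, assuming a host with `qemu-img` installed (the
image name is arbitrary):

----
# create a 32GB qcow2 image; only a little metadata is written initially
qemu-img create -f qcow2 test.qcow2 32G

# the virtual size reports 32G, while the actual disk usage stays tiny
qemu-img info test.qcow2
du -h test.qcow2
----
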
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.

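One simple way to keep an eye on free space is the `pvesm status` command
(also shown in the examples below), which reports each configured storage
together with its current usage:

 pvesm status
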
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In that case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
    <property> <value>
    <property> <value>
    ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
    path /var/lib/vz
    content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
    thinpool data
    vgname pve
    content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
    pool rpool/data
    sparse
    content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. A
combined configuration example follows the list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

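A hypothetical entry combining several of these properties, restricting an
NFS storage to two nodes and keeping at most three backup files per VM (all
names and addresses are illustrative):

----
nfs: iso-backup
    path /mnt/pve/iso-backup
    server 10.0.0.10
    export /space/iso-backup
    content iso,backup
    maxfiles 3
    nodes node1,node2
----
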
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like this:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

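On the default directory storage, for example, the ISO volume from above
resolves to a path below `/var/lib/vz` (output shown for illustration):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
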
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

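As a sketch, reusing VM 230 from the example above, you can first inspect
the volumes owned by that VM and then remove the VM together with them:

 pvesm list local --vmid 230

 qm destroy 230
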
Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

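For instance, a hypothetical NFS storage for ISO images and container
templates could be added like this (server address and export path are
placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl
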
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

endif::wiki[]

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]