[[chapter-storage]]
ifdef::manvolnum[]
pvesm(1)
========
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager

SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.

Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.

.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
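
A minimal sketch of such a layered setup in `/etc/pve/storage.cfg`
could look like the following. The storage IDs, portal address, target
name, volume group and base volume ID are made-up placeholders, and it
is assumed that the volume group was already created on the LUN:

----
# iSCSI storage providing the raw LUN (placeholder portal/target)
iscsi: san
        portal 10.0.0.20
        target iqn.2016-01.example.com:storage
        content none

# LVM storage using a volume group created on that LUN,
# marked shared so all cluster nodes can use it
lvm: san-lvm
        vgname vg-san
        base san:0.0.0.scsi-example-lun-id
        shared 1
        content images
----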

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
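
You can observe this effect yourself on a file based storage with
`qemu-img` (shipped with {pve}); the file name below is just an
example:

----
# create a 32G qcow2 image -- qcow2 is thin provisioned by design
qemu-img create -f qcow2 example-disk.qcow2 32G

# 'virtual size' reports 32G, while 'disk size' shows only the
# small amount actually allocated so far
qemu-img info example-disk.qcow2
----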

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or to carefully observe
free space to avoid such conditions.
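
For example, you can check the total, used and available space of all
enabled storage pools at any time with:

 pvesm status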

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allow to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
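
As an illustration, a directory storage used only for backups and
restricted to two nodes could combine several of these properties; the
storage ID, path and node names below are made up for the example:

----
# example entry in /etc/pve/storage.cfg
dir: backup-store
        path /mnt/backup
        content backup
        maxfiles 7
        nodes node1,node2
----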

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
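
For example, on the default `local` storage (a directory storage below
`/var/lib/vz`), ISO volumes live in the `template/iso` subdirectory, so
the following should print a path like
`/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`:

 pvesm path local:iso/debian-501-amd64-netinst.iso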

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>
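
Putting these commands together, a short session for a hypothetical
VM 100 could look like this; the auto-generated volume name may differ
on your setup:

----
# allocate a 4G scratch volume for VM 100 on storage 'local'
pvesm alloc local 100 '' 4G

# verify the new volume shows up
pvesm list local --vmid 100

# remove it again -- this destroys the volume data!
pvesm free local:100/vm-100-disk-1.raw
----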

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

endif::wiki[]

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]