pvesm - Proxmox VE Storage Manager
==================================

include::pvesm.1-synopsis.adoc[]
The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.
One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.
Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a fully featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.
.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================
TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
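For example, one could first add the iSCSI storage and then define an LVM
storage on top of it. This is only a sketch: the storage IDs, portal, target
and base volume below are made up, and it assumes a volume group `vgsan` was
already created on the iSCSI LUN.

 pvesm add iscsi san --portal 192.168.1.50 --target iqn.2003-01.org.example:storage
 pvesm add lvm san-lvm --vgname vgsan --base san:0.0.1.scsi-example --shared 1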
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
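You can observe this effect with a standalone `qcow2` image using
`qemu-img` (the file name here is arbitrary):

 # create a 32G image; almost nothing is written to disk yet
 qemu-img create -f qcow2 example.qcow2 32G
 # 'virtual size' reports 32G, while 'disk size' reports
 # the small amount actually allocated
 qemu-img info example.qcow2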
All storage types which have the ``Snapshots'' feature also support thin
provisioning.
CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.
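You can keep an eye on free space with `pvesm status`, which reports the
total, used and available space of every enabled storage on the node:

 pvesm status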
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.
.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.
nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
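Putting some of these properties together, a backup storage restricted to two
nodes might be configured like this (a sketch; the storage ID, path and node
names are made up):

----
dir: backup-store
        path /mnt/backup-store
        content backup
        maxfiles 3
        nodes node1,node2
----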
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.
Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
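For example, on a default installation the ISO volume shown above resolves to
a file below `/var/lib/vz` (output shown for illustration):

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso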
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
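For example, destroying VM 230 also deletes the volume
`local:230/example-image.raw` owned by it:

 qm destroy 230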
Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Add storage pools
 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
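A concrete invocation could look like this (server address, export path and
storage ID are placeholders):

 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 --export /exports/backups --content backup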
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0
Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso
Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>
Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.
List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>
See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-copyright.adoc[]