include::attributes.txt[]

pvesm - Proxmox VE Storage Manager

include::pvesm.1-synopsis.adoc[]

include::attributes.txt[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.

There are basically two different classes of storage types:

Block level storage::

Allows storing large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full-featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.

.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
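
For illustration, a sketch of what such a setup could look like in
`/etc/pve/storage.cfg` (the storage names, portal address, target and
base volume below are made-up placeholders):

----
iscsi: san
        portal 10.0.0.20
        target iqn.2003-01.org.example:san1
        content none

lvm: san-lvm
        vgname vg_san
        base san:0.0.0.scsi-example-lun
        shared
        content images,rootdir
----

Here the iSCSI storage only provides the LUN, and the LVM volume group
created on top of it is marked as shared so that all cluster nodes can
use it.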

Thin Provisioning
-----------------

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
though the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
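
The effect is easy to see with a plain `qemu-img` sketch outside of
{pve} (the file name and size here are arbitrary): the image reports
its full virtual size while consuming almost no space on disk until
the guest actually writes data.

----
# create a 32G qcow2 image; only metadata is allocated up front
qemu-img create -f qcow2 example.qcow2 32G

# 'virtual size' reports 32G, while the actual disk usage is tiny
qemu-img info example.qcow2
du -h example.qcow2
----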

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.
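
A simple way to keep an eye on this is the `pvesm status` command,
which lists each configured storage together with its total, used and
available space:

 pvesm status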

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:
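
----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----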

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

rootdir:::

Allows storing container data.

backup:::

Backup files (`vzdump`).

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
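
As a brief illustration, a hypothetical NFS pool restricted to two
nodes and used only for ISO images and backups could combine several of
these properties (the storage name, node names, server and export path
are made up):

----
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 10.0.0.10
        export /srv/nfs/backup
        content iso,backup
        nodes node1,node2
        maxfiles 3
----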

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
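
On the default `local` directory storage, for example, this resolves to
a file below `/var/lib/vz` (a sketch; the exact path depends on the
storage backend):

 pvesm path local:iso/debian-501-amd64-netinst.iso

which would print something like
`/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.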

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, the volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]
* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]
* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]
* link:/wiki/Storage:_iSCSI[Storage: iSCSI]
* link:/wiki/Storage:_LVM[Storage: LVM]
* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]
* link:/wiki/Storage:_NFS[Storage: NFS]
* link:/wiki/Storage:_RBD[Storage: RBD]
* link:/wiki/Storage:_ZFS[Storage: ZFS]
* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-copyright.adoc[]