[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images can
either be stored on one or several local storages, or on shared storage
like NFS or iSCSI (NAS, SAN). There are no limits, and you may configure
as many storage pools as you like. You can use all storage technologies
available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in the
cluster have direct access to VM disk images. There is no need to copy
VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured
(POSIX) file system. They are in general more flexible than any block
level storage (see below), and allow you to store content of any type.
ZFS is probably the most advanced system, and it has full support for
snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage data
to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |both^1^|no    |yes      |yes
|Directory      |dir         |file   |no    |no^2^    |yes
|BTRFS          |btrfs       |file   |no    |yes      |technology preview
|NFS            |nfs         |file   |yes   |no^2^    |yes
|CIFS           |cifs        |file   |yes   |no^2^    |yes
|Proxmox Backup |pbs         |both   |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file   |yes   |no^2^    |yes
|CephFS         |cephfs      |file   |yes   |yes      |yes
|LVM            |lvm         |block  |no^3^ |no       |yes
|LVM-thin       |lvmthin     |block  |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block  |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block  |yes   |no       |yes
|Ceph/RBD       |rbd         |block  |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block  |yes   |yes      |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which
provide block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2'
format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
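
The effect can be sketched at the file level with a plain sparse file,
independent of any storage plugin (`disk.img` is a made-up name, and
GNU coreutils are assumed; this does not use `pvesm` itself):

```shell
# Create a file with an apparent size of 32G. No data blocks are
# allocated yet, so the actual disk usage stays near zero.
truncate -s 32G disk.img

ls -lh disk.img   # reports the full 32G apparent size
du -h disk.img    # reports roughly 0, since nothing was written yet

rm disk.img
```

Blocks are only allocated as data is written, which is also why a
storage running full affects every thinly provisioned guest on it.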

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor free
space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties require a value.
Some have reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

CAUTION: It is problematic to have multiple storage configurations
pointing to the exact same underlying storage. Such an _aliased_ storage
configuration can lead to two different volume IDs ('volid') pointing to
the exact same disk image. {pve} expects that the volume IDs pointing to
a disk image are unique. Choosing different content types for _aliased_
storage configurations can be fine, but is not recommended.


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is usable/accessible. One
can use this property to restrict storage access to a limited set of
nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup
files per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2`
images on file-based storages. The default is `metadata`, which is
treated like `off` for `raw` images. When using network storages in
combination with large `qcow2` images, using `off` can help to avoid
timeouts.

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.
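
Putting several of these properties together, a pool entry in
`/etc/pve/storage.cfg` might look like the following sketch (the storage
name, server address, paths, and node names are made up for
illustration):

```
# hypothetical NFS pool, only usable on node1 and node2,
# restricted to ISO images and container templates
nfs: iso-templates
        path /mnt/pve/iso-templates
        server 10.0.0.10
        export /space/iso-templates
        content iso,vztmpl
        nodes node1,node2
```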


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>


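Since a `<VOLUME_ID>` is simply `<STORAGE_ID>` and a volume name joined
by the first colon, plain shell parameter expansion is enough to take
one apart; for directory-style storages the owning VMID is also visible
in the volume name (a sketch, using the example volume from above):

```shell
volid="local:230/example-image.raw"

storage="${volid%%:*}"   # storage ID: everything before the first colon
volname="${volid#*:}"    # volume name: everything after it
vmid="${volname%%/*}"    # owner VMID, as encoded by dir-style storages

echo "$storage $volname $vmid"
# -> local 230/example-image.raw 230
```
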
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command line.
Normally, allocation and removal of volumes is done by the VM and
Container management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Export the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`. The stream format
`qcow2+size` is different from the `qcow2` format. Consequently, the
exported file cannot simply be attached to a VM. This also holds for the
other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
