[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images can
be stored on one or several local storages, or on shared storage like
NFS or iSCSI (NAS, SAN). There are no limits, and you may configure as
many storage pools as you like. You can use all storage technologies
available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File-level storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description |Plugin type |Level |Shared|Snapshots|Stable
|ZFS (local) |zfspool |both^1^|no |yes |yes
|Directory |dir |file |no |no^2^ |yes
|BTRFS |btrfs |file |no |yes |technology preview
|NFS |nfs |file |yes |no^2^ |yes
|CIFS |cifs |file |yes |no^2^ |yes
|Proxmox Backup |pbs |both |yes |n/a |yes
|GlusterFS |glusterfs |file |yes |no^2^ |yes
|CephFS |cephfs |file |yes |yes |yes
|LVM |lvm |block |no^3^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
|iSCSI/kernel |iscsi |block |yes |no |yes
|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
|Ceph/RBD |rbd |block |yes |yes |yes
|ZFS over iSCSI |zfs |block |yes |yes |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2' format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
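
For example, footnote 3 can be realized with two `pvesm add` calls (a
minimal sketch; the portal address, target name, and the volume group
`vg-san`, which must already exist on the iSCSI LUN, are placeholders):

 pvesm add iscsi san --portal 10.0.0.1 --target iqn.2003-01.org.example:storage
 pvesm add lvm san-lvm --vgname vg-san --shared 1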


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case, only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
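
You can observe this on a directory storage by allocating a large `qcow2`
image and comparing its logical size with the space it actually occupies
(a minimal sketch, assuming the default `local` storage, an otherwise
unused VMID 999, and that the auto-generated name is `vm-999-disk-0.qcow2`):

 pvesm alloc local 999 '' 32G --format qcow2
 qemu-img info /var/lib/vz/images/999/vm-999-disk-0.qcow2 # virtual size: 32 GiB
 du -h /var/lib/vz/images/999/vm-999-disk-0.qcow2         # actual usage: a few hundred KiB
 pvesm free local:999/vm-999-disk-0.qcow2                 # clean up again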

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor free
space to avoid such conditions.
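
You can keep an eye on the free space of all configured storages with
`pvesm status`; the numbers below are purely illustrative:

 # pvesm status
 Name             Type     Status           Total            Used       Available        %
 local             dir     active        98559220        12962964        80539784   13.15%
 local-lvm     lvmthin     active       147877888        28377483       119500404   19.19%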


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
 <property> <value>
 <property> <value>
 <property>
 ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
 path /var/lib/vz
 content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
 thinpool data
 vgname pve
 content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
 pool rpool/data
 sparse
 content images,rootdir
----

CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects that the volume IDs pointing to an image are unique.
Choosing different content types for _aliased_ storage configurations can be
fine, but is not recommended.

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; a
combined configuration example follows the list below.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Indicates that this is a single storage with the same contents on all nodes (or
all listed in the 'nodes' option). It will not make the contents of a local
storage automatically accessible to other nodes; it just marks an already shared
storage as such!

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.
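
As a combined illustration of these properties, the following hypothetical
NFS pool entry in `/etc/pve/storage.cfg` restricts the storage to two nodes
and keeps a limited number of backups (server address, export path, and node
names are placeholders):

----
nfs: backup-nfs
 server 192.168.1.10
 export /mnt/tank/backup
 path /mnt/pve/backup-nfs
 content backup
 nodes node1,node2
 prune-backups keep-last=3,keep-weekly=4
----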

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
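
For example, on the default `local` directory storage, ISO images resolve
to a path below `/var/lib/vz` (illustrative output):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso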


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command-line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command-line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>
Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
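
A matching `pvesm import` can consume such a stream on another node. A
minimal sketch, assuming `-` can be used for stdout/stdin here and with
`target-node` as a placeholder host name:

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size - --with-snapshots 1 | \
   ssh root@target-node pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size - --with-snapshots 1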

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_ISCSI[Storage: ZFS over ISCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]