[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|Proxmox Backup |pbs         |both  |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
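
A minimal `storage.cfg` sketch of such a setup could look like the
following. This is only an illustration: the storage IDs, portal, target,
LUN and volume group names are placeholders, and the exact options of the
LVM backend are documented in its own section below.

----
# hypothetical iSCSI storage, LUNs not used directly
iscsi: san-iscsi
        portal 10.0.0.1
        target iqn.2003-01.org.example:storage.lun1
        content none

# hypothetical LVM volume group created on top of the iSCSI LUN
lvm: san-lvm
        vgname vg-san
        base san-iscsi:0.0.0.scsi-example-lun
        shared 1
        content images,rootdir
----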


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
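
You can observe this effect with a `qcow2` image. Normally the VM management
tools allocate images for you; the following uses `qemu-img` directly, purely
as an illustration, and the file name is made up:

 qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G
 qemu-img info vm-100-disk-1.qcow2   # reports a virtual size of 32 GiB
 du -h vm-100-disk-1.qcow2           # the file itself only occupies a few hundred KiB

Thin-provisioning-capable block storages such as LVM-thin or ZFS behave
analogously: space is only consumed once the guest writes data.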

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; a
combined configuration example follows the list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

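As an example, an NFS pool that is restricted to two nodes and only used for
ISO images and backups could look roughly like the following `storage.cfg`
entry. The storage ID, node names, server address and export path are
placeholders; refer to the NFS backend section below for all options.

----
nfs: iso-backup-nfs
        path /mnt/pve/iso-backup-nfs
        server 10.0.0.10
        export /exports/pve
        content iso,backup
        nodes node1,node2
        prune-backups keep-last=3
----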

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

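For a directory based storage such as the default `local` pool, the returned
path simply points below the storage directory. For example, assuming the ISO
volume from above is stored on `local`:

 pvesm path local:iso/debian-501-amd64-netinst.iso

would print `/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.
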

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, the volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
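
How the owner is encoded depends on the backend, but typically the VMID is
part of the volume name. A few illustrative volume IDs, all owned by VM 230
(the storage IDs and disk numbers are examples only):

 local:230/vm-230-disk-0.qcow2
 local-lvm:vm-230-disk-0
 local-zfs:vm-230-disk-0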


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

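For example, adding an NFS pool for backups might look like this, following
the template above (the storage ID, server address and export path are
placeholders):

 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 --export /exports/backup --content backup
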
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format qcow2+size is different from the qcow2 format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
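
The matching import on the target side is sketched below. It assumes that
`pvesm import` accepts the volume, stream format and source file in the same
order as `pvesm export`; see `pvesm help import` for the authoritative
synopsis.

 pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1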

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
