[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows storing large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level  |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |file   |no     |yes       |yes
|Directory      |dir         |file   |no     |no^1^     |yes
|BTRFS          |btrfs       |file   |no     |yes       |technology preview
|NFS            |nfs         |file   |yes    |no^1^     |yes
|CIFS           |cifs        |file   |yes    |no^1^     |yes
|Proxmox Backup |pbs         |both   |yes    |n/a       |yes
|GlusterFS      |glusterfs   |file   |yes    |no^1^     |yes
|CephFS         |cephfs      |file   |yes    |yes       |yes
|LVM            |lvm         |block  |no^2^  |no        |yes
|LVM-thin       |lvmthin     |block  |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block  |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block  |yes    |no        |yes
|Ceph/RBD       |rbd         |block  |yes    |yes       |yes
|ZFS over iSCSI |zfs         |block  |yes    |yes       |yes
|===========================================================

^1^: On file-based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.

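A minimal sketch of such a setup in `/etc/pve/storage.cfg` could look like
the following; the portal address, target IQN and volume group name are
placeholders, and the volume group is assumed to already exist on the LUN:

----
# example only - adjust to your environment
# iSCSI LUN used as raw block device only, no volumes allocated directly
iscsi: san
        portal 10.0.0.20
        target iqn.2003-01.org.example:storage.lun1
        content none

# LVM volume group on top of the LUN, marked shared for all nodes
lvm: san-lvm
        vgname vg_san
        content images,rootdir
        shared 1
----
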

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

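As an illustration, on a file-based storage you can compare the virtual size
of such an image with the space it actually occupies; the path, VMID and
numbers below are only illustrative, and the output is shortened:

----
# qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
image: /var/lib/vz/images/100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 32 GiB (34359738368 bytes)
disk size: 3.2 GiB
----
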
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to avoid such conditions.

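Free space can be checked with the `pvesm status` command described below;
the output layout and numbers here are only illustrative:

----
# pvesm status
Name             Type     Status           Total            Used       Available        %
local             dir     active        98559220         5838756        87671036    5.92%
local-lvm     lvmthin     active       147234816         1785699       145449116    1.21%
----
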

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; a combined
configuration example follows this list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.

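The following sketch combines several of these properties in a single entry;
the server address, export path and node names are placeholders:

----
# example only - adjust to your environment
nfs: backup-nfs
        server 10.0.0.10
        export /mnt/tank/pve-backups
        path /mnt/pve/backup-nfs
        content backup,iso
        nodes node1,node2
        prune-backups keep-last=3,keep-weekly=4
        preallocation off
----
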
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

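On the default `local` directory storage, this could for example resolve to
the following path (the exact result depends on your storage configuration):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
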

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

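On the receiving side, such a stream would typically be read back in again
with `pvesm import`; the volume ID and file name below are only illustrative:

 pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
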
ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]