[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File-level storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block-level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block-level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |both^1^|no     |yes       |yes
|Directory      |dir         |file   |no     |no^2^     |yes
|BTRFS          |btrfs       |file   |no     |yes       |technology preview
|NFS            |nfs         |file   |yes    |no^2^     |yes
|CIFS           |cifs        |file   |yes    |no^2^     |yes
|Proxmox Backup |pbs         |both   |yes    |n/a       |yes
|GlusterFS      |glusterfs   |file   |yes    |no^2^     |yes
|CephFS         |cephfs      |file   |yes    |yes       |yes
|LVM            |lvm         |block  |no^3^  |no        |yes
|LVM-thin       |lvmthin     |block  |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block  |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block  |yes    |no        |yes
|Ceph/RBD       |rbd         |block  |yes    |yes       |yes
|ZFS over iSCSI |zfs         |block  |yes    |yes       |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file-based storages, snapshots are possible with the 'qcow2' format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case, only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

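For example, `qemu-img info` reports both the virtual size of a `qcow2` image
and the space it actually occupies on the storage. The path and figures below
are purely illustrative, and the output is shortened:

----
# qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
image: /var/lib/vz/images/100/vm-100-disk-0.qcow2
file format: qcow2
virtual size: 32 GiB (34359738368 bytes)
disk size: 3.01 GiB
----
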
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable either to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to prevent such situations.

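The `pvesm status` command (see the command-line examples below) is a simple
way to keep an eye on this: it reports the status as well as the total, used
and available space of your storage pools.

 pvesm status
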

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects the volume IDs pointing to an image to be unique. Choosing
different content types for _aliased_ storage configurations can be fine, but
is not recommended.

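As a purely hypothetical example, the following two pools are aliased, because
both point to the same directory. A disk image stored there would be reachable
under two different volume IDs, for example `data1:100/vm-100-disk-0.qcow2` and
`data2:100/vm-100-disk-0.qcow2` (names and path are made up):

----
dir: data1
        path /mnt/data
        content images

dir: data2
        path /mnt/data
        content images
----
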
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.

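As an illustration, the hypothetical NFS pool below combines several of these
properties: it is only usable on two nodes, stores ISO images and backups, and
keeps only the three most recent backups per guest. All names and addresses
are made up:

----
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 10.0.0.10
        export /export/pve
        content iso,backup
        nodes node1,node2
        prune-backups keep-last=3
----
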
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

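For example, for the `local` directory storage shown earlier,
`pvesm path local:230/example-image.raw` would print the location of the image
below `/var/lib/vz`:

 /var/lib/vz/images/230/example-image.raw
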

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command-line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command-line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

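For instance, the following adds an NFS pool named `nfs-iso` for ISO images
(the server address and export path are made up):

 pvesm add nfs nfs-iso --path /mnt/pve/nfs-iso --server 192.168.1.10 --export /export/iso --content iso
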
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Export the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]