[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. It
can easily be extended to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|BTRFS          |btrfs       |file  |no    |yes      |technology preview
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|Proxmox Backup |pbs         |both  |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
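
For example (a minimal sketch, with made-up VMID, volume name and
path), you could allocate a thin-provisioned 32G `qcow2` disk on the
default `local` storage and then check how much space it actually
occupies:

----
# allocate a 32G qcow2 volume for the hypothetical VM 100
pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2

# 'virtual size' reports 32G, while 'disk size' only reflects the
# blocks written so far
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
----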

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.
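
To keep an eye on free space, you can for example check the storage
status summary, which lists total, used and available space for each
enabled storage (see also the CLI examples below):

 pvesm status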


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. A
configuration example combining several of them follows the list below.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

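As a sketch of how several of these properties can be combined, the
following made-up NFS storage is restricted to two nodes and only
stores ISO images and container templates (the storage ID, server
address, export path and node names are examples only):

----
nfs: iso-templates
        path /mnt/pve/iso-templates
        server 10.0.0.10
        export /space/iso-templates
        content iso,vztmpl
        nodes node1,node2
----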

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
space on a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

270
271Volume Ownership
272~~~~~~~~~~~~~~~~
273
274There exists an ownership relation for `image` type volumes. Each such
275volume is owned by a VM or Container. For example volume
276`local:230/example-image.raw` is owned by VM 230. Most storage
277backends encodes this ownership information into the volume name.
278
279When you remove a VM or Container, the system also removes all
280associated volumes which are owned by that VM or Container.
281
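For example, destroying the (hypothetical) VM 230 from above also
removes its owned volume `local:230/example-image.raw`:

 qm destroy 230
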

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes are done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
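
For instance, the made-up NFS storage from the configuration example
above could be added like this (replace the storage ID, server and
export path with your own values):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates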

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Export the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]