[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:
NAME
----
pvesm - Proxmox VE Storage Manager
SYNOPSIS
--------
include::pvesm.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). One major benefit of shared
storage is that all nodes in the cluster have direct access to VM disk
images. There is no need to copy VM image data, so live migration is
very fast in that case.
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.

There are basically two different classes of storage types:

File level storage::

File level storage technologies allow access to a full featured (POSIX)
file system. They are in general more flexible than any Block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large `raw` images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description |PVE type |Level |Shared|Snapshots|Stable
|ZFS (local) |zfspool |file |no |yes |yes
|Directory |dir |file |no |no^1^ |yes
|NFS |nfs |file |yes |no^1^ |yes
|CIFS |cifs |file |yes |no^1^ |yes
|GlusterFS |glusterfs |file |yes |no^1^ |yes
|CephFS |cephfs |file |yes |yes |yes
|LVM |lvm |block |no^2^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
|iSCSI/kernel |iscsi |block |yes |no |yes
|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
|Ceph/RBD |rbd |block |yes |yes |yes
|Sheepdog |sheepdog |block |yes |yes |beta
|ZFS over iSCSI |zfs |block |yes |yes |yes
|=========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
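
The effect is easy to observe outside of {pve} with a plain `qcow2` image and
the standard `qemu-img` tool (the file name below is only an example):

----
# create a thin provisioned 32 GB qcow2 image; almost nothing is written yet
qemu-img create -f qcow2 test.qcow2 32G

# 'virtual size' reports 32 GiB, while 'disk size' stays in the KiB range
qemu-img info test.qcow2

# the file only grows once the guest actually writes blocks
du -h test.qcow2
----

The same image format is also what makes snapshots possible on file based
storages (footnote 1 in the table above): `qemu-img snapshot -c <name>
test.qcow2` creates an internal snapshot of a `qcow2` image.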

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions.

Storage Configuration
---------------------
All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
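
For example, a purely local directory storage can be defined once in the
cluster-wide configuration and still exist separately on every node. A
minimal sketch (storage name and path are placeholders):

----
dir: nightly-backup
        path /mnt/backup
        content backup
----

Every node then expects `/mnt/backup` to exist locally, while the backups
stored there differ from node to node.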

Storage Pools
~~~~~~~~~~~~~
Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:
----
<type>: <STORAGE_ID>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----
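
Besides editing `/etc/pve/storage.cfg` directly, storage pools can also be
added with the storage manager CLI. A sketch for an NFS pool (server address,
export path and pool name are placeholders):

----
pvesm add nfs iso-templates --server 10.0.0.10 --export /space/iso-templates \
      --content iso,vztmpl

# verify that the new pool shows up and is active
pvesm status
----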

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~
A few storage properties are common among different storage types. A combined
example entry using several of them is shown after the following list.
nodes::

List of cluster node names where this storage is usable/accessible. One can
use this property to restrict storage access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.
images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images

snippets:::

Snippet files, for example guest hook scripts

shared::

Mark storage as shared.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
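
Putting several of these common properties together, a single entry in
`/etc/pve/storage.cfg` could look like the following sketch (server, export,
node names and the backup limit are made up for illustration):

----
nfs: vmstore
        path /mnt/pve/vmstore
        server 10.0.0.10
        export /space/vmstore
        content images,backup
        format qcow2
        maxfiles 2
        nodes node1,node2
----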

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.

Volumes
-------
We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
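
For example, on a default installation an ISO volume on the `local` storage
maps to a file below `/var/lib/vz` (the ISO file name here is only a
placeholder):

----
# pvesm path local:iso/debian-netinst.iso
/var/lib/vz/template/iso/debian-netinst.iso
----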

Volume Ownership
~~~~~~~~~~~~~~~~
There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
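
Ownership also shows up in the auto-generated volume names. A sketch,
allocating a disk for a hypothetical VM 100 on the default LVM-thin storage:

----
# the returned volume ID encodes the owner, e.g. local-lvm:vm-100-disk-0
pvesm alloc local-lvm 100 '' 4G
----
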
Using the Command Line Interface
--------------------------------
It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.
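
For a first overview of the configured storages, the following standard
`pvesm` commands can be used:

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID>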

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

List allocated container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]
// backend documentation
include::pve-storage-dir.adoc[]
include::pve-storage-nfs.adoc[]
include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]
include::pve-storage-lvm.adoc[]
include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]
include::pve-storage-iscsidirect.adoc[]
include::pve-storage-rbd.adoc[]
include::pve-storage-cephfs.adoc[]

ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]
endif::wiki[]