-[[chapter-storage]]
+[[chapter_storage]]
ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+pvesm(1)
+========
+:pve-toplevel:
NAME
----
pvesm - Proxmox VE Storage Manager
-SYNOPSYS
+SYNOPSIS
--------
include::pvesm.1-synopsis.adoc[]
DESCRIPTION
-----------
endif::manvolnum[]
-
ifndef::manvolnum[]
{pve} Storage
=============
-include::attributes.txt[]
+:pve-toplevel:
endif::manvolnum[]
+ifdef::wiki[]
+:title: Storage
+endif::wiki[]
The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). One major benefit of shared
storage is that all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
-The storage library (package 'libpve-storage-perl') uses a flexible
+The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.

Block level storage::
Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
-RADOS, Sheepdog and DRBD are distributed systems, replicating storage
+RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.
File level storage::
They allow access to a fully featured (POSIX) file system. They are in
general more flexible than block level storage, and allow you to store
content of any type.

|===========================================================
|Description |PVE type |Level |Shared|Snapshots|Stable
|ZFS (local) |zfspool |file |no |yes |yes
-|Directory |dir |file |no |no |yes
-|NFS |nfs |file |yes |no |yes
-|GlusterFS |glusterfs |file |yes |no |yes
-|LVM |lvm |block |no |no |yes
+|Directory |dir |file |no |no^1^ |yes
+|NFS |nfs |file |yes |no^1^ |yes
+|CIFS |cifs |file |yes |no^1^ |yes
+|GlusterFS |glusterfs |file |yes |no^1^ |yes
+|LVM |lvm |block |no^2^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
|iSCSI/kernel |iscsi |block |yes |no |yes
|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
|Ceph/RBD |rbd |block |yes |yes |yes
+|Ceph/CephFS |cephfs |file |yes |yes |yes
|Sheepdog |sheepdog |block |yes |yes |beta
-|DRBD9 |drbd |block |yes |yes |beta
|ZFS over iSCSI |zfs |block |yes |yes |yes
|=========================================================
-TIP: It is possible to use LVM on top of an iSCSI storage. That way
-you get a 'shared' LVM storage.
+^1^: On file based storages, snapshots are possible with the 'qcow2' format.
+
+^2^: It is possible to use LVM on top of an iSCSI storage. That way
+you get a `shared` LVM storage.
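+
+For illustration only, such a setup could look like the following in
+`/etc/pve/storage.cfg`. This is a minimal sketch, assuming the volume
+group `vgsan` was created on the exported LUN; the storage IDs, portal
+address, target name and base volume are placeholders, not a definitive
+configuration:
+
+----
+iscsi: san
+        portal 10.0.0.10
+        target iqn.2001-04.com.example:storage
+        content none
+
+lvm: san-lvm
+        vgname vgsan
+        base san:0.0.0.scsi-example
+        shared 1
+        content images,rootdir
+----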
-Thin provisioning
+
+Thin Provisioning
~~~~~~~~~~~~~~~~~
-A number of storages, and the Qemu image format `qcow2`, support _thin
-provisioning_. With thin provisioning activated, only the blocks that
+A number of storages, and the Qemu image format `qcow2`, support 'thin
+provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
Say for instance you create a VM with a 32GB hard disk, and after
-installing the guest system OS, the root filesystem of the VM contains
+installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
-resizing the VMs filesystems.
+resizing the VMs' file systems.
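+
+The effect is easy to observe with a `qcow2` image (illustrative file
+name, not taken from a real setup):
+
+----
+qemu-img create -f qcow2 example-disk.qcow2 32G
+qemu-img info example-disk.qcow2
+----
+
+Directly after creation, `qemu-img info` reports a virtual size of 32G
+while the actual disk size on the underlying storage is only a few
+hundred kilobytes; it only grows as the guest writes data.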
-All storage types which have the 'Snapshots' feature also support thin
+All storage types which have the ``Snapshots'' feature also support thin
provisioning.
CAUTION: If a storage runs full, all guests using volumes on that
-storage receives IO error. This can cause file system inconsistencies
+storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or carefully observe the
free space to avoid such conditions.
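+
+A simple way to watch the free space is the `pvesm status` command,
+which lists each configured storage together with its total, used and
+available space:
+
+----
+pvesm status
+----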
+
Storage Configuration
---------------------
All {pve} related storage configuration is stored within a single text
-file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
+file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
-because the same 'shared' storage is accessible from all nodes. But is
+because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
+
Storage Pools
~~~~~~~~~~~~~
-Each storage pool has a `<type>`, and is uniquely identified by its `<STORAGE_ID>`. A pool configuration looks like this:
+Each storage pool has a `<type>`, and is uniquely identified by its
+`<STORAGE_ID>`. A pool configuration looks like this:
----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
-`local`, which refers to the directory '/var/lib/vz' and is always
+`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.
-.Default storage configuration ('/etc/pve/storage.cfg')
+.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
path /var/lib/vz
content images,rootdir
----
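+
+Further pools of any type follow the same pattern. As a purely
+illustrative sketch (the storage ID, server address and export path are
+placeholders), an NFS pool could be added like this:
+
+----
+nfs: iso-templates
+        path /mnt/pve/iso-templates
+        server 10.0.0.10
+        export /space/iso-templates
+        content iso,vztmpl
+        options vers=3,soft
+----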
+
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~
backup:::
-Backup files ('vzdump').
+Backup files (`vzdump`).
iso:::
ISO images
maxfiles::
-Maximal number of backup files per VM. Use `0` for unlimted.
+Maximum number of backup files per VM. Use `0` for unlimited.
format::
Default image format (e.g. `raw`, `qcow2`, `vmdk`).
iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
-To get the filesystem path for a `<VOLUME_ID>` use:
+To get the file system path for a `<VOLUME_ID>` use:
pvesm path <VOLUME_ID>
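+
+For a volume on the default `local` directory storage this resolves to a
+path below `/var/lib/vz`. With a made-up ISO volume, for example:
+
+----
+pvesm path local:iso/example.iso
+----
+
+would print something like `/var/lib/vz/template/iso/example.iso`.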
+
Volume Ownership
~~~~~~~~~~~~~~~~
-There exists an ownership relation for 'image' type volumes. Each such
+There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
Usually, the allocation and removal of volumes are done by the VM and
Container management tools.
-Nevertheless, there is a command line tool called 'pvesm' ({pve}
-storage manager), which is able to perform common storage management
+Nevertheless, there is a command line tool called `pvesm` (``{pve}
+Storage Manager''), which is able to perform common storage management
tasks.
pvesm alloc local <VMID> '' 4G
-Free volumes
+Free volumes
pvesm free <VOLUME_ID>
pvesm list <STORAGE_ID> --vztmpl
-Show filesystem path for a volume
+Show file system path for a volume
pvesm path <VOLUME_ID>
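+
+Putting these commands together, a hypothetical session that allocates a
+4G volume for VM 100 on the `local` storage, lists it and finally frees
+it again could look like this (the volume ID to free is the one printed
+by `pvesm alloc`):
+
+----
+pvesm alloc local 100 '' 4G
+pvesm list local
+pvesm free <VOLUME_ID>
+----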
* link:/wiki/Storage:_NFS[Storage: NFS]
+* link:/wiki/Storage:_CIFS[Storage: CIFS]
+
* link:/wiki/Storage:_RBD[Storage: RBD]
+* link:/wiki/Storage:_CephFS[Storage: CephFS]
+
* link:/wiki/Storage:_ZFS[Storage: ZFS]
+* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
endif::wiki[]
include::pve-storage-nfs.adoc[]
+include::pve-storage-cifs.adoc[]
+
include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]
include::pve-storage-rbd.adoc[]
+include::pve-storage-cephfs.adoc[]
+
ifdef::manvolnum[]