.Available storage types
-[width="100%",cols="<d,1*m,4*d",options="header"]
+[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
-|Description |PVE type |Level |Shared|Snapshots|Stable
-|ZFS (local) |zfspool |file |no |yes |yes
-|Directory |dir |file |no |no^1^ |yes
-|NFS |nfs |file |yes |no^1^ |yes
-|CIFS |cifs |file |yes |no^1^ |yes
-|GlusterFS |glusterfs |file |yes |no^1^ |yes
-|CephFS |cephfs |file |yes |yes |yes
-|LVM |lvm |block |no^2^ |no |yes
-|LVM-thin |lvmthin |block |no |yes |yes
-|iSCSI/kernel |iscsi |block |yes |no |yes
-|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
-|Ceph/RBD |rbd |block |yes |yes |yes
-|ZFS over iSCSI |zfs |block |yes |yes |yes
-|=========================================================
-
-^1^: On file based storages, snapshots are possible with the 'qcow2' format.
-
-^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
-That way you get a `shared` LVM storage.
+|Description |Plugin type |Level |Shared|Snapshots|Stable
+|ZFS (local) |zfspool |both^1^|no |yes |yes
+|Directory |dir |file |no |no^2^ |yes
+|BTRFS |btrfs |file |no |yes |technology preview
+|NFS |nfs |file |yes |no^2^ |yes
+|CIFS |cifs |file |yes |no^2^ |yes
+|Proxmox Backup |pbs |both |yes |n/a |yes
+|GlusterFS |glusterfs |file |yes |no^2^ |yes
+|CephFS |cephfs |file |yes |yes |yes
+|LVM |lvm |block |no^3^ |no |yes
+|LVM-thin |lvmthin |block |no |yes |yes
+|iSCSI/kernel |iscsi |block |yes |no |yes
+|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
+|Ceph/RBD |rbd |block |yes |yes |yes
+|ZFS over iSCSI |zfs |block |yes |yes |yes
+|===========================================================
+
+^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
+block device functionality.
+
+^2^: On file based storages, snapshots are possible with the 'qcow2' format.
+
+^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
+That way you get a `shared` LVM storage.
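As an illustration of the LVM-over-iSCSI setup described above, such a configuration could look like the following sketch in `/etc/pve/storage.cfg`. The portal address, target IQN, base volume and volume group names are placeholders, not values from this document:

[source]
----
# Sketch: LVM layered on top of an iSCSI LUN to obtain shared LVM storage.
# Portal, target, base volume and volume group below are hypothetical.
iscsi: san
        portal 192.0.2.10
        target iqn.2003-01.org.example:storage
        content none

lvm: san-lvm
        base san:0.0.0.scsi-lun0
        vgname sanvg
        shared 1
        content images
----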
Thin Provisioning
~~~~~~~~~~~~~~~~~
-A number of storages, and the Qemu image format `qcow2`, support 'thin
+A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.
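The effect can be illustrated outside of {pve} with a sparse file, which behaves like a thin-provisioned volume: the apparent size is fixed up front, but only the blocks that are actually written consume space on the underlying storage. A minimal, self-contained sketch:

```python
import os
import tempfile

# Create a file with a 1 GiB apparent size, then write a single 4 KiB
# block; only the written block is actually allocated on disk.
fd, path = tempfile.mkstemp()
try:
    os.ftruncate(fd, 1 << 30)           # apparent (virtual) size: 1 GiB
    os.pwrite(fd, b"\xff" * 4096, 0)    # allocate just one block
    st = os.stat(path)
    virtual_size = st.st_size           # 1073741824 bytes
    allocated = st.st_blocks * 512      # a few KiB, far below 1 GiB
    print(virtual_size, allocated)
finally:
    os.close(fd)
    os.remove(path)
```

A thin-provisioned `qcow2` image or LVM-thin volume behaves the same way: the guest sees the full virtual size while the storage only holds the written blocks.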
images:::
-KVM-Qemu VM images.
+QEMU/KVM VM images.
rootdir:::
maxfiles::
-Maximum number of backup files per VM. Use `0` for unlimited.
+Deprecated, please use `prune-backups` instead. Maximum number of backup files
+per VM. Use `0` for unlimited.
+
+prune-backups::
+
+Retention options for backups. For details, see
+xref:vzdump_retention[Backup Retention].
format::
Default image format (`raw|qcow2|vmdk`).
+preallocation::
+
+Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
+file-based storages. The default is `metadata`, which is treated like `off` for
+`raw` images. When using network storages in combination with large `qcow2`
+images, using `off` can help to avoid timeouts.
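Taken together, these options might appear in a `/etc/pve/storage.cfg` entry like the following sketch; the storage name, server address and paths are hypothetical:

[source]
----
# Hypothetical NFS storage entry combining the options described above
nfs: backup-nfs
        server 192.0.2.20
        export /export/pve
        path /mnt/pve/backup-nfs
        content images,backup
        format qcow2
        preallocation off
        prune-backups keep-last=3,keep-daily=7,keep-weekly=4
----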
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
* link:/wiki/Storage:_CIFS[Storage: CIFS]
+* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]
+
* link:/wiki/Storage:_RBD[Storage: RBD]
* link:/wiki/Storage:_CephFS[Storage: CephFS]
include::pve-storage-cifs.adoc[]
+include::pve-storage-pbs.adoc[]
+
include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]
include::pve-storage-cephfs.adoc[]
+include::pve-storage-btrfs.adoc[]
+
+include::pve-storage-zfs.adoc[]
ifdef::manvolnum[]