X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=pvesm.adoc;h=7ae4063309654a86251dc55fd7d400683eee4501;hb=a366aa5937be3e370123b8dddfaf584934dffcaf;hp=ee7f598a86875a9e9361dbf3b32d97054865e704;hpb=7b43e874a2611e64d08a00b7357a31a04d303538;p=pve-docs.git

diff --git a/pvesm.adoc b/pvesm.adoc
index ee7f598..7ae4063 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -67,32 +67,36 @@ data to different nodes.
 
 .Available storage types
 [width="100%",cols="<2d,1*m,4*d",options="header"]
 |===========================================================
-|Description    |PVE type    |Level |Shared|Snapshots|Stable
-|ZFS (local)    |zfspool     |file  |no    |yes      |yes
-|Directory      |dir         |file  |no    |no^1^    |yes
-|NFS            |nfs         |file  |yes   |no^1^    |yes
-|CIFS           |cifs        |file  |yes   |no^1^    |yes
-|Proxmox Backup |pbs         |both  |yes   |n/a      |beta
-|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
-|CephFS         |cephfs      |file  |yes   |yes      |yes
-|LVM            |lvm         |block |no^2^ |no       |yes
-|LVM-thin       |lvmthin     |block |no    |yes      |yes
-|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
-|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
-|Ceph/RBD       |rbd         |block |yes   |yes      |yes
-|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
+|Description    |Plugin type |Level   |Shared|Snapshots|Stable
+|ZFS (local)    |zfspool     |both^1^ |no    |yes      |yes
+|Directory      |dir         |file    |no    |no^2^    |yes
+|BTRFS          |btrfs       |file    |no    |yes      |technology preview
+|NFS            |nfs         |file    |yes   |no^2^    |yes
+|CIFS           |cifs        |file    |yes   |no^2^    |yes
+|Proxmox Backup |pbs         |both    |yes   |n/a      |yes
+|GlusterFS      |glusterfs   |file    |yes   |no^2^    |yes
+|CephFS         |cephfs      |file    |yes   |yes      |yes
+|LVM            |lvm         |block   |no^3^ |no       |yes
+|LVM-thin       |lvmthin     |block   |no    |yes      |yes
+|iSCSI/kernel   |iscsi       |block   |yes   |no       |yes
+|iSCSI/libiscsi |iscsidirect |block   |yes   |no       |yes
+|Ceph/RBD       |rbd         |block   |yes   |yes      |yes
+|ZFS over iSCSI |zfs         |block   |yes   |yes      |yes
 |===========================================================
 
-^1^: On file based storages, snapshots are possible with the 'qcow2' format.
+^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
+block device functionality.
 
-^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
-That way you get a `shared` LVM storage.
+^2^: On file-based storages, snapshots are possible with the 'qcow2' format.
+
+^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
+That way you get a `shared` LVM storage.
 
 Thin Provisioning
 ~~~~~~~~~~~~~~~~~
 
-A number of storages, and the Qemu image format `qcow2`, support 'thin
+A number of storages, and the QEMU image format `qcow2`, support 'thin
 provisioning'. With thin provisioning activated, only the blocks that
 the guest system actually uses will be written to the storage.
 
@@ -173,6 +177,12 @@ zfspool: local-zfs
 	content images,rootdir
 ----
 
+CAUTION: It is problematic to have multiple storage configurations pointing to
+the exact same underlying storage. Such an _aliased_ storage configuration can
+lead to two different volume IDs ('volid') pointing to the exact same disk
+image. {pve} expects that the disk image a volume ID points to is unique.
+Choosing different content types for _aliased_ storage configurations can be
+fine, but is not recommended.
 
 Common Storage Properties
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -194,7 +204,7 @@ this property to select what this storage is used for.
 
 images:::
 
-KVM-Qemu VM images.
+QEMU/KVM VM images.
 
 rootdir:::
 
@@ -218,7 +228,10 @@ Snippet files, for example guest hook scripts
 
 shared::
 
-Mark storage as shared.
+Indicate that this is a single storage with the same contents on all nodes (or
+on all nodes listed in the 'nodes' option). It will not make the contents of a
+local storage automatically accessible to other nodes; it just marks an already
+shared storage as such!
 
 disable::
 
@@ -226,12 +239,24 @@ You can use this flag to disable the storage completely.
 
 maxfiles::
 
-Maximum number of backup files per VM. Use `0` for unlimited.
+Deprecated, please use `prune-backups` instead. Maximum number of backup files
+per VM. Use `0` for unlimited.
+
+prune-backups::
+
+Retention options for backups. For details, see
+xref:vzdump_retention[Backup Retention].
 
 format::
 
 Default image format (`raw|qcow2|vmdk`)
 
+preallocation::
+
+Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
+file-based storages. The default is `metadata`, which is treated like `off` for
+`raw` images. When using network storages in combination with large `qcow2`
+images, using `off` can help to avoid timeouts.
 
 WARNING: It is not advisable to use the same storage pool on different
 {pve} clusters. Some storage operations need exclusive access to the
@@ -273,7 +298,7 @@ When you remove a VM or Container, the system also
 removes all associated volumes which are owned by that VM or Container.
 
 
-Using the Command Line Interface
+Using the Command-line Interface
 --------------------------------
 
 It is recommended to familiarize yourself with the concept behind storage
@@ -282,7 +307,7 @@ of those low level operations on the command line. Normally,
 allocation and removal of volumes is done by the VM and Container
 management tools.
 
-Nevertheless, there is a command line tool called `pvesm` (``{pve}
+Nevertheless, there is a command-line tool called `pvesm` (``{pve}
 Storage Manager''), which is able to perform common storage management
 tasks.
 
@@ -348,11 +373,11 @@ List volumes allocated by VMID
 
 List iso images
 
- pvesm list --iso
+ pvesm list --content iso
 
 List container templates
 
- pvesm list --vztmpl
+ pvesm list --content vztmpl
 
 Show file system path for a volume
 
@@ -395,7 +420,7 @@ See Also
 
 * link:/wiki/Storage:_ZFS[Storage: ZFS]
 
-* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
+* link:/wiki/Storage:_ZFS_over_ISCSI[Storage: ZFS over ISCSI]
 
 endif::wiki[]
 
@@ -427,6 +452,9 @@ include::pve-storage-rbd.adoc[]
 
 include::pve-storage-cephfs.adoc[]
 
+include::pve-storage-btrfs.adoc[]
+
+include::pve-storage-zfs.adoc[]
 
 ifdef::manvolnum[]
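Reviewer note: the `zfspool: local-zfs` example touched by the hunk above uses the simple section-based layout of `/etc/pve/storage.cfg` — a `type: name` header line followed by indented `key value` property lines, with flag options (e.g. `sparse`) taking no value. For readers who script against this file, here is a minimal, hypothetical Python sketch of a parser for that layout (`parse_storage_cfg` is an illustration invented here; pve-storage itself implements this in Perl):

```python
# Hypothetical minimal parser for the storage.cfg section format:
# a "type: name" header followed by indented "key value" lines.
# Illustration only -- not code from the pve-storage package.

def parse_storage_cfg(text):
    """Parse storage.cfg-style text into {name: {"type": ..., "props": {...}}}."""
    storages = {}
    current = None
    for line in text.splitlines():
        if not line.strip():
            # Blank lines separate storage sections.
            current = None
            continue
        if not line[0].isspace():
            # Section header: "<plugin type>: <storage name>"
            stype, name = line.split(":", 1)
            current = {"type": stype.strip(), "props": {}}
            storages[name.strip()] = current
        elif current is not None:
            # Indented property line; flag options (e.g. 'sparse') get "1".
            parts = line.strip().split(None, 1)
            current["props"][parts[0]] = parts[1] if len(parts) > 1 else "1"
    return storages

cfg = """\
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
"""
parsed = parse_storage_cfg(cfg)
print(parsed["local-zfs"]["type"])           # zfspool
print(parsed["local-zfs"]["props"]["pool"])  # rpool/data
```

A parser like this is also a cheap way to spot the _aliased_ storage problem the new CAUTION paragraph warns about: collect each entry's backing property (`pool`, `path`, ...) and flag duplicates.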