diff --git a/pvesm.adoc b/pvesm.adoc
index 270fc97..6776385 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -1,8 +1,8 @@
-[[chapter-storage]]
+[[chapter_storage]]
 ifdef::manvolnum[]
-PVE({manvolnum})
-================
-include::attributes.txt[]
+pvesm(1)
+========
+:pve-toplevel:
 
 NAME
 ----
@@ -10,7 +10,7 @@ NAME
 
 pvesm - Proxmox VE Storage Manager
 
-SYNOPSYS
+SYNOPSIS
 --------
 
 include::pvesm.1-synopsis.adoc[]
 
@@ -18,12 +18,14 @@ include::pvesm.1-synopsis.adoc[]
 DESCRIPTION
 -----------
 endif::manvolnum[]
-
 ifndef::manvolnum[]
 {pve} Storage
 =============
-include::attributes.txt[]
+:pve-toplevel:
 endif::manvolnum[]
+ifdef::wiki[]
+:title: Storage
+endif::wiki[]
 
 The {pve} storage model is very flexible. Virtual machine images can
 either be stored on one or several local storages, or on shared
@@ -51,7 +53,7 @@ Block level storage::
 Allows to store large 'raw' images. It is usually not possible to store
 other files (ISO, backups, ..) on such storage types. Most modern
 block level storage implementations support snapshots and clones.
-RADOS, Sheepdog and DRBD are distributed systems, replicating storage
+RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
 data to different nodes.
 
 File level storage::
@@ -67,23 +69,27 @@ snapshots and clones.
 |===========================================================
 |Description    |PVE type    |Level |Shared|Snapshots|Stable
 |ZFS (local)    |zfspool     |file  |no    |yes      |yes
-|Directory      |dir         |file  |no    |no       |yes
-|NFS            |nfs         |file  |yes   |no       |yes
-|GlusterFS      |glusterfs   |file  |yes   |no       |yes
-|LVM            |lvm         |block |no    |no       |yes
+|Directory      |dir         |file  |no    |no^1^    |yes
+|NFS            |nfs         |file  |yes   |no^1^    |yes
+|CIFS           |cifs        |file  |yes   |no^1^    |yes
+|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
+|CephFS         |cephfs      |file  |yes   |yes      |yes
+|LVM            |lvm         |block |no^2^ |no       |yes
 |LVM-thin       |lvmthin     |block |no    |yes      |yes
 |iSCSI/kernel   |iscsi       |block |yes   |no       |yes
 |iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
 |Ceph/RBD       |rbd         |block |yes   |yes      |yes
 |Sheepdog       |sheepdog    |block |yes   |yes      |beta
-|DRBD9          |drbd        |block |yes   |yes      |beta
 |ZFS over iSCSI |zfs         |block |yes   |yes      |yes
 |=========================================================
 
-TIP: It is possible to use LVM on top of an iSCSI storage. That way
+^1^: On file based storages, snapshots are possible with the 'qcow2' format.
+
+^2^: It is possible to use LVM on top of an iSCSI storage. That way
 you get a `shared` LVM storage.
 
-Thin provisioning
+
+Thin Provisioning
 ~~~~~~~~~~~~~~~~~
 
 A number of storages, and the Qemu image format `qcow2`, support 'thin
@@ -91,23 +97,24 @@ provisioning'. With thin provisioning activated, only the blocks that
 the guest system actually use will be written to the storage.
 
 Say for instance you create a VM with a 32GB hard disk, and after
-installing the guest system OS, the root filesystem of the VM contains
+installing the guest system OS, the root file system of the VM contains
 3 GB of data. In that case only 3GB are written to the storage, even
 if the guest VM sees a 32GB hard drive. In this way thin provisioning
 allows you to create disk images which are larger than the currently
 available storage blocks. You can create large disk images for your
 VMs, and when the need arises, add more disks to your storage without
-resizing the VMs filesystems.
+resizing the VMs' file systems.
 
 All storage types which have the ``Snapshots'' feature also support thin
 provisioning.
 
 CAUTION: If a storage runs full, all guests using volumes on that
-storage receives IO error. This can cause file system inconsistencies
+storage receive IO errors. This can cause file system inconsistencies
 and may corrupt your data. So it is advisable to avoid
 over-provisioning of your storage resources, or carefully observe free
 space to avoid such conditions.
+
 
 Storage Configuration
 ---------------------
 
@@ -122,10 +129,12 @@ also useful for local storage types. In this case such local storage
 is available on all nodes, but it is physically different and can have
 totally different content.
 
+
 Storage Pools
 ~~~~~~~~~~~~~
 
-Each storage pool has a `<type>`, and is uniquely identified by its `<STORAGE_ID>`. A pool configuration looks like this:
+Each storage pool has a `<type>`, and is uniquely identified by its
+`<STORAGE_ID>`. A pool configuration looks like this:
 
 ----
 <type>: <STORAGE_ID>
@@ -163,6 +172,7 @@ zfspool: local-zfs
 	content images,rootdir
 ----
 
+
 Common Storage Properties
 ~~~~~~~~~~~~~~~~~~~~~~~~~
 
@@ -211,7 +221,7 @@ You can use this flag to disable the storage completely.
 
 maxfiles::
 
-Maximal number of backup files per VM. Use `0` for unlimted.
+Maximum number of backup files per VM. Use `0` for unlimited.
 
 format::
 
@@ -241,10 +251,11 @@ like:
 
 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
 
-To get the filesystem path for a `<VOLUME_ID>` use:
+To get the file system path for a `<VOLUME_ID>` use:
 
  pvesm path <VOLUME_ID>
 
+
 Volume Ownership
 ~~~~~~~~~~~~~~~~
 
@@ -312,7 +323,7 @@ you pass an empty string as `<name>`
 
  pvesm alloc local <VMID> '' 4G
 
-Free volumes 
+Free volumes
 
  pvesm free <VOLUME_ID>
 
@@ -338,7 +349,7 @@ List container templates
 
  pvesm list <STORAGE_ID> --vztmpl
 
-Show filesystem path for a volume
+Show file system path for a volume
 
  pvesm path <VOLUME_ID>
 
@@ -361,10 +372,15 @@ See Also
 
 * link:/wiki/Storage:_NFS[Storage: NFS]
 
+* link:/wiki/Storage:_CIFS[Storage: CIFS]
+
 * link:/wiki/Storage:_RBD[Storage: RBD]
 
+* link:/wiki/Storage:_CephFS[Storage: CephFS]
+
 * link:/wiki/Storage:_ZFS[Storage: ZFS]
 
+* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
 
 endif::wiki[]
 
@@ -376,6 +392,8 @@ include::pve-storage-dir.adoc[]
 
 include::pve-storage-nfs.adoc[]
 
+include::pve-storage-cifs.adoc[]
+
 include::pve-storage-glusterfs.adoc[]
 
 include::pve-storage-zfspool.adoc[]
 
@@ -390,6 +408,8 @@ include::pve-storage-iscsidirect.adoc[]
 
 include::pve-storage-rbd.adoc[]
 
+include::pve-storage-cephfs.adoc[]
+
 ifdef::manvolnum[]
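
NOTE: As a reader aid for the CIFS support this diff documents, here is a
minimal sketch of what a matching `/etc/pve/storage.cfg` entry could look
like. It assumes the property names used by the CIFS backend (`server`,
`share`, `username`, `content`); the storage ID `backup-share` and all
values are invented for illustration, and the included
pve-storage-cifs.adoc is the authoritative reference:

----
cifs: backup-share
	server 10.0.0.5
	share backups
	content backup
	username anna
----

Once such an entry exists, the generic volume commands shown in the hunks
above apply unchanged, for example `pvesm list backup-share` to list the
volumes stored on it.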