The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be extended to include further storage types in the future.
Storage Types
There are basically two different classes of storage types:
File level storage::

File level storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.
.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared |Snapshots |Stable
|ZFS (local) |zfspool |file |no |yes |yes
|Directory |dir |file |no |no^1^ |yes
|NFS |nfs |file |yes |no^1^ |yes
|CIFS |cifs |file |yes |no^1^ |yes
|GlusterFS |glusterfs |file |yes |no^1^ |yes
|CephFS |cephfs |file |yes |yes |yes
|LVM |lvm |block |no^2^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
|iSCSI/kernel |iscsi |block |yes |no |yes
|iSCSI/libiscsi |iscsidirect |block |yes |no |yes
|Ceph/RBD |rbd |block |yes |yes |yes
|ZFS over iSCSI |zfs |block |yes |yes |yes
|=========================================================
The storage configuration is stored in `/etc/pve/storage.cfg`, which gets
automatically distributed to all cluster nodes, so all nodes share the same
storage configuration.
Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.
<type>: <STORAGE_ID>
<property> <value>
<property> <value>
	<property>
...
----
The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available.
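
For illustration, a default configuration on a fresh installation may look
roughly like the following sketch (the `local-lvm` thin pool and the exact
`content` lists depend on the installation choices, so treat the details as
assumptions):

----
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images
----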
A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.
images:::

KVM-Qemu VM images

iso:::

ISO images
snippets:::

Snippet files, for example guest hook scripts

shared::
Mark storage as shared.
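
As a hedged example, the content types a pool accepts are controlled with
the `content` property, either by editing `/etc/pve/storage.cfg` or with the
`pvesm` command line tool (the storage name `local` and the selected types
below are only examples):

----
# allow ISO images, container templates and snippets on the 'local' storage
pvesm set local --content iso,vztmpl,snippets
----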
* link:/wiki/Storage:_NFS[Storage: NFS]
* link:/wiki/Storage:_CIFS[Storage: CIFS]
* link:/wiki/Storage:_RBD[Storage: RBD]
* link:/wiki/Storage:_CephFS[Storage: CephFS]
* link:/wiki/Storage:_ZFS[Storage: ZFS]
* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]
include::pve-storage-nfs.adoc[]
include::pve-storage-cifs.adoc[]
include::pve-storage-glusterfs.adoc[]
include::pve-storage-zfspool.adoc[]
include::pve-storage-rbd.adoc[]
include::pve-storage-cephfs.adoc[]
ifdef::manvolnum[]