From: Wolfgang Link
Date: Thu, 5 Apr 2018 12:08:24 +0000 (+0200)
Subject: Add documentation for CIFS Storage Plugin.
X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=commitdiff_plain;h=de14ebffd27b0985272a6d77969a065a4a3a53a3

Add documentation for CIFS Storage Plugin.
---

diff --git a/pve-intro.adoc b/pve-intro.adoc
index 1188e77..f0b0d1e 100644
--- a/pve-intro.adoc
+++ b/pve-intro.adoc
@@ -106,6 +106,7 @@ We currently support the following Network storage types:
* LVM Group (network backing with iSCSI targets)
* iSCSI target
* NFS Share
+* CIFS Share
* Ceph RBD
* Directly use iSCSI LUNs
* GlusterFS
@@ -125,7 +126,7 @@ running Containers and KVM guests.
It basically creates an archive of the VM or CT data which includes
the VM/CT configuration files.
KVM live backup works for all storage types including VM images on
-NFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
+NFS, CIFS, iSCSI LUN, Ceph RBD or Sheepdog. The new backup format is
optimized for storing VM backups fast and effective (sparse files, out
of order data, minimized I/O).

diff --git a/pve-storage-cifs.adoc b/pve-storage-cifs.adoc
new file mode 100644
index 0000000..38f30fc
--- /dev/null
+++ b/pve-storage-cifs.adoc
@@ -0,0 +1,99 @@
CIFS Backend
------------
ifdef::wiki[]
:pve-toplevel:
:title: Storage: CIFS
endif::wiki[]

Storage pool type: `cifs`

The CIFS backend is based on the directory backend, so it shares most
properties. The directory layout and the file naming conventions are
the same. The main advantage is that you can directly configure the
CIFS server, so the backend can mount the share automatically in
the whole cluster. There is no need to modify `/etc/fstab`. The backend
can also test if the server is online, and provides a method to query
the server for exported shares.

Configuration
~~~~~~~~~~~~~

The backend supports all common storage properties, except the shared
flag, which is always set.
Additionally, the following properties are
used to configure the CIFS server:

server::

Server IP or DNS name. To avoid DNS lookup delays, it is usually
preferable to use an IP address instead of a DNS name - unless you
have a very reliable DNS server, or list the server in the local
`/etc/hosts` file.

share::

CIFS share (as listed by `pvesm cifsscan`).

Optional properties:

username::

If not present, "guest" is used.

password::

The user password.
It will be saved in a private directory (/etc/pve/priv/.cred).

domain::

Sets the domain (workgroup) of the user.

smbversion::

SMB protocol version (default is `3`).
SMB1 is not supported due to security issues.

path::

The local mount point (defaults to `/mnt/pve//`).

.Configuration Example (`/etc/pve/storage.cfg`)
----
cifs: backup
	path /mnt/pve/backup
	server 10.0.0.11
	share VMData
	content backup
	username anna
	smbversion 3
----

Storage Features
~~~~~~~~~~~~~~~~

CIFS does not support snapshots, but the backend uses `qcow2` features
to implement snapshots and cloning.

.Storage features for backend `cifs`
[width="100%",cols="m,m,3*d",options="header"]
|==============================================================================
|Content types |Image formats |Shared |Snapshots |Clones
|images rootdir vztmpl iso backup |raw qcow2 vmdk subvol |yes |qcow2 |qcow2
|==============================================================================

Examples
~~~~~~~~

You can get a list of exported CIFS shares with:

 # pvesm cifsscan [--username ] [--password]

ifdef::wiki[]

See Also
~~~~~~~~

* link:/wiki/Storage[Storage]

endif::wiki[]

diff --git a/pvesm.adoc b/pvesm.adoc
index 62d190e..1d55d59 100644
--- a/pvesm.adoc
+++ b/pvesm.adoc
@@ -71,6 +71,7 @@ snapshots and clones.
|ZFS (local) |zfspool |file |no |yes |yes
|Directory |dir |file |no |no^1^ |yes
|NFS |nfs |file |yes |no^1^ |yes
+|CIFS |cifs |file |yes |no^1^ |yes
|GlusterFS |glusterfs |file |yes |no^1^ |yes
|LVM |lvm |block |no^2^ |no |yes
|LVM-thin |lvmthin |block |no |yes |yes
@@ -370,6 +371,8 @@ See Also

* link:/wiki/Storage:_NFS[Storage: NFS]

+* link:/wiki/Storage:_CIFS[Storage: CIFS]
+
* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

@@ -386,6 +389,8 @@ include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

+include::pve-storage-cifs.adoc[]
+
include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

diff --git a/qm.adoc b/qm.adoc
index 154c5c1..5fba463 100644
--- a/qm.adoc
+++ b/qm.adoc
@@ -163,7 +163,7 @@ On each controller you attach a number of emulated hard disks, which are backed
by a file or a block device residing in the configured storage. The choice of a
storage type will determine the format of the hard disk image. Storages which
present block devices (LVM, ZFS, Ceph) will require the *raw disk image format*,
-whereas files based storages (Ext4, NFS, GlusterFS) will let you to choose
+whereas file based storages (Ext4, NFS, CIFS, GlusterFS) will let you choose
either the *raw disk image format* or the *QEMU image format*.

* the *QEMU image format* is a copy on write format which allows snapshots, and

diff --git a/vzdump.adoc b/vzdump.adoc
index 0461140..193e1cf 100644
--- a/vzdump.adoc
+++ b/vzdump.adoc
@@ -111,7 +111,7 @@ started (resumed) again. This results in minimal downtime, but needs
additional space to hold the container copy.
+
When the container is on a local file system and the target storage of
-the backup is an NFS server, you should set `--tmpdir` to reside on a
+the backup is an NFS/CIFS server, you should set `--tmpdir` to reside on a
local file system too, as this will result in a many fold performance
improvement.
Use of a local `tmpdir` is also required if you want to backup a local container using ACLs in suspend mode if the backup
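The `pvesm cifsscan` subcommand and the `storage.cfg` stanza documented in this commit suggest the following command-line workflow. This is only a sketch reusing the server address, share, and username from the documentation's own example; the host, credentials, and the exact way the password is supplied are assumptions and may differ between Proxmox VE versions:

```shell
# List the shares exported by the example server (hypothetical host);
# the --password flag makes pvesm prompt for the password interactively.
pvesm cifsscan 10.0.0.11 --username anna --password

# Register the VMData share as a storage named "backup", limited to
# backup content, mirroring the storage.cfg example in the new docs.
pvesm add cifs backup --server 10.0.0.11 --share VMData \
    --content backup --username anna --smbversion 3

# Once added, the share is mounted under /mnt/pve/backup on every node.
pvesm status
```

Because the shared flag is always set for this backend, the storage becomes available cluster-wide as soon as it is added; no per-node `/etc/fstab` entries are needed.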