[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types.
Most modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and GlusterFS are distributed systems,
replicating storage data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
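
For example, such a setup could be sketched with two `pvesm add`
calls, assuming a volume group has already been created on the
exported LUN; the storage IDs and the `base` volume below are
placeholders:

 pvesm add iscsi <ISCSI_ID> --portal <HOST[:PORT]> --target <TARGET>
 pvesm add lvm <LVM_ID> --vgname <VGNAME> --base <ISCSI_ID>:<LUN_VOLUME> --shared 1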


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say, for instance, you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.
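
As a minimal illustration on the default `local` directory storage
(the VM ID and volume name below are just examples), you can compare
the virtual size of a freshly allocated `qcow2` volume with the space
it actually occupies:

 # create a 32GB qcow2 volume for a hypothetical VM 100
 pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2
 # qemu-img reports the full 32GB virtual size, while du shows
 # that the actual allocation on disk is much smaller
 qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
 du -h /var/lib/vz/images/100/vm-100-disk-1.qcow2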


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
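
As an illustration, the following hypothetical NFS pool combines
several of these properties: it is restricted to two nodes, only
stores backups, and keeps at most three backup files per VM (the
server address, export path and node names are placeholders):

----
nfs: backup-nfs
        server 10.0.0.10
        export /export/backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----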


WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
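
For example, on a default installation the ISO volume shown above
would resolve to a file below `/var/lib/vz`:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which prints:

 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso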


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
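
For example, the volumes owned by a hypothetical VM 230 could first be
inspected, and are then freed together with the VM when it is
destroyed:

 pvesm list local --vmid 230
 qm destroy 230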


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]