[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows storing large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

These allow access to a full featured (POSIX) file system. They are
more flexible and allow you to store content of any type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

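A minimal sketch of such a setup could look like the following. The
portal address, target name and volume group name `san-vg` are made-up
placeholders, and the volume group must already exist on the iSCSI LUN
(created beforehand with `vgcreate`):

 pvesm add iscsi san --portal 10.0.0.1 --target iqn.2003-01.org.example:storage
 pvesm add lvm san-lvm --vgname san-vg --shared 1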

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

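You can observe the difference between the virtual size and the
actually allocated size yourself. The following sketch assumes the
default `local` directory storage and a hypothetical VM ID 100; the
auto-generated file name may differ on your system:

 pvesm alloc local 100 '' 32G --format qcow2
 qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2

`qemu-img info` reports both the virtual size (32G) and the disk size,
that is the space actually allocated on the storage, which stays small
until the guest writes data.
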
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe the
free space to avoid such conditions.

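A simple way to keep an eye on the utilization and free space of all
configured pools is the status command described in the Examples
section below:

 pvesm status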

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In that case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties require a value,
but some come with reasonable defaults. In that case you can omit the
value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is usable/accessible.
One can use this property to restrict storage access to a limited set
of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


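As an illustration, here is a hypothetical NFS pool combining several
of these properties; all names and addresses are made up:

----
nfs: shared-nfs
        export /export/pve
        path /mnt/pve/shared-nfs
        server 10.0.0.10
        content images,iso,backup
        nodes node1,node2
        maxfiles 2
----

This storage would only be usable on nodes `node1` and `node2`, could
hold VM images, ISO images and backups, and would keep at most two
backup files per VM.
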
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

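For example, on the default `local` directory storage, the ISO volume
shown above resolves to a location below `/var/lib/vz`; the output
line is illustrative for such a setup:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
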
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

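For instance, destroying the hypothetical VM 230 from the example
above with the `qm` management tool would also delete its owned volume
`local:230/example-image.raw`:

 qm destroy 230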

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command line.
Normally, allocation and removal of volumes is done by the VM and
Container management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]