[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a fully featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
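
For example, a shared LVM setup on top of iSCSI could look roughly like
the following sketch (the storage IDs, portal address, target name, base
volume and volume group name are made-up placeholders; see the iSCSI and
LVM backend sections below for the exact options):

----
iscsi: mysan
        portal 10.0.0.10
        target iqn.2006-01.com.example:storage
        content none

lvm: mysan-lvm
        vgname vg-on-san
        base mysan:0.0.0.scsi-example
        shared 1
        content images,rootdir
----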


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest operating system, the root file system of the VM
contains 3GB of data. In that case only 3GB are written to the storage,
even if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

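On a file based storage, you can observe this effect directly with the
`qemu-img` tool (a small illustration; the file name is just an example):

 qemu-img create -f qcow2 vm-100-disk-0.qcow2 32G
 qemu-img info vm-100-disk-0.qcow2

The `create` call only writes a small amount of metadata, and `qemu-img
info` reports the virtual size (32G) separately from the disk size, which
is the space actually allocated on the underlying file system.
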
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.
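
You can check the allocation and available space of all configured
storages at any time with:

 pvesm status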


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. The
example after this list combines several of them.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

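The following is a sketch of an NFS storage definition combining several
of these common properties (the storage ID, server address, export path
and node names are made up for illustration):

----
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 10.0.0.20
        export /space/backup
        content backup
        maxfiles 3
        nodes node1,node2
----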

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, the volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
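
For example, you can list all volumes owned by VM 230 on a given storage
with:

 pvesm list <STORAGE_ID> --vmid 230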


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show the file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
