[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured
(POSIX) file system. They are in general more flexible than block level
storage (see below), and allow you to store content of any type. ZFS is
probably the most advanced system, and it has full support for snapshots
and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.

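For example, to get snapshot support on a plain directory storage, the
disk image has to use the `qcow2` format. The VMID `100` below is just a
placeholder:

 # allocate a 4G qcow2 image, which supports snapshots, on `local`
 pvesm alloc local 100 '' 4G --format qcow2
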
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
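
For example, on an LVM-thin based installation you can watch how little
space a freshly allocated disk actually consumes, using the standard LVM
tools. The VMID `100` below is just a placeholder; `local-lvm` and
`pve/data` are the names used by the default setup:

 # allocate a 32GB disk for VM 100 on the default thin pool
 pvesm alloc local-lvm 100 '' 32G

 # the Data% column shows how much of the thin pool is actually in use
 lvs pve/data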

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe the
free space to avoid such conditions.
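
To keep an eye on the free space, the status of all configured storages
can be checked at any time with:

 pvesm status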


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case, such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
	<property> <value>
	<property> <value>
	<property>
	...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`)

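The following entry shows how several of these properties can be
combined. The pool name, server address and export path are made up for
illustration; the pool is restricted to two nodes, stores only backups,
and keeps at most 7 backup files per VM:

----
nfs: backup-store
	export /space/backup
	path /mnt/pve/backup-store
	server 10.0.0.10
	content backup
	maxfiles 7
	nodes node1,node2
----
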
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
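
For the `local` directory storage, the ISO volume from above would, for
example, resolve to a path below `/var/lib/vz`:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso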


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
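
For instance, destroying the (hypothetical) VM 230 from above would also
delete its owned volume `local:230/example-image.raw`:

 # remove the guest together with all volumes it owns
 qm destroy 230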


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]