[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a fully featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |file  |no     |yes       |yes
|Directory      |dir         |file  |no     |no        |yes
|NFS            |nfs         |file  |yes    |no        |yes
|GlusterFS      |glusterfs   |file  |yes    |no        |yes
|LVM            |lvm         |block |no     |no        |yes
|LVM-thin       |lvmthin     |block |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block |yes    |no        |yes
|Ceph/RBD       |rbd         |block |yes    |yes       |yes
|Sheepdog       |sheepdog    |block |yes    |yes       |beta
|DRBD9          |drbd        |block |yes    |yes       |beta
|ZFS over iSCSI |zfs         |block |yes    |yes       |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

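A sketch of what such a setup could look like in 'storage.cfg' is shown
below. The storage IDs, portal address, target name, volume group and base
volume are placeholders only; the details of the `base` and `shared`
options are covered in the LVM and iSCSI backend sections below:

----
iscsi: san
        portal 192.168.1.100
        target iqn.2016-01.example.com:storage
        content none

lvm: san-lvm
        vgname vg-san
        base san:0.0.0.scsi-example-lun
        content images,rootdir
        shared
----
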

Thin provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support _thin
provisioning_. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root filesystem of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' filesystems.

All storage types which have the 'Snapshots' feature also support thin
provisioning.

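For example, a freshly allocated `qcow2` disk on the default directory
storage `local` takes up almost no space on disk, although the guest
already sees the full virtual size. The volume name and path below are
only illustrative; `qemu-img info` reports the virtual size next to the
much smaller actually allocated disk size:

 pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2

 qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
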

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor free
space to avoid such conditions.

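The used and available space of all configured storages can be checked at
any time with:

 pvesm status
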

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory '/var/lib/vz' and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

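As an illustration, here is a sketch of a pool definition that combines
several of these common properties. The storage ID, server address,
export path and node names are made-up examples:

----
nfs: backup-nfs
        server 10.0.0.10
        export /mnt/tank/pve-backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----
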

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

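For example, on the default directory storage `local` the ISO volume
shown above resolves to a plain file below '/var/lib/vz'. The command

 pvesm path local:iso/debian-501-amd64-netinst.iso

then prints a path like

 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso

(the exact path depends on the configured storage path).
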

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

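To see which volumes a given guest owns on a particular storage, you can
filter the storage contents by owner, for example for VM 230 on the
storage `local`:

 pvesm list local --vmid 230
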

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

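An LVM-thin pool can be added in a similar way; the option names mirror the
`lvmthin` properties shown in the default 'storage.cfg' above, and the
placeholders need to be replaced with an existing volume group and thin pool

 pvesm add lvmthin <STORAGE_ID> --vgname <VGNAME> --thinpool <LVNAME>
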

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]