[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images can
either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
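
For example, such a combination could be configured like this (a
sketch in `/etc/pve/storage.cfg`; the storage IDs, portal address,
target name, volume group and base volume are hypothetical and must be
adapted to your setup):

----
iscsi: mynas
        portal 10.0.0.1
        target iqn.2006-01.example.com:nas
        content none

lvm: mynas-lvm
        vgname vgnas
        base mynas:0.0.0.scsi-example
        shared
        content rootdir,images
----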


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
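
You can observe this behavior with the `qemu-img` tool on any file
based storage (a sketch; the file name is a hypothetical example):

----
# create a 32GB qcow2 image; only metadata is written initially
qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G

# 'virtual size' reports 32G, while 'disk size' reports the much
# smaller amount of space actually allocated on the storage
qemu-img info vm-100-disk-1.qcow2
----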

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.
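
Free space on all configured storages can be checked at any time with
the `pvesm` tool (covered in more detail later in this chapter):

 pvesm status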


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. An
example using several of them follows the list below.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`)
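
For example, a directory storage restricted to two nodes, storing only
ISO images and backups, and keeping at most three backup files per VM,
could be defined like this (a sketch; the storage ID, path and node
names are hypothetical):

----
dir: backup-store
        path /mnt/backup
        content iso,backup
        maxfiles 3
        nodes node1,node2
----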

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
space on a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
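
For example, with the default `local` directory storage, the ISO
volume shown above resolves to a path below `/var/lib/vz` (an
illustration, assuming the default configuration):

 pvesm path local:iso/debian-501-amd64-netinst.iso

 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso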


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
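
For example, you can allocate a volume for VM 230 on the `local`
storage like this (a sketch; the explicit volume name is a
hypothetical example, and the exact name format is storage type
dependent):

 pvesm alloc local 230 vm-230-disk-1.raw 4G

The resulting volume `local:230/vm-230-disk-1.raw` is owned by VM 230
and is removed together with that VM.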


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]