[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be extended to support further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
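
In practice this means connecting the iSCSI target first, creating a
volume group on the exported LUN, and then registering that volume
group (a minimal sketch; `san-iscsi`, `san-vg` and `san-lvm` are
example storage IDs, and `<DEVICE>` stands for the LUN's block device
on the node):

 # connect the iSCSI target (used as raw block device only)
 pvesm add iscsi san-iscsi --portal <HOST[:PORT]> --target <TARGET> --content none
 # create an LVM volume group on the exported LUN
 vgcreate san-vg <DEVICE>
 # register the volume group as shared LVM storage
 pvesm add lvm san-lvm --vgname san-vg --shared 1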


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
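
The effect is easy to observe with a `qcow2` image and `qemu-img` (an
illustration only; the file name is arbitrary):

 # create a 32GB thin-provisioned qcow2 image
 qemu-img create -f qcow2 example-image.qcow2 32G
 # the virtual size is 32GB, but almost no space is allocated yet
 qemu-img info example-image.qcow2
 du -h example-image.qcow2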

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions.
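
Free space can be checked at any time with the storage manager (see
also the command line examples below):

 pvesm status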


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


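Combining some of these properties, a directory storage that is only
available on two nodes, stores ISO images and backups, and keeps at
most three backups per VM could look like this (a hypothetical entry;
the storage ID, path and node names are placeholders):

----
dir: backup-store
        path /mnt/backup
        content iso,backup
        nodes node1,node2
        maxfiles 3
----
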
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, the pool returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

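For example, on the default `local` storage the ISO volume from above
resolves to a path below `/var/lib/vz`:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which prints `/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`
on a default installation.
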

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

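For example, an NFS share for backups could be added like this (the
storage ID, server address and export path are placeholders):

 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 --export /exports/backup --content backup
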
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
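
With `<VMID>` set to `100`, this could return a volume ID such as
`local:100/vm-100-disk-1.raw` (the exact name depends on the storage
type and on the volumes that already exist).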

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]