[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible and allow you to store content of any type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |file  |no     |yes       |yes
|Directory      |dir         |file  |no     |no^1^     |yes
|NFS            |nfs         |file  |yes    |no^1^     |yes
|CIFS           |cifs        |file  |yes    |no^1^     |yes
|GlusterFS      |glusterfs   |file  |yes    |no^1^     |yes
|LVM            |lvm         |block |no^2^  |no        |yes
|LVM-thin       |lvmthin     |block |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block |yes    |no        |yes
|Ceph/RBD       |rbd         |block |yes    |yes       |yes
|Sheepdog       |sheepdog    |block |yes    |yes       |beta
|ZFS over iSCSI |zfs         |block |yes    |yes       |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.


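For example, combining an `iscsi` storage with an `lvm` storage on top of
it might look like the following sketch. The storage names, portal
address, target and base volume below are placeholders rather than values
from a real setup; see the LVM and iSCSI backend sections for the exact
options.

----
# hypothetical iSCSI target, used only as a base device
iscsi: san
        portal 10.0.0.1
        target iqn.2003-01.org.example:san
        content none

# LVM volume group created on the iSCSI LUN, marked as shared
lvm: san-lvm
        vgname vgsan
        base san:0.0.0.scsi-example-lun
        shared 1
        content images,rootdir
----
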
Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

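On a directory based storage this can be observed directly with
`qemu-img`; the file name and the sizes below are just an example:

 # create a 32GB qcow2 image - initially only metadata is written
 qemu-img create -f qcow2 vm-100-disk-0.qcow2 32G
 # after installing the guest OS, compare virtual size and actual usage
 qemu-img info vm-100-disk-0.qcow2
 # reports a virtual size of 32G, but a much smaller disk size
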
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. They
are described below, followed by a small configuration example that
combines several of them.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


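As a sketch of how several of these properties can be combined, an NFS
storage that is restricted to two nodes and only used for backups could
look like this (server address, export path and node names are
placeholders):

----
nfs: backup-nfs
        server 10.0.0.10
        export /srv/nfs/backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----
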
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

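For example, on the default `local` directory storage, the ISO volume
from above resolves to a path below `/var/lib/vz`; the exact output
depends on your configuration:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
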

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

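This is easy to see when allocating a new volume. A sketch, assuming
VM 230 and the default `local` directory storage:

 pvesm alloc local 230 '' 4G

The returned volume ID is something like `local:230/vm-230-disk-0.raw`,
with the owning VMID (230) encoded in the name.
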
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

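For example, to add an NFS storage with hypothetical server and export
values, restricted to backup content:

 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 --export /srv/nfs/backup --content backup
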
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in the `local` storage. The name is auto-generated
if you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
