[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any Block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |both^1^|no    |yes      |yes
|Directory      |dir         |file   |no    |no^2^    |yes
|BTRFS          |btrfs       |file   |no    |yes      |technology preview
|NFS            |nfs         |file   |yes   |no^2^    |yes
|CIFS           |cifs        |file   |yes   |no^2^    |yes
|Proxmox Backup |pbs         |both   |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file   |yes   |no^2^    |yes
|CephFS         |cephfs      |file   |yes   |yes      |yes
|LVM            |lvm         |block  |no^3^ |no       |yes
|LVM-thin       |lvmthin     |block  |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block  |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block  |yes   |no       |yes
|Ceph/RBD       |rbd         |block  |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block  |yes   |yes      |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2' format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
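
One possible way to get such a shared LVM storage is to first add the iSCSI
target with content type `none`, and then layer an LVM storage on top of one
of its LUNs. A rough sketch, where the portal address, target name, volume
group, and base volume are all placeholders:

 pvesm add iscsi san --portal 10.0.0.1 --target iqn.2006-04.example:storage --content none
 pvesm add lvm san-lvm --vgname vg-san --base san:0.0.0.scsi-example --shared 1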


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

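You can observe the difference between the virtual size and the actual
allocation of a thin-provisioned image with standard tools. A minimal sketch
using `qemu-img` (the file name is just an example):

 # create a 32G qcow2 image; only metadata is written initially
 qemu-img create -f qcow2 example-disk.qcow2 32G

 # 'virtual size' shows 32 GiB, 'disk size' shows the actual allocation
 qemu-img info example-disk.qcow2
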
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or carefully observe
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

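As an illustration, a hypothetical NFS pool (the server address and export
path are placeholders) could be defined as follows:

----
nfs: backup-nfs
        server 10.0.0.10
        export /mnt/tank/backup
        path /mnt/pve/backup-nfs
        content backup
----
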
To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects that the volume IDs pointing to an image are unique.
Choosing different content types for _aliased_ storage configurations can be
fine, but is not recommended.

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.

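Putting several of these properties together, a hypothetical directory storage
restricted to two nodes (all names and paths are placeholders) might look like
this:

----
dir: archive
        path /mnt/archive
        content backup,iso
        nodes node1,node2
        prune-backups keep-last=3,keep-weekly=2
        preallocation off
----
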
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

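For a directory-backed storage like `local`, this simply resolves to a path
below the configured directory, for example:

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso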

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

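The encoding depends on the backend. A directory storage places the owner's
VMID in a subdirectory, while LVM-thin encodes it directly in the logical
volume name. Both volume IDs below are illustrative:

 local:230/vm-230-disk-0.qcow2
 local-lvm:vm-230-disk-0
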
When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

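On success, the command prints the volume ID of the newly allocated volume.
For a VMID of 100 on the default `local` storage, the output would resemble
(the exact name depends on already existing volumes):

 successfully created 'local:100/vm-100-disk-0.raw'
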
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

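The output lists one line per storage pool with its type, status, and usage;
sizes are shown in kibibytes. Illustrative output from a default LVM-based
installation:

 Name             Type     Status           Total            Used       Available        %
 local             dir     active        30316484        4320480        24432676   14.25%
 local-lvm     lvmthin     active        61267968              0        61267968    0.00%
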
List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format qcow2+size is different from the qcow2 format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1

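The counterpart on the receiving side is `pvesm import`, which reads such a
stream back into a volume. A sketch of re-importing the exported file into a
new volume (the target volume ID is just an example):

 pvesm import local:104/vm-104-disk-0.qcow2 qcow2+size target --with-snapshots 1
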
ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]