[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |Plugin type |Level  |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |both^1^|no     |yes       |yes
|Directory      |dir         |file   |no     |no^2^     |yes
|BTRFS          |btrfs       |file   |no     |yes       |technology preview
|NFS            |nfs         |file   |yes    |no^2^     |yes
|CIFS           |cifs        |file   |yes    |no^2^     |yes
|Proxmox Backup |pbs         |both   |yes    |n/a       |yes
|GlusterFS      |glusterfs   |file   |yes    |no^2^     |yes
|CephFS         |cephfs      |file   |yes    |yes       |yes
|LVM            |lvm         |block  |no^3^  |no        |yes
|LVM-thin       |lvmthin     |block  |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block  |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block  |yes    |no        |yes
|Ceph/RBD       |rbd         |block  |yes    |yes       |yes
|ZFS over iSCSI |zfs         |block  |yes    |yes       |yes
|===========================================================

^1^: Disk images for VMs are stored in ZFS volume (zvol) datasets, which provide
block device functionality.

^2^: On file based storages, snapshots are possible with the 'qcow2' format.

^3^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.

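For example, a hypothetical configuration that uses an iSCSI LUN as the base
device for a shared LVM storage could look like this (portal, target, volume
group and base volume are placeholders, not defaults):

----
# hypothetical example - addresses and names are placeholders
iscsi: iscsi-storage
        portal 192.168.1.100
        target iqn.2003-01.org.example.storage:disk1
        content none

lvm: shared-lvm
        vgname vg-on-iscsi
        base iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
        shared 1
        content images,rootdir
----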

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
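
As a hypothetical illustration on a directory storage, a freshly created
`qcow2` image advertises its full virtual size while occupying almost no space
on the underlying file system (the path and VMID below are placeholders):

----
# create a 32 GB qcow2 image; initially only metadata is written
qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-0.qcow2 32G

# reports a virtual size of 32 GiB
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2

# the space actually used on disk is only a few hundred kilobytes
du -h /var/lib/vz/images/100/vm-100-disk-0.qcow2
----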

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

CAUTION: It is problematic to have multiple storage configurations pointing to
the exact same underlying storage. Such an _aliased_ storage configuration can
lead to two different volume IDs ('volid') pointing to the exact same disk
image. {pve} expects each disk image to be referenced by exactly one unique
volume ID. Choosing different content types for _aliased_ storage
configurations can be fine, but is not recommended.
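
For example, the following hypothetical configuration aliases the same
directory under two different storage IDs and should be avoided:

----
# hypothetical example - both entries point at the same directory
dir: backup-a
        path /mnt/backup
        content backup

dir: backup-b
        path /mnt/backup
        content backup
----

Here every file below `/mnt/backup` is reachable through two different volume
IDs, one per storage ID, which is exactly the aliasing described above.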

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. An example
pool definition combining several of them is shown after the list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

QEMU/KVM VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Indicate that this is a single storage with the same contents on all nodes (or
all listed in the 'nodes' option). It will not make the contents of a local
storage automatically accessible to other nodes; it just marks an already
shared storage as such!

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].

format::

Default image format (`raw|qcow2|vmdk`)

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.
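
As a hypothetical example, the following NFS pool combines several of these
properties: it is restricted to two nodes, limited to ISO images and backups,
keeps only a few backups, and disables preallocation (server, export, node
names and retention values are placeholders):

----
# hypothetical example - server, export and node names are placeholders
nfs: iso-and-backup
        server 10.0.0.10
        export /srv/nfs/pve
        path /mnt/pve/iso-and-backup
        content iso,backup
        nodes node1,node2
        prune-backups keep-last=3,keep-weekly=2
        preallocation off
----

Such properties can usually also be changed later on an existing pool with
`pvesm set`, for example `pvesm set iso-and-backup --preallocation off`.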

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage-type-dependent
volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
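
For example, assuming the default `local` directory storage, the ISO volume
from above resolves to a path below `/var/lib/vz`:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which should print `/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.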


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.
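
For example, allocating a new volume for a hypothetical guest 230 on the
default `local` storage:

 pvesm alloc local 230 '' 4G

returns a volume ID such as `local:230/vm-230-disk-0.raw`, where the owning
VMID 230 is encoded in both the directory and the auto-generated volume name
(the exact name may differ).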

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command-line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command-line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --content iso

List container templates

 pvesm list <STORAGE_ID> --content vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Exporting the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` is different from the `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM.
This also holds for the other formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
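
The counterpart on the receiving side is `pvesm import`, which reads such a
stream back in. A hedged sketch, assuming the same volume ID is used on the
target and that the exported data was written to the file `target` (the
available options may differ between versions, see `pvesm help import`):

 pvesm import local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1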

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_ISCSI[Storage: ZFS over ISCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
