[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a full featured (POSIX)
file system. They are in general more flexible than block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |file  |no     |yes       |yes
|Directory      |dir         |file  |no     |no^1^     |yes
|NFS            |nfs         |file  |yes    |no^1^     |yes
|CIFS           |cifs        |file  |yes    |no^1^     |yes
|GlusterFS      |glusterfs   |file  |yes    |no^1^     |yes
|CephFS         |cephfs      |file  |yes    |yes       |yes
|LVM            |lvm         |block |no^2^  |no        |yes
|LVM-thin       |lvmthin     |block |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block |yes    |no        |yes
|Ceph/RBD       |rbd         |block |yes    |yes       |yes
|ZFS over iSCSI |zfs         |block |yes    |yes       |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

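To illustrate ^2^: assuming you have already added an iSCSI storage and
created a volume group on one of its LUNs, a (hypothetical) shared LVM
entry could look roughly like this, where the `base` property refers to
the iSCSI volume the group was created on (all names are placeholders):

----
lvm: iscsi-lvm
        vgname <VGNAME>
        base <ISCSI_STORAGE_ID>:<LUN>
        shared 1
        content images,rootdir
----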

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
though the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

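For example, assuming the default `local` directory storage, you could
allocate a thin-provisioned `qcow2` image for a (hypothetical) VM 100
and compare its virtual size with the space it actually occupies:

 pvesm alloc local 100 '' 32G --format qcow2
 # the new image reports a virtual size of 32G, but initially occupies almost no space
 ls -lh /var/lib/vz/images/100/
 du -sh /var/lib/vz/images/100/
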
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to avoid such conditions.
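
A simple way to keep an eye on the remaining free space is the storage
status command described later in this chapter:

 pvesm status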


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

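As a (hypothetical) example combining several of these properties, an
NFS storage used only for backups on two specific nodes could be
configured like this (server address, export path and node names are
placeholders):

----
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 10.0.0.10
        export /space/backup
        content backup
        maxfiles 3
        nodes node1,node2
----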

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
space on a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

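For instance, on the default `local` directory storage, the ISO volume
listed above should resolve to a file below `/var/lib/vz` (output shown
for illustration):

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
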

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, the volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
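
A concrete (hypothetical) example, adding an NFS export as ISO and
template storage named `iso-templates` (server address and export path
are placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl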

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
