[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types.
Most modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

File level storages allow access to a fully featured (POSIX) file
system. They are more flexible than block level storage, and allow
you to store content of any type. ZFS is probably the most advanced
system, and it has full support for snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.
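
For example, such a setup could look like the following `storage.cfg`
sketch. The storage names, portal address, target name, volume group
and `base` volume below are placeholders; the `base` value depends on
the LUN your iSCSI target actually exports:

----
iscsi: san
        portal 192.168.0.100
        target iqn.2003-01.org.example:san
        content none

lvm: san-lvm
        vgname vg-san
        base san:0.0.0.scsi-example-lun
        content images,rootdir
        shared 1
----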


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
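
As a sketch of this scenario, such a 32 GB `qcow2` image could be
allocated for VM 100 on the `local` storage with the `pvesm alloc`
command described later in this chapter (VMID and volume name are
just examples):

 pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2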

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.
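
To keep an eye on the free space of all active storages, you can use
the `pvesm status` command described later in this chapter:

 pvesm status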


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. The
configuration sketch after this list combines several of them.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

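As mentioned above, the following sketch combines several of these
common properties in one `storage.cfg` entry; the storage name,
server address, export path and node names are placeholders:

----
nfs: shared-backups
        path /mnt/pve/shared-backups
        server 10.0.0.2
        export /backups
        content backup
        maxfiles 3
        nodes node1,node2
----
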
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
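
For example, with the default directory storage, the ISO volume shown
above would resolve to a path below `/var/lib/vz` (the output is
illustrative):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso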


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.

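You can get an overview of the available subcommands and their
options with the built-in help:

 pvesm help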


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
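
For instance, a hypothetical NFS storage for ISO images and container
templates could be added like this (server address and export path
are placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl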

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show the file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]