[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

File level storage::

Allows access to a full featured (POSIX) file system. File level
storage is more flexible, and allows you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

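
For example, such a combination could look like the following sketch in
`/etc/pve/storage.cfg`. All names are placeholders, and the `base`
property must point to the iSCSI LUN volume the volume group was created
on:

----
iscsi: san
        portal <HOST[:PORT]>
        target <TARGET>
        content none

lvm: san-lvm
        vgname <VGNAME>
        base san:<BASE_VOLUME_ID>
        shared
        content images
----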

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

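
You can observe this behavior directly with `qemu-img`, which ships with
{pve}. The following sketch creates a thin provisioned 32GB `qcow2` image
on a directory storage; the file name is just an example. The reported
'virtual size' is 32G, while the 'disk size' stays close to zero until
the guest actually writes data:

 qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G
 qemu-img info vm-100-disk-1.qcow2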

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.

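
To keep an eye on free space, you can regularly query the usage of all
configured storage pools with the `pvesm` tool, which is described in
more detail below:

 pvesm status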

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case, such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

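
As an illustration, the following hypothetical NFS pool combines several
of these properties; server address, export path and node names are
placeholders:

----
nfs: backup-nfs
        path /mnt/pve/backup-nfs
        server 10.0.0.10
        export /space/backup
        content backup,iso
        nodes node1,node2
        maxfiles 2
----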

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

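
For example, on the default `local` directory storage, ISO volumes live
below `/var/lib/vz/template/iso/`. With the stock configuration, the
command

 pvesm path local:iso/debian-501-amd64-netinst.iso

prints `/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.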

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in practice, you are not forced to do
any of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]