[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types.
Most modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a full-featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

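
A minimal sketch of what the corresponding `/etc/pve/storage.cfg` entries
could look like. The storage names and placeholder values are purely
illustrative, and it assumes a volume group was created on a LUN exported
by the iSCSI storage; the LVM backend's `base` property points to the
iSCSI volume that volume group lives on:

----
# iSCSI storage providing the LUN (not used directly for guest images)
iscsi: san
        portal <HOST[:PORT]>
        target <TARGET>
        content none

# shared LVM storage on top of the iSCSI LUN
lvm: san-lvm
        vgname <VGNAME>
        base san:<LUN>
        shared
        content images
----
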

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.
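
As an illustration on a file based storage, a thin-provisioned `qcow2` image
can be created and inspected with `qemu-img` (the image name is only an
example). The freshly created 32GB image occupies almost no space on disk,
and `qemu-img info` reports the allocated 'disk size' separately from the
'virtual size':

 qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G
 qemu-img info vm-100-disk-1.qcow2
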

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or carefully monitor the
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing the storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; an
example of setting some of them follows the property list below.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

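For example, assuming the default directory storage `local` and an NFS
storage named `backup-nfs` (the second name is just a placeholder), some of
these properties could be set from the command line like this:

 pvesm set local --content iso,vztmpl,backup
 pvesm set backup-nfs --maxfiles 3 --nodes node1,node2
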

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

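
For example, on the default `local` directory storage the ISO volume shown
above resolves to a path below `/var/lib/vz`:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which prints `/var/lib/vz/template/iso/debian-501-amd64-netinst.iso` on a
default installation. The exact mapping depends on the storage type.
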

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
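
For example, to list the volumes on the `local` storage that are owned by
the guest with VMID 230 (the VMID from the example above):

 pvesm list local --vmid 230
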


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

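
For instance, a sketch of adding an NFS share for ISO images and container
templates (the storage name, mount point, server address and export path are
placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl
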
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show the file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]