[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily extended to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types.
Most modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a full-featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

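For example, a shared LVM storage on top of an iSCSI LUN could be set
up roughly as follows. This is only a sketch: the storage IDs, portal
address, target name and LUN device path below are placeholders you
need to replace with your own values, and the volume group must be
created only once.

 pvesm add iscsi san1 --portal 10.0.0.20:3260 --target iqn.2016-01.com.example:storage
 # create the volume group on the LUN (placeholder device path), on one node only
 vgcreate vg_san /dev/disk/by-id/<YOUR-LUN-DEVICE>
 pvesm add lvm san1-lvm --vgname vg_san --shared 1
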
Thin provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support _thin
provisioning_. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains 3GB
of data. In that case only 3GB are written to the storage, even if
the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

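You can observe this effect directly with a `qcow2` image on a
directory storage. The file name below is just an example; `qemu-img`
then reports the full virtual size, while the space actually
allocated on disk stays small until the guest writes data:

 qemu-img create -f qcow2 vm-999-disk-1.qcow2 32G
 qemu-img info vm-999-disk-1.qcow2
 # 'virtual size' shows 32G, 'disk size' only a few hundred kilobytes
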
All storage types which have the 'Snapshots' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.

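A simple way to keep an eye on free space is the `pvesm status`
command described later in this chapter, which reports the usage of
every configured storage pool:

 pvesm status
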
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers to
the directory '/var/lib/vz' and is automatically generated at installation
time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; an
example combining several of them follows this list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


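As an illustration, here is how an NFS storage restricted to two nodes
and keeping at most three backups per VM could be configured. All IDs,
addresses and paths below are made-up examples:

----
nfs: backup-store
        path /mnt/pve/backup-store
        server 10.0.0.10
        export /export/backup
        content backup
        maxfiles 3
        nodes node1,node2
----
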
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

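For example, on the default `local` directory storage the ISO volume
above resolves to a path below '/var/lib/vz'. The exact layout is
determined by the directory storage plugin:

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
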
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command
line. Normally, allocation and removal of volumes is done by the VM
and Container management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

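For instance, for a hypothetical VM with VMID 100 on the default
`local` storage, the call might look like this; the resulting volume
name is generated by the storage plugin:

 pvesm alloc local 100 '' 4G
 # creates a volume such as local:100/vm-100-disk-1.raw
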
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/index.php/Storage:_Directory[Storage: Directory]

* link:/index.php/Storage:_GlusterFS[Storage: GlusterFS]

* link:/index.php/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/index.php/Storage:_iSCSI[Storage: iSCSI]

* link:/index.php/Storage:_LVM[Storage: LVM]

* link:/index.php/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/index.php/Storage:_NFS[Storage: NFS]

* link:/index.php/Storage:_RBD[Storage: RBD]

* link:/index.php/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]