[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

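Such a setup typically combines an `iscsi` storage (used only to access the
LUN) with an `lvm` storage on top of it. The following configuration is only
an illustrative sketch: the storage IDs, portal address, target name and base
volume are placeholders that depend on your SAN setup.

----
iscsi: mysan
        portal 10.0.0.1
        target iqn.2001-05.com.example:storage.target01
        content none

lvm: mysan-lvm
        vgname vgsan
        base mysan:0.0.1.scsi-example-lun
        shared 1
        content images,rootdir
----
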
Thin Provisioning
-----------------

A number of storages, and the Qemu image format `qcow2`, support
_thin provisioning_. With thin provisioning activated, only the blocks
that the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after installing
the guest OS, the root filesystem of the VM contains 3GB of data. In that
case only 3GB are written to the storage, even if the guest VM sees a 32GB
hard drive. In this way thin provisioning allows you to create disk images
which are larger than the currently available storage blocks. You can create
large disk images for your VMs, and when the need arises, add more disks to
your storage without resizing the VMs' filesystems.

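You can observe the same effect outside of {pve}: a freshly created `qcow2`
image reports its full virtual size while occupying almost no space on the
underlying storage. The file name below is just a placeholder:

 qemu-img create -f qcow2 vm-disk.qcow2 32G
 qemu-img info vm-disk.qcow2
 du -h vm-disk.qcow2

The `qemu-img info` and `du` output show the virtual size (32G) next to the
much smaller amount of space actually allocated on disk.
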
All storage types which have the 'Snapshots' feature also support thin
provisioning.

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers to
the directory '/var/lib/vz' and is automatically generated at installation
time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

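For example, a backup NFS storage restricted to two nodes could look like
this in '/etc/pve/storage.cfg'. This is only a sketch: the storage ID, server
address, export path and node names are placeholders you would adapt to your
setup.

----
nfs: backup-nfs
        server 10.0.0.10
        export /srv/pve-backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----
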
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

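For example, on the default `local` directory storage the ISO volume from
above resolves to a plain file path:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which would print something like
'/var/lib/vz/template/iso/debian-501-amd64-netinst.iso' (the exact path
depends on the storage configuration).
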
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

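You can also give the volume an explicit name and format. The VM ID and
volume name below are just placeholders:

 pvesm alloc local 100 vm-100-disk-1.qcow2 4G --format qcow2
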
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/index.php/Storage:_Directory[Storage: Directory]

* link:/index.php/Storage:_GlusterFS[Storage: GlusterFS]

* link:/index.php/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/index.php/Storage:_iSCSI[Storage: iSCSI]

* link:/index.php/Storage:_LVM[Storage: LVM]

* link:/index.php/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/index.php/Storage:_NFS[Storage: NFS]

* link:/index.php/Storage:_RBD[Storage: RBD]

* link:/index.php/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]