[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]


DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types.
Most modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

File level storages allow access to a full featured (POSIX) file
system. They are more flexible and allow you to store any content
type. ZFS is probably the most advanced system, and it has full
support for snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

Thin provisioning
-----------------

A number of storages, and the Qemu image format `qcow2`, support
_thin provisioning_. With thin provisioning activated, only the blocks
that the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root filesystem of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' filesystems.

All storage types which have the 'Snapshots' feature also support thin
provisioning.

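The effect is the same as with an ordinary sparse file, which you can
try on any Linux system (the file name here is arbitrary):

```shell
# Create a sparse 32G file -- analogous to a thin-provisioned volume:
# the apparent size is large, but no blocks are allocated yet.
truncate -s 32G sparse.img

ls -lh sparse.img   # apparent size: 32G
du -h sparse.img    # allocated size: 0, nothing has been written

rm sparse.img
```

Only when data is actually written do the corresponding blocks get
allocated on the backing storage.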
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers
to the directory '/var/lib/vz' and is automatically generated at
installation time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties have values,
but some of them come with reasonable defaults. In that case you can
omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


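As an illustration only (storage name, server address, export path and
node names below are hypothetical), an NFS pool that stores backups,
keeps at most three backup files per VM, and is restricted to two
nodes could combine these properties like this:

----
nfs: backup-nfs
        server 10.0.0.10
        export /export/backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----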
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A
volume is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

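Since the identifier always splits at the first colon, you can take it
apart with plain POSIX shell parameter expansion (the volume ID shown
is just the first example from above):

```shell
# Split a <VOLUME_ID> into <STORAGE_ID> and the storage type
# dependent volume name at the first colon.
volid="local:230/example-image.raw"

storage_id="${volid%%:*}"   # everything before the first colon
volname="${volid#*:}"       # everything after the first colon

echo "$storage_id"          # local
echo "$volname"             # 230/example-image.raw
```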
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


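For directory based storages the owner VMID is simply the leading
component of the volume name. The following is only a sketch assuming
that naming scheme; other backends encode the owner differently:

```shell
# Recover the owning VMID from a directory style volume identifier.
volid="local:230/example-image.raw"

volname="${volid#*:}"        # 230/example-image.raw
owner_vmid="${volname%%/*}"  # leading path component

echo "$owner_vmid"           # 230
```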
Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command
line. Normally, allocation and removal of volumes is done by the VM
and Container management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/index.php/Storage:_Directory[Storage: Directory]

* link:/index.php/Storage:_GlusterFS[Storage: GlusterFS]

* link:/index.php/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/index.php/Storage:_iSCSI[Storage: iSCSI]

* link:/index.php/Storage:_LVM[Storage: LVM]

* link:/index.php/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/index.php/Storage:_NFS[Storage: NFS]

* link:/index.php/Storage:_RBD[Storage: RBD]

* link:/index.php/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]