[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

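A minimal sketch of such a combination in '/etc/pve/storage.cfg' (the
file syntax is explained below); the portal address, target and base
volume name here are made up for illustration:

----
iscsi: san
	portal 10.0.0.5
	target iqn.2016-01.com.example:storage
	content none

lvm: san-lvm
	vgname vgsan
	base san:0.0.0.scsi-example
	shared 1
	content images
----
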
Thin provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support _thin
provisioning_. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root filesystem of the VM contains 3GB
of data. In that case only 3GB are written to the storage, even if
the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' filesystems.

All storage types which have the 'Snapshots' feature also support thin
provisioning.

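You can observe thin allocation directly with `qemu-img` on any file
based storage; a quick sketch, using an arbitrary file name:

----
# create a 32GB qcow2 image; initially only metadata is written
qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G

# 'virtual size' is what the guest sees, 'disk size' the space actually used
qemu-img info vm-100-disk-1.qcow2
----
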
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
	<property> <value>
	<property> <value>
	...
----

NOTE: There is one special local storage pool named `local`. It refers to
the directory '/var/lib/vz' and is automatically generated at installation
time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

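As an example, a pool that stores backups on NFS, keeps at most three
backup files per VM, and is only used by two of the cluster nodes
could combine several of these properties like this (the storage ID,
node names and NFS details are made up):

----
nfs: backup-nfs
	server 10.0.0.10
	export /srv/backups
	path /mnt/pve/backup-nfs
	content backup
	maxfiles 3
	nodes node1,node2
----
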
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A
volume is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

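For example, with the default setup the `local` pool resolves into
paths below '/var/lib/vz', so the following is a plausible result:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
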
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command line.
Normally, allocation and removal of volumes is done by the VM and
Container management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

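For instance, adding an NFS pool for ISO images and container
templates might look like this (the storage ID, server address and
export path are made up):

 pvesm add nfs iso-templates --server 10.0.0.10 --export /space/iso-templates \
   --path /mnt/pve/iso-templates --content iso,vztmpl
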
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
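
Assuming VMID 100 and the default `local` directory storage, this
returns a volume identifier that encodes the owner, for example
`local:100/vm-100-disk-1.raw` (the disk number depends on the volumes
that already exist).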

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/index.php/Storage:_Directory[Storage: Directory]

* link:/index.php/Storage:_GlusterFS[Storage: GlusterFS]

* link:/index.php/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/index.php/Storage:_iSCSI[Storage: iSCSI]

* link:/index.php/Storage:_LVM[Storage: LVM]

* link:/index.php/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/index.php/Storage:_NFS[Storage: NFS]

* link:/index.php/Storage:_RBD[Storage: RBD]

* link:/index.php/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]