[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images can
either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all storage
technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.

43
44Storage Types
45-------------
46
47There are basically two different classes of storage types:
48
49Block level storage::
50
51Allows to store large 'raw' images. It is usually not possible to store
52other files (ISO, backups, ..) on such storage types. Most modern
53block level storage implementations support snapshots and clones.
54RADOS, Sheepdog and DRBD are distributed systems, replicating storage
55data to different nodes.
56
57File level storage::
58
59They allow access to a full featured (POSIX) file system. They are
60more flexible, and allows you to store any content type. ZFS is
61probably the most advanced system, and it has full support for
62snapshots and clones.
63
64
.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

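As an illustration of that setup, a hypothetical '/etc/pve/storage.cfg'
fragment might layer an LVM pool on top of an iSCSI pool. All names, the
portal address, the target and the base volume below are made-up
examples, and the use of the `base` property is an assumption about the
'lvm' plugin, not a verbatim sample from this manual:

----
iscsi: mynas
        portal 10.0.0.1
        target iqn.2006-01.example.com:tsn.mynas
        content none

lvm: mylvm
        base mynas:0.0.0.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
        vgname myvg
        shared 1
        content images,rootdir
----

The iSCSI pool only provides the LUN (`content none`), while the LVM
pool carved out of it is marked `shared` so all nodes may use it.
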
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers
to the directory '/var/lib/vz' and is automatically generated at
installation time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties have values,
but some of them come with reasonable defaults. In that case you can
omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
----

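Because pool definitions start in column one while properties are
indented, the format is easy to process with standard tools. A minimal
shell sketch, run here against a here-document copy of the default
configuration rather than the live '/etc/pve/storage.cfg':

----
#!/bin/sh
# Sketch: list pool type and ID from a storage.cfg-style file.
# Works on a local copy, not on the live cluster configuration.
cat > /tmp/storage.cfg.example <<'EOF'
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
EOF

# Pool headers start in column one and look like "<type>: <STORAGE_ID>";
# indented property lines and blank lines are skipped by the pattern.
awk -F': ' '/^[a-z]/ { print $1, $2 }' /tmp/storage.cfg.example
----

This prints one `type id` pair per pool (`dir local` and
`lvmthin local-lvm` for the default configuration).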
Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is usable/accessible.
One can use this property to restrict storage access to a limited set
of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


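As an illustration, a hypothetical NFS pool combining several of these
common properties could look like the fragment below. The server
address, export path and node names are invented for this sketch and
are not defaults:

----
nfs: mynfs
        path /mnt/pve/mynfs
        server 10.0.0.10
        export /space/backups
        content backup,iso
        nodes node1,node2
        maxfiles 3
----

Here `nodes` restricts the pool to two cluster members, `content`
limits it to backups and ISO images, and `maxfiles 3` keeps at most
three backup files per VM.
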
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A
volume is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

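Since a `<VOLUME_ID>` is simply `<STORAGE_ID>` and a volume name joined
by the first colon, scripts can split it with plain shell parameter
expansion. A minimal sketch, using one of the example IDs above:

----
#!/bin/sh
# Split a <VOLUME_ID> at the first colon into its two parts.
volid="local:iso/debian-501-amd64-netinst.iso"

storage_id=${volid%%:*}   # everything before the first ':'
volname=${volid#*:}       # everything after the first ':'

echo "storage: $storage_id"
echo "volume:  $volname"
----

Splitting on the *first* colon matters because some volume names (for
example iSCSI LUN paths) may themselves contain further dots and
digits.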
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


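For backends that use the directory-style naming seen in
`local:230/example-image.raw`, the owning VMID can be read back out of
the volume name itself. A sketch for that naming scheme only (other
backends encode ownership differently):

----
#!/bin/sh
# Recover the owner VMID from a directory-style volume ID
# of the form <STORAGE_ID>:<VMID>/<filename>.
volid="local:230/example-image.raw"

volname=${volid#*:}       # strip the storage ID -> 230/example-image.raw
owner=${volname%%/*}      # keep the leading path component -> 230

echo "owned by VM $owner"
----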
Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concept behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low level operations on the command
line. Normally, allocation and removal of volumes is done by the VM
and Container management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]