[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can be easily adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

Allows access to a full-featured (POSIX) file system. File level
storages are more flexible, and allow you to store any content
type. ZFS is probably the most advanced system, and it has full
support for snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared |Snapshots |Stable
|ZFS (local)    |zfspool     |file  |no     |yes       |yes
|Directory      |dir         |file  |no     |no        |yes
|NFS            |nfs         |file  |yes    |no        |yes
|GlusterFS      |glusterfs   |file  |yes    |no        |yes
|LVM            |lvm         |block |no     |no        |yes
|LVM-thin       |lvmthin     |block |no     |yes       |yes
|iSCSI/kernel   |iscsi       |block |yes    |no        |yes
|iSCSI/libiscsi |iscsidirect |block |yes    |no        |yes
|Ceph/RBD       |rbd         |block |yes    |yes       |yes
|Sheepdog       |sheepdog    |block |yes    |yes       |beta
|DRBD9          |drbd        |block |yes    |yes       |beta
|ZFS over iSCSI |zfs         |block |yes    |yes       |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.
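
A minimal sketch of such a setup; the storage IDs, portal address,
target name and volume group name are hypothetical, and the volume
group must be created on the iSCSI LUN beforehand (e.g. with
`vgcreate`):

----
# register the iSCSI target (hypothetical portal and target names);
# do not place disk images on the LUNs directly
pvesm add iscsi san1 --portal 10.0.0.1 --target iqn.2001-04.com.example:storage

# register the volume group created on the LUN as shared LVM storage
pvesm add lvm san1-lvm --vgname vgsan1 --shared 1 --content images
----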

Thin provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support _thin
provisioning_. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VM's file systems.
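
As a quick illustration, the following sketch allocates a 32GB thin
volume on the default `local-lvm` storage for a hypothetical VMID 100;
the volume initially consumes almost no space:

----
# allocate a 32G volume for VM 100 (hypothetical VMID);
# only blocks the guest actually writes will consume space
pvesm alloc local-lvm 100 vm-100-disk-1 32G

# 'lvs' should report the new thin volume with close to 0% data usage
lvs pve/vm-100-disk-1
----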

All storage types which have the 'Snapshots' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully observe
free space to avoid such conditions.

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case, such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory '/var/lib/vz' and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration ('/etc/pve/storage.cfg')
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; the
configuration sketch after the following list shows several of them in use.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
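
For illustration, here is a configuration sketch combining several of
these properties; the storage ID, server address, export path and node
names are hypothetical:

----
# hypothetical NFS storage, restricted to two nodes, backups only
nfs: backup-nfs
        server 10.0.0.10
        export /srv/nfs/backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----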


WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
space from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
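
For example, with the default `local` directory storage, ISO images are
stored below '/var/lib/vz/template/iso', so the command

 pvesm path local:iso/debian-501-amd64-netinst.iso

should print '/var/lib/vz/template/iso/debian-501-amd64-netinst.iso'.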

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.
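
For example, to list all volumes owned by VM 230 (the owner of the
example volume above) on the `local` storage:

 pvesm list local --vmid 230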


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind
storage pools and volume identifiers, but in real life, you are not
forced to do any of those low-level operations on the command
line. Normally, allocation and removal of volumes is done by the VM
and Container management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]


endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]