[[chapter-storage]]
ifdef::manvolnum[]
pvesm(1)
========
include::attributes.txt[]

:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.
The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a full-featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

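The following commands sketch such a setup (the portal address, target
name, base volume and volume group are placeholders, and the volume
group is assumed to already exist on the iSCSI LUN):

 pvesm add iscsi san --portal 10.0.0.20 --target iqn.2003-01.org.example:san1 --content none
 pvesm add lvm san-lvm --vgname vg-san --base san:0.0.0.scsi-example --shared 1 --content images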

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
though the guest VM sees a 32 GB hard drive. In this way thin
provisioning allows you to create disk images which are larger than
the currently available storage blocks. You can create large disk
images for your VMs, and when the need arises, add more disks to your
storage without resizing the VMs' file systems.

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or to carefully observe
the free space to avoid such conditions.

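On an LVM-based installation, for example, you can watch the default
thin pool (`pve/data`, see the configuration example below) with
standard LVM tools, while `pvesm status` reports the usage seen by
{pve}:

 lvs pve/data    # the Data% column shows how full the thin pool really is
 pvesm status    # space usage summary for all configured storages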

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing the storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark the storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

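For illustration, a backup store limited to two nodes might be
configured like this, combining several of the properties above (the
server address, export path and node names are placeholders):

----
nfs: backup-store
        server 10.0.0.10
        export /space/backups
        path /mnt/pve/backup-store
        content backup
        maxfiles 3
        nodes node1,node2
----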

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
space on a storage pool, it returns such a volume identifier. A volume
is identified by its `<STORAGE_ID>`, followed by a storage-type-dependent
volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

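For example, on the default `local` directory storage, the ISO volume
above resolves to a path below `/var/lib/vz` (illustrative output):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso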

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, the volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

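Because the owner is encoded in the volume name, you can, for example,
list all volumes owned by VM 230 on the `local` storage with the
`pvesm list` command described below:

 pvesm list local --vmid 230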

Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

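For instance, a concrete invocation adding an NFS storage for ISO
images and container templates could look like this (server address
and export path are placeholders):

 pvesm add nfs iso-templates --path /mnt/pve/iso-templates --server 10.0.0.10 --export /space/iso-templates --content iso,vztmpl
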
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume on the `local` storage. The name is auto-generated
if you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]