[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating
storage data to different nodes.

File level storage::

They allow access to a fully featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Ceph/CephFS    |cephfs      |file  |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

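
For example, such a shared LVM setup could be described in
`/etc/pve/storage.cfg` roughly as follows. This is only a sketch: the
portal, target, volume group and base volume below are placeholders and
have to match your actual SAN:

----
# iSCSI storage used only as the backing device
iscsi: san
        portal 10.0.0.20
        target iqn.2003-01.org.example:tgt1
        content none

# LVM storage on top of the iSCSI LUN, usable from all nodes
lvm: san-lvm
        vgname vg_san
        base san:0.0.0.scsi-example
        shared 1
        content images,rootdir
----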

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

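
For example, the effect is easy to observe on a directory based storage
with `qemu-img`. The storage name `local`, the VMID `100` and the volume
name below are placeholders only:

 # allocate a thin provisioned 32 GB qcow2 volume
 pvesm alloc local 100 vm-100-disk-1.qcow2 32G --format qcow2
 # compare the virtual size with the space actually used on disk
 qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2

`qemu-img info` reports the full 32 GB virtual size, while the reported
disk size stays small until the guest actually writes data.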

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor the
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. An
example configuration entry combining several of them follows this list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).


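
As an example, an NFS storage that holds ISO images, container templates
and backups, keeps at most three backups per guest, and is only usable
from two nodes could look roughly like this. The server address, export
path and node names are placeholders only:

----
nfs: shared-nfs
        path /mnt/pve/shared-nfs
        server 10.0.0.10
        export /space/shared-nfs
        content iso,vztmpl,backup
        maxfiles 3
        nodes node1,node2
----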

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

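
For example, assuming the default `local` directory storage, the ISO
volume from above resolves to a plain file below the storage's
`template/iso` directory:

 pvesm path local:iso/debian-501-amd64-netinst.iso

which would print something like
`/var/lib/vz/template/iso/debian-501-amd64-netinst.iso`.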

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

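
A concrete invocation could look like the following; the storage name,
server address and export path are placeholders only:

 # add an NFS backed storage that is used for backups only
 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 \
        --export /space/backups --content backup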

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]
