[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows storing large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and GlusterFS are distributed systems, replicating storage
data to different nodes.

File level storage::

File level storages allow access to a fully featured (POSIX) file
system. They are more flexible and allow you to store any content
type. ZFS is probably the most advanced system, and it has full
support for snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|NFS            |nfs         |file  |yes   |no^1^    |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.

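For illustration, such a setup might look roughly like the following
`/etc/pve/storage.cfg` sketch. All names, addresses and the LUN identifier
are made-up examples, and the volume group is assumed to have been created
beforehand on the iSCSI LUN:

----
# iSCSI pool providing the LUN (not used for volumes directly)
iscsi: san
        portal 10.0.0.10
        target iqn.2003-01.org.example:storage.lun1
        content none

# LVM pool on a volume group created on that LUN, usable from all nodes
lvm: san-lvm
        vgname vgsan
        base san:0.0.0.scsi-example
        shared
        content images,rootdir
----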

Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.

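For illustration, you can observe this effect on a directory based storage
with the `qcow2` format using `qemu-img`. The path below is only an example,
assuming the default `local` storage and a hypothetical VM ID 100:

----
# create a 32 GB qcow2 image; initially only metadata is written
qemu-img create -f qcow2 /var/lib/vz/images/100/vm-100-disk-1.qcow2 32G

# 'virtual size' reports 32G, while 'disk size' shows what is actually used
qemu-img info /var/lib/vz/images/100/vm-100-disk-1.qcow2
du -h /var/lib/vz/images/100/vm-100-disk-1.qcow2
----
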
All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such situations.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value, but some
come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows storing container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`)


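As a sketch of how these properties combine, the following made-up
`storage.cfg` entry describes an NFS pool that is restricted to two
(hypothetical) nodes, only stores ISO images and backups, and keeps at most
three backup files per VM:

----
nfs: backup-nfs
        server 10.0.0.20
        export /srv/backups
        path /mnt/pve/backup-nfs
        nodes node1,node2
        content iso,backup
        maxfiles 3
----
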
WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

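For example, with the default `local` directory storage the ISO volume from
above would resolve to a path below `/var/lib/vz` (the exact output depends
on your configuration):

 pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso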

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show the file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]



ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]