[[chapter_storage]]
ifdef::manvolnum[]
pvesm(1)
========
:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]
ifndef::manvolnum[]
{pve} Storage
=============
:pve-toplevel:
endif::manvolnum[]
ifdef::wiki[]
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

File level storage::

File level based storage technologies allow access to a fully featured (POSIX)
file system. They are in general more flexible than any block level storage
(see below), and allow you to store content of any type. ZFS is probably the
most advanced system, and it has full support for snapshots and clones.

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS and GlusterFS are distributed systems, replicating storage
data to different nodes.


.Available storage types
[width="100%",cols="<2d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no^1^    |yes
|BTRFS          |btrfs       |file  |no    |yes      |technology preview
|NFS            |nfs         |file  |yes   |no^1^    |yes
|CIFS           |cifs        |file  |yes   |no^1^    |yes
|Proxmox Backup |pbs         |both  |yes   |n/a      |yes
|GlusterFS      |glusterfs   |file  |yes   |no^1^    |yes
|CephFS         |cephfs      |file  |yes   |yes      |yes
|LVM            |lvm         |block |no^2^ |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

^1^: On file based storages, snapshots are possible with the 'qcow2' format.

^2^: It is possible to use LVM on top of an iSCSI or FC-based storage.
That way you get a `shared` LVM storage.
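
A sketch of such a setup, using hypothetical storage IDs, portal address and
volume group name (the `base` volume identifier depends on your SAN):

----
iscsi: san
        portal 10.0.0.20
        target iqn.2003-01.org.example:storage
        content none

lvm: san-lvm
        vgname vg-san
        base san:0.0.0.scsi-example
        shared 1
        content images,rootdir
----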


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the QEMU image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32 GB hard disk, and after
installing the guest system OS, the root file system of the VM contains
3 GB of data. In that case only 3 GB are written to the storage, even
if the guest VM sees a 32 GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
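
You can observe this with `qemu-img info`, which reports both the virtual
size and the space actually allocated. The path below assumes a `dir`
storage under `/var/lib/vz` and a hypothetical VMID of 100:

 # qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
 image: /var/lib/vz/images/100/vm-100-disk-0.qcow2
 file format: qcow2
 virtual size: 32 GiB (34359738368 bytes)
 disk size: 3 GiB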

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive IO errors. This can cause file system inconsistencies
and may corrupt your data. It is therefore advisable to avoid
over-provisioning your storage resources, or to carefully monitor
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case, such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        <property>
        ...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value. Some have
reasonable defaults, in which case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.
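+
For example, a hypothetical NFS pool (names and addresses are placeholders)
that should only be used from two of the cluster nodes:
+
----
nfs: shared-iso
        path /mnt/pve/shared-iso
        server 10.0.0.10
        export /export/iso
        content iso
        nodes node1,node2
----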

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-QEMU VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

snippets:::

Snippet files, for example guest hook scripts.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Deprecated, please use `prune-backups` instead. Maximum number of backup files
per VM. Use `0` for unlimited.

prune-backups::

Retention options for backups. For details, see
xref:vzdump_retention[Backup Retention].
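+
For example, a retention setting keeping the last 3 backups, 7 daily and 4
weekly ones could look like this in `storage.cfg` (the `keep-*` options are
described in the Backup Retention section):
+
----
prune-backups keep-last=3,keep-daily=7,keep-weekly=4
----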

format::

Default image format (`raw|qcow2|vmdk`).

preallocation::

Preallocation mode (`off|metadata|falloc|full`) for `raw` and `qcow2` images on
file-based storages. The default is `metadata`, which is treated like `off` for
`raw` images. When using network storages in combination with large `qcow2`
images, using `off` can help to avoid timeouts.
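+
Like other storage properties, this can be changed with `pvesm set`; a sketch
for a hypothetical storage `bigdata`:
+
 pvesm set bigdata --preallocation off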

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
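
For example, for an ISO image on the default `dir` storage `local`, this
resolves to a path under `/var/lib/vz` (output assumes the standard
directory layout):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso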


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example, volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low-level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
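
Assuming VMID `100` and the default `raw` format, the auto-generated volume
ID will look like `local:100/vm-100-disk-0.raw`.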

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

Export the volume `local:103/vm-103-disk-0.qcow2` to the file `target`.
This is mostly used internally with `pvesm import`.
The stream format `qcow2+size` differs from the plain `qcow2` format.
Consequently, the exported file cannot simply be attached to a VM. The
same holds for the other export formats.

 pvesm export local:103/vm-103-disk-0.qcow2 qcow2+size target --with-snapshots 1
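
Such a stream can later be restored with `pvesm import`. A sketch, assuming a
hypothetical new target volume; consult the `pvesm` man page for the full
synopsis:

 pvesm import local:103/vm-103-disk-1.qcow2 qcow2+size target --with-snapshots 1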

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_CIFS[Storage: CIFS]

* link:/wiki/Storage:_Proxmox_Backup_Server[Storage: Proxmox Backup Server]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_CephFS[Storage: CephFS]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-cifs.adoc[]

include::pve-storage-pbs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]

include::pve-storage-cephfs.adoc[]

include::pve-storage-btrfs.adoc[]

include::pve-storage-zfs.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]