[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================
include::attributes.txt[]

:pve-toplevel:

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

ifdef::wiki[]
:pve-toplevel:
:title: Storage
endif::wiki[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package `libpve-storage-perl`) uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to store
other files (ISO images, backups, ...) on such storage types. Most modern
block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

File level based storages allow access to a full featured (POSIX) file
system. They are more flexible than block level storages, and allow you
to store content of any type. ZFS is probably the most advanced system,
and it has full support for snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |yes
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a `shared` LVM storage.


Thin Provisioning
~~~~~~~~~~~~~~~~~

A number of storages, and the Qemu image format `qcow2`, support 'thin
provisioning'. With thin provisioning activated, only the blocks that
the guest system actually uses will be written to the storage.

Say for instance you create a VM with a 32GB hard disk, and after
installing the guest OS, the root file system of the VM contains
3GB of data. In that case only 3GB are written to the storage, even
if the guest VM sees a 32GB hard drive. In this way thin provisioning
allows you to create disk images which are larger than the currently
available storage blocks. You can create large disk images for your
VMs, and when the need arises, add more disks to your storage without
resizing the VMs' file systems.
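
The mechanism can be illustrated with a sparse file on any Linux file
system; this is a generic sketch of the principle, not a {pve}-specific
command, and the file name is arbitrary:

```shell
# Create a file with a large apparent size but no allocated blocks --
# the same idea thin-provisioned storage applies to guest disks.
truncate -s 1G thin-demo.img

# Apparent size (what a guest would see): 1073741824 bytes.
stat -c %s thin-demo.img

# Allocated size (blocks actually written): close to zero.
du -k thin-demo.img | cut -f1

rm thin-demo.img
```

Writing data into such a file allocates blocks on demand, which is
exactly how a thinly provisioned guest disk grows.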

All storage types which have the ``Snapshots'' feature also support thin
provisioning.

CAUTION: If a storage runs full, all guests using volumes on that
storage receive I/O errors. This can cause file system inconsistencies
and may corrupt your data. So it is advisable to avoid
over-provisioning of your storage resources, or to carefully monitor
free space to avoid such conditions.


Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at `/etc/pve/storage.cfg`. As this file is within `/etc/pve/`, it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same ``shared'' storage is accessible from all nodes. But it is
also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.


Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
	<property> <value>
	<property> <value>
	...
----

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties require a value, but some
come with a reasonable default. In that case you can omit the value.

To be more specific, take a look at the default storage configuration
after installation. It contains one special local storage pool named
`local`, which refers to the directory `/var/lib/vz` and is always
available. The {pve} installer creates additional storage entries
depending on the storage type chosen at installation time.

.Default storage configuration (`/etc/pve/storage.cfg`)
----
dir: local
	path /var/lib/vz
	content iso,vztmpl,backup

# default image store on LVM based installation
lvmthin: local-lvm
	thinpool data
	vgname pve
	content rootdir,images

# default image store on ZFS based installation
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----
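
To make the stanza format concrete, here is a minimal awk sketch that
flattens such stanzas into `type:id property` lines. It is purely
illustrative and is not the parser `libpve-storage-perl` actually uses;
the sample input is inlined so the sketch is self-contained:

```shell
# Flatten storage.cfg-style stanzas into "type:id property ..." lines.
# Illustrative only -- not the real libpve-storage-perl parser.
parse_storage_cfg() {
    awk '
        /^#/ { next }                         # skip comment lines
        /^[a-z]+: / {                         # stanza header: <type>: <STORAGE_ID>
            split($0, h, ": "); type = h[1]; id = h[2]; next
        }
        /^[[:space:]]/ && id != "" {          # indented property line
            sub(/^[[:space:]]+/, "")
            print type ":" id " " $0
        }
    '
}

printf 'dir: local\n\tpath /var/lib/vz\n\tcontent iso,vztmpl,backup\n' \
    | parse_storage_cfg
# prints:
#   dir:local path /var/lib/vz
#   dir:local content iso,vztmpl,backup
```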


Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, cdrom iso images, container templates or container root
directories. Not all storage types support all content types. One can set
this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files (`vzdump`).

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

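Putting several of these properties together, a pool restricted to two
nodes and limited to ISO images and backups could look like the following
stanza (the storage name, server address, export path and node names here
are made up for illustration):

```
nfs: backup-store
	export /srv/nfs/pve
	path /mnt/pve/backup-store
	server 10.0.0.10
	content iso,backup
	maxfiles 3
	nodes node1,node2
```
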

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the file system path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
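
Since only the first colon separates the storage ID from the backend
specific volume name, a `<VOLUME_ID>` can be split with plain shell
parameter expansion. This is a generic sketch, not a `pvesm` feature:

```shell
# Split a <VOLUME_ID> at the first colon; the volume name itself
# may contain further characters the storage backend chooses.
volid='local:230/example-image.raw'

storage_id=${volid%%:*}   # longest ':*' suffix removed -> local
volname=${volid#*:}       # shortest '*:' prefix removed -> 230/example-image.raw

echo "storage: $storage_id, volume: $volname"
# prints: storage: local, volume: 230/example-image.raw
```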


Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for `image` type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do any
of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called `pvesm` (``{pve}
Storage Manager''), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List iso images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show file system path for a volume

 pvesm path <VOLUME_ID>

ifdef::wiki[]

See Also
--------

* link:/wiki/Storage:_Directory[Storage: Directory]

* link:/wiki/Storage:_GlusterFS[Storage: GlusterFS]

* link:/wiki/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]

* link:/wiki/Storage:_iSCSI[Storage: iSCSI]

* link:/wiki/Storage:_LVM[Storage: LVM]

* link:/wiki/Storage:_LVM_Thin[Storage: LVM Thin]

* link:/wiki/Storage:_NFS[Storage: NFS]

* link:/wiki/Storage:_RBD[Storage: RBD]

* link:/wiki/Storage:_ZFS[Storage: ZFS]

* link:/wiki/Storage:_ZFS_over_iSCSI[Storage: ZFS over iSCSI]

endif::wiki[]

ifndef::wiki[]

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-lvmthin.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]

endif::wiki[]