1 [[chapter-storage]]
2 ifdef::manvolnum[]
3 pvesm({manvolnum})
4 ==================
5 include::attributes.txt[]
6
7 NAME
8 ----
9
10 pvesm - Proxmox VE Storage Manager
11
12
13 SYNOPSIS
14 --------
15
16 include::pvesm.1-synopsis.adoc[]
17
18 DESCRIPTION
19 -----------
20 endif::manvolnum[]
21
22 ifndef::manvolnum[]
23 {pve} Storage
24 =============
25 include::attributes.txt[]
26 endif::manvolnum[]
27
28 The {pve} storage model is very flexible. Virtual machine images
29 can either be stored on one or several local storages, or on shared
30 storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
31 configure as many storage pools as you like. You can use all
32 storage technologies available for Debian Linux.
33
34 One major benefit of storing VMs on shared storage is the ability to
35 live-migrate running machines without any downtime, as all nodes in
36 the cluster have direct access to VM disk images. There is no need to
37 copy VM image data, so live migration is very fast in that case.
38
39 The storage library (package 'libpve-storage-perl') uses a flexible
40 plugin system to provide a common interface to all storage types. This
41 can easily be adapted to include further storage types in the future.
42
43
44 Storage Types
45 -------------
46
47 There are basically two different classes of storage types:
48
49 Block level storage::
50
51 Allows you to store large 'raw' images. It is usually not possible to store
52 other files (ISO images, backups, ...) on such storage types. Most modern
53 block level storage implementations support snapshots and clones.
54 RADOS, Sheepdog and DRBD are distributed systems, replicating storage
55 data to different nodes.
56
57 File level storage::
58
59 They allow access to a full-featured (POSIX) file system. They are
60 more flexible, and allow you to store any content type. ZFS is
61 probably the most advanced system, and it has full support for
62 snapshots and clones.
63
64
65 .Available storage types
66 [width="100%",cols="<d,1*m,4*d",options="header"]
67 |===========================================================
68 |Description |PVE type |Level |Shared|Snapshots|Stable
69 |ZFS (local) |zfspool |file |no |yes |yes
70 |Directory |dir |file |no |no |yes
71 |NFS |nfs |file |yes |no |yes
72 |GlusterFS |glusterfs |file |yes |no |yes
73 |LVM |lvm |block |no |no |yes
74 |LVM-thin |lvmthin |block |no |yes |yes
75 |iSCSI/kernel |iscsi |block |yes |no |yes
76 |iSCSI/libiscsi |iscsidirect |block |yes |no |yes
77 |Ceph/RBD |rbd |block |yes |yes |yes
78 |Sheepdog |sheepdog |block |yes |yes |beta
79 |DRBD9 |drbd |block |yes |yes |beta
80 |ZFS over iSCSI |zfs |block |yes |yes |yes
81 |=========================================================
82
83 TIP: It is possible to use LVM on top of an iSCSI storage. That way
84 you get a 'shared' LVM storage.
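
For example, you could export a LUN from a SAN via iSCSI, create an LVM
volume group on it, and define an LVM storage on top of the iSCSI
storage. A minimal sketch of the resulting '/etc/pve/storage.cfg'
entries (portal address, target name, volume group and base volume are
made up):

----
iscsi: san
        portal 10.0.0.1
        target iqn.2006-01.example.com:san
        content none

lvm: san-lvm
        vgname vg_san
        base san:0.0.0.scsi-example
        shared 1
        content images,rootdir
----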
85
86 Thin provisioning
87 ~~~~~~~~~~~~~~~~~
88
89 A number of storages, and the QEMU image format `qcow2`, support _thin
90 provisioning_. With thin provisioning activated, only the blocks that
91 the guest system actually uses will be written to the storage.
92
93 Say for instance you create a VM with a 32GB hard disk, and after
94 installing the guest OS, the root filesystem of the VM contains
95 3GB of data. In that case only 3GB are written to the storage, even
96 if the guest VM sees a 32GB hard drive. In this way thin provisioning
97 allows you to create disk images which are larger than the currently
98 available storage blocks. You can create large disk images for your
99 VMs, and when the need arises, add more disks to your storage without
100 resizing the VMs' filesystems.
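
For example, a `qcow2` image on a directory storage only consumes space
as the guest writes data. A minimal sketch using `qemu-img` (the file
name is just an illustration):

----
# create a 32GB qcow2 image -- the file starts out nearly empty
qemu-img create -f qcow2 vm-100-disk-1.qcow2 32G

# 'virtual size' is the capacity the guest sees, 'disk size' the
# space actually allocated on the underlying storage
qemu-img info vm-100-disk-1.qcow2
----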
101
102 All storage types which have the 'Snapshots' feature also support thin
103 provisioning.
104
105
106 Storage Configuration
107 ---------------------
108
109 All {pve} related storage configuration is stored within a single text
110 file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
111 gets automatically distributed to all cluster nodes. So all nodes
112 share the same storage configuration.
113
114 Sharing storage configuration makes perfect sense for shared storage,
115 because the same 'shared' storage is accessible from all nodes. But it is
116 also useful for local storage types. In this case such local storage
117 is available on all nodes, but it is physically different and can have
118 totally different content.
119
120 Storage Pools
121 ~~~~~~~~~~~~~
122
123 Each storage pool has a `<type>`, and is uniquely identified by its `<STORAGE_ID>`. A pool configuration looks like this:
124
125 ----
126 <type>: <STORAGE_ID>
127 <property> <value>
128 <property> <value>
129 ...
130 ----
131
132 NOTE: There is one special local storage pool named `local`. It refers to
133 the directory '/var/lib/vz' and is automatically generated at installation
134 time.
135
136 The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
137 followed by a list of properties. Most properties have values, but some of
138 them come with reasonable defaults. In that case you can omit the value.
139
140 .Default storage configuration ('/etc/pve/storage.cfg')
141 ----
142 dir: local
143 path /var/lib/vz
144 content iso,vztmpl,backup
145
146 lvmthin: local-lvm
147 thinpool data
148 vgname pve
149 content rootdir,images
150 ----
151
152 Common Storage Properties
153 ~~~~~~~~~~~~~~~~~~~~~~~~~
154
155 A few storage properties are common among different storage types; a combined configuration example follows the list below.
156
157 nodes::
158
159 List of cluster node names where this storage is
160 usable/accessible. One can use this property to restrict storage
161 access to a limited set of nodes.
162
163 content::
164
165 A storage can support several content types, for example virtual disk
166 images, cdrom iso images, container templates or container root
167 directories. Not all storage types support all content types. One can set
168 this property to select what this storage is used for.
169
170 images:::
171
172 KVM/QEMU VM images.
173
174 rootdir:::
175
176 Allows storing container data.
177
178 vztmpl:::
179
180 Container templates.
181
182 backup:::
183
184 Backup files ('vzdump').
185
186 iso:::
187
188 ISO images.
189
190 shared::
191
192 Mark storage as shared.
193
194 disable::
195
196 You can use this flag to disable the storage completely.
197
198 maxfiles::
199
200 Maximal number of backup files per VM. Use `0` for unlimited.
201
202 format::
203
204 Default image format (`raw|qcow2|vmdk`).
205
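Put together, an NFS pool that only stores backups, keeps at most three
backup files per VM, and is restricted to two nodes could look like this
in '/etc/pve/storage.cfg' (a hedged illustration; server address, export
path and node names are made up):

----
nfs: backup-nfs
        server 10.0.0.10
        export /srv/pve-backup
        path /mnt/pve/backup-nfs
        content backup
        maxfiles 3
        nodes node1,node2
----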
206
207 WARNING: It is not advisable to use the same storage pool on different
208 {pve} clusters. Some storage operations need exclusive access to the
209 storage, so proper locking is required. While this is implemented
210 within a cluster, it does not work between different clusters.
211
212
213 Volumes
214 -------
215
216 We use a special notation to address storage data. When you allocate
217 data from a storage pool, it returns such a volume identifier. A volume
218 is identified by the `<STORAGE_ID>`, followed by a storage type
219 dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
220 like:
221
222 local:230/example-image.raw
223
224 local:iso/debian-501-amd64-netinst.iso
225
226 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz
227
228 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61
229
230 To get the filesystem path for a `<VOLUME_ID>` use:
231
232 pvesm path <VOLUME_ID>
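
On the default 'local' directory storage this resolves to a path below
'/var/lib/vz'. The following output is just an illustration:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso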
233
234 Volume Ownership
235 ~~~~~~~~~~~~~~~~
236
237 There exists an ownership relation for 'image' type volumes. Each such
238 volume is owned by a VM or Container. For example, volume
239 `local:230/example-image.raw` is owned by VM 230. Most storage
240 backends encode this ownership information into the volume name.
241
242 When you remove a VM or Container, the system also removes all
243 associated volumes which are owned by that VM or Container.
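
For example, destroying the VM from the example above would also free
its owned volume (a sketch, assuming VM 230 exists and you really want
to delete it together with its disks):

 qm destroy 230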
244
245
246 Using the Command Line Interface
247 --------------------------------
248
249 It is recommended to familiarize yourself with the concepts behind storage
250 pools and volume identifiers, but in real life, you are not forced to do any
251 of those low-level operations on the command line. Normally,
252 allocation and removal of volumes is done by the VM and Container
253 management tools.
254
255 Nevertheless, there is a command line tool called 'pvesm' ({pve}
256 storage manager), which is able to perform common storage management
257 tasks.
258
259
260 Examples
261 ~~~~~~~~
262
263 Add storage pools
264
265 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
266 pvesm add dir <STORAGE_ID> --path <PATH>
267 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
268 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
269 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
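
For example, adding an NFS pool named `nfs-iso` that only stores ISO
images could look like this (server address and export path are made up):

 pvesm add nfs nfs-iso --path /mnt/pve/nfs-iso --server 10.0.0.10 --export /srv/iso --content iso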
270
271 Disable storage pools
272
273 pvesm set <STORAGE_ID> --disable 1
274
275 Enable storage pools
276
277 pvesm set <STORAGE_ID> --disable 0
278
279 Change/set storage options
280
281 pvesm set <STORAGE_ID> <OPTIONS>
282 pvesm set <STORAGE_ID> --shared 1
283 pvesm set local --format qcow2
284 pvesm set <STORAGE_ID> --content iso
285
286 Remove storage pools. This does not delete any data, and does not
287 disconnect or unmount anything. It just removes the storage
288 configuration.
289
290 pvesm remove <STORAGE_ID>
291
292 Allocate volumes
293
294 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]
295
296 Allocate a 4G volume in local storage. The name is auto-generated if
297 you pass an empty string as `<name>`.
298
299 pvesm alloc local <VMID> '' 4G
300
301 Free volumes
302
303 pvesm free <VOLUME_ID>
304
305 WARNING: This really destroys all volume data.
306
307 List storage status
308
309 pvesm status
310
311 List storage contents
312
313 pvesm list <STORAGE_ID> [--vmid <VMID>]
314
315 List volumes allocated by VMID
316
317 pvesm list <STORAGE_ID> --vmid <VMID>
318
319 List ISO images
320
321 pvesm list <STORAGE_ID> --iso
322
323 List container templates
324
325 pvesm list <STORAGE_ID> --vztmpl
326
327 Show filesystem path for a volume
328
329 pvesm path <VOLUME_ID>
330
331 ifdef::wiki[]
332
333 See Also
334 --------
335
336 * link:/index.php/Storage:_Directory[Storage: Directory]
337
338 * link:/index.php/Storage:_GlusterFS[Storage: GlusterFS]
339
340 * link:/index.php/Storage:_User_Mode_iSCSI[Storage: User Mode iSCSI]
341
342 * link:/index.php/Storage:_iSCSI[Storage: iSCSI]
343
344 * link:/index.php/Storage:_LVM[Storage: LVM]
345
346 * link:/index.php/Storage:_LVM_Thin[Storage: LVM Thin]
347
348 * link:/index.php/Storage:_NFS[Storage: NFS]
349
350 * link:/index.php/Storage:_RBD[Storage: RBD]
351
352 * link:/index.php/Storage:_ZFS[Storage: ZFS]
353
354
355 endif::wiki[]
356
357 ifndef::wiki[]
358
359 // backend documentation
360
361 include::pve-storage-dir.adoc[]
362
363 include::pve-storage-nfs.adoc[]
364
365 include::pve-storage-glusterfs.adoc[]
366
367 include::pve-storage-zfspool.adoc[]
368
369 include::pve-storage-lvm.adoc[]
370
371 include::pve-storage-lvmthin.adoc[]
372
373 include::pve-storage-iscsi.adoc[]
374
375 include::pve-storage-iscsidirect.adoc[]
376
377 include::pve-storage-rbd.adoc[]
378
379
380
381 ifdef::manvolnum[]
382 include::pve-copyright.adoc[]
383 endif::manvolnum[]
384
385 endif::wiki[]
386