include::attributes.txt[]
[[chapter-storage]]
ifdef::manvolnum[]
PVE({manvolnum})
================

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images can
be stored on one or several local storages, or on shared storage like
NFS or iSCSI (NAS, SAN). There are no limits, and you may configure as
many storage pools as you like. You can use all storage technologies
available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and
clones. RADOS, Sheepdog and DRBD are distributed systems, replicating
storage data to different nodes.

File level storage::

These allow access to a full featured (POSIX) file system. They are
more flexible and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |beta
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.

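A minimal sketch of such a combination in '/etc/pve/storage.cfg' could
look like this (the portal address, target name, base volume and volume
group name are invented for illustration; the 'base' property points
the LVM pool at a volume of the iSCSI storage):

----
iscsi: mysan
        portal 10.0.0.1
        target iqn.2016-01.com.example:storage
        content none

lvm: mysan-lvm
        base mysan:0.0.0.scsi-example-lun
        vgname myvg
        shared 1
        content images
----
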
Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local
storage is available on all nodes, but it is physically different and
can have totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers
to the directory '/var/lib/vz' and is automatically generated at
installation time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is
then followed by a list of properties. Most properties have values, but
some of them come with reasonable defaults. In that case you can omit
the value.

.Default storage configuration ('/etc/pve/storage.cfg')
====
dir: local
        path /var/lib/vz
        content backup,iso,vztmpl,images,rootdir
        maxfiles 3
====

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types; a
combined example follows the list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximum number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).

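Putting several of these together, a pool entry restricted to two nodes
could look like the following sketch (server address, export path and
node names are invented for illustration):

----
nfs: backup-pool
        path /mnt/pve/backup-pool
        server 10.0.0.10
        export /exports/backup
        content backup
        maxfiles 2
        nodes node1,node2
----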

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>`
looks like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>

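For example, on the default `local` storage (a directory storage below
'/var/lib/vz'), ISO images are kept in the 'template/iso' subdirectory,
so the path resolves as follows:

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso
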
Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.

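On a directory storage this encoding is directly visible in the
filesystem path, where the owner VMID appears as a subdirectory:

 # pvesm path local:230/example-image.raw
 /var/lib/vz/images/230/example-image.raw
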

Using the Command Line Interface
--------------------------------

It is important to understand the concepts behind storage pools and
volume identifiers, but in real life, you are not forced to do any of
those low level operations on the command line. Normally, allocation
and removal of volumes is done by the VM and Container management
tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>

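A concrete invocation could look like this (server address and export
path are invented for illustration):

 pvesm add nfs backup-pool --path /mnt/pve/backup-pool --server 10.0.0.10 --export /exports/backup --content backup
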
Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G

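Auto-generated names follow the `vm-<VMID>-disk-<N>` convention, so a
full allocate/inspect/free cycle could look like this sketch (the exact
volume name depends on the storage type and on already existing disks):

 pvesm alloc local 100 '' 4G
 pvesm list local --vmid 100
 pvesm free local:100/vm-100-disk-1.raw
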
Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>

// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]