[[chapter-storage]]
ifdef::manvolnum[]
pvesm({manvolnum})
==================
include::attributes.txt[]

NAME
----

pvesm - Proxmox VE Storage Manager


SYNOPSIS
--------

include::pvesm.1-synopsis.adoc[]

DESCRIPTION
-----------
endif::manvolnum[]

ifndef::manvolnum[]
{pve} Storage
=============
include::attributes.txt[]
endif::manvolnum[]

The {pve} storage model is very flexible. Virtual machine images
can either be stored on one or several local storages, or on shared
storage like NFS or iSCSI (NAS, SAN). There are no limits, and you may
configure as many storage pools as you like. You can use all
storage technologies available for Debian Linux.

One major benefit of storing VMs on shared storage is the ability to
live-migrate running machines without any downtime, as all nodes in
the cluster have direct access to VM disk images. There is no need to
copy VM image data, so live migration is very fast in that case.

The storage library (package 'libpve-storage-perl') uses a flexible
plugin system to provide a common interface to all storage types. This
can easily be adapted to include further storage types in the future.


Storage Types
-------------

There are basically two different classes of storage types:

Block level storage::

Allows you to store large 'raw' images. It is usually not possible to
store other files (ISO images, backups, ...) on such storage types. Most
modern block level storage implementations support snapshots and clones.
RADOS, Sheepdog and DRBD are distributed systems, replicating storage
data to different nodes.

File level storage::

They allow access to a full featured (POSIX) file system. They are
more flexible, and allow you to store any content type. ZFS is
probably the most advanced system, and it has full support for
snapshots and clones.


.Available storage types
[width="100%",cols="<d,1*m,4*d",options="header"]
|===========================================================
|Description    |PVE type    |Level |Shared|Snapshots|Stable
|ZFS (local)    |zfspool     |file  |no    |yes      |yes
|Directory      |dir         |file  |no    |no       |yes
|NFS            |nfs         |file  |yes   |no       |yes
|GlusterFS      |glusterfs   |file  |yes   |no       |yes
|LVM            |lvm         |block |no    |no       |yes
|LVM-thin       |lvmthin     |block |no    |yes      |beta
|iSCSI/kernel   |iscsi       |block |yes   |no       |yes
|iSCSI/libiscsi |iscsidirect |block |yes   |no       |yes
|Ceph/RBD       |rbd         |block |yes   |yes      |yes
|Sheepdog       |sheepdog    |block |yes   |yes      |beta
|DRBD9          |drbd        |block |yes   |yes      |beta
|ZFS over iSCSI |zfs         |block |yes   |yes      |yes
|===========================================================

TIP: It is possible to use LVM on top of an iSCSI storage. That way
you get a 'shared' LVM storage.
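
A minimal sketch of such a setup in '/etc/pve/storage.cfg', assuming
hypothetical portal, target and base volume names (the LVM pool
references the iSCSI LUN via its `base` property):

----
iscsi: san
        portal 10.0.0.1
        target iqn.2001-04.com.example:storage.lun1
        content none

lvm: san-lvm
        vgname vgsan
        base san:0.0.0.scsi-36001405abcdef
        shared 1
        content images
----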

Storage Configuration
---------------------

All {pve} related storage configuration is stored within a single text
file at '/etc/pve/storage.cfg'. As this file is within '/etc/pve/', it
gets automatically distributed to all cluster nodes. So all nodes
share the same storage configuration.

Sharing storage configuration makes perfect sense for shared storage,
because the same 'shared' storage is accessible from all nodes. But it
is also useful for local storage types. In this case such local storage
is available on all nodes, but it is physically different and can have
totally different content.

Storage Pools
~~~~~~~~~~~~~

Each storage pool has a `<type>`, and is uniquely identified by its
`<STORAGE_ID>`. A pool configuration looks like this:

----
<type>: <STORAGE_ID>
        <property> <value>
        <property> <value>
        ...
----

NOTE: There is one special local storage pool named `local`. It refers to
the directory '/var/lib/vz' and is automatically generated at installation
time.

The `<type>: <STORAGE_ID>` line starts the pool definition, which is then
followed by a list of properties. Most properties have values, but some of
them come with reasonable defaults. In that case you can omit the value.

.Default storage configuration ('/etc/pve/storage.cfg')
====
 dir: local
        path /var/lib/vz
        content backup,iso,vztmpl,images,rootdir
        maxfiles 3
====
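
Entries for other storage types look similar. As a further sketch, a
hypothetical NFS pool for ISO images and container templates could be
configured like this (server address and export path are placeholders):

.Example NFS storage configuration
====
 nfs: iso-templates
        path /mnt/pve/iso-templates
        server 10.0.0.10
        export /space/iso-templates
        content iso,vztmpl
====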

Common Storage Properties
~~~~~~~~~~~~~~~~~~~~~~~~~

A few storage properties are common among different storage types. An
example combining several of them follows this list.

nodes::

List of cluster node names where this storage is
usable/accessible. One can use this property to restrict storage
access to a limited set of nodes.

content::

A storage can support several content types, for example virtual disk
images, CD-ROM ISO images, container templates or container root
directories. Not all storage types support all content types. One can
set this property to select what this storage is used for.

images:::

KVM-Qemu VM images.

rootdir:::

Allows you to store container data.

vztmpl:::

Container templates.

backup:::

Backup files ('vzdump').

iso:::

ISO images.

shared::

Mark storage as shared.

disable::

You can use this flag to disable the storage completely.

maxfiles::

Maximal number of backup files per VM. Use `0` for unlimited.

format::

Default image format (`raw|qcow2|vmdk`).
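
To illustrate, the following hypothetical entry combines several of
these common properties: a directory storage used only for backups,
restricted to two nodes, and keeping at most seven backup files per VM
(storage name, path and node names are placeholders):

.Example entry using common properties
====
 dir: backup
        path /mnt/backup
        content backup
        maxfiles 7
        nodes node1,node2
====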

WARNING: It is not advisable to use the same storage pool on different
{pve} clusters. Some storage operations need exclusive access to the
storage, so proper locking is required. While this is implemented
within a cluster, it does not work between different clusters.


Volumes
-------

We use a special notation to address storage data. When you allocate
data from a storage pool, it returns such a volume identifier. A volume
is identified by the `<STORAGE_ID>`, followed by a storage type
dependent volume name, separated by a colon. A valid `<VOLUME_ID>` looks
like:

 local:230/example-image.raw

 local:iso/debian-501-amd64-netinst.iso

 local:vztmpl/debian-5.0-joomla_1.5.9-1_i386.tar.gz

 iscsi-storage:0.0.2.scsi-14f504e46494c4500494b5042546d2d646744372d31616d61

To get the filesystem path for a `<VOLUME_ID>` use:

 pvesm path <VOLUME_ID>
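
For example, for the ISO image above, this resolves to the file below
the storage's directory (a sketch assuming the default `local` storage
path, where ISO images are kept under 'template/iso/'):

 # pvesm path local:iso/debian-501-amd64-netinst.iso
 /var/lib/vz/template/iso/debian-501-amd64-netinst.iso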

Volume Ownership
~~~~~~~~~~~~~~~~

There exists an ownership relation for 'image' type volumes. Each such
volume is owned by a VM or Container. For example volume
`local:230/example-image.raw` is owned by VM 230. Most storage
backends encode this ownership information into the volume name.

When you remove a VM or Container, the system also removes all
associated volumes which are owned by that VM or Container.


Using the Command Line Interface
--------------------------------

It is recommended to familiarize yourself with the concepts behind storage
pools and volume identifiers, but in real life, you are not forced to do
any of those low level operations on the command line. Normally,
allocation and removal of volumes is done by the VM and Container
management tools.

Nevertheless, there is a command line tool called 'pvesm' ({pve}
storage manager), which is able to perform common storage management
tasks.


Examples
~~~~~~~~

Add storage pools

 pvesm add <TYPE> <STORAGE_ID> <OPTIONS>
 pvesm add dir <STORAGE_ID> --path <PATH>
 pvesm add nfs <STORAGE_ID> --path <PATH> --server <SERVER> --export <EXPORT>
 pvesm add lvm <STORAGE_ID> --vgname <VGNAME>
 pvesm add iscsi <STORAGE_ID> --portal <HOST[:PORT]> --target <TARGET>
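
For instance, a hypothetical NFS storage for backups could be added
like this (server address and export path are placeholders; `--content`
sets the allowed content types right away):

 pvesm add nfs backup-nfs --path /mnt/pve/backup-nfs --server 10.0.0.10 --export /space/backups --content backup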

Disable storage pools

 pvesm set <STORAGE_ID> --disable 1

Enable storage pools

 pvesm set <STORAGE_ID> --disable 0

Change/set storage options

 pvesm set <STORAGE_ID> <OPTIONS>
 pvesm set <STORAGE_ID> --shared 1
 pvesm set local --format qcow2
 pvesm set <STORAGE_ID> --content iso

Remove storage pools. This does not delete any data, and does not
disconnect or unmount anything. It just removes the storage
configuration.

 pvesm remove <STORAGE_ID>

Allocate volumes

 pvesm alloc <STORAGE_ID> <VMID> <name> <size> [--format <raw|qcow2>]

Allocate a 4G volume in local storage. The name is auto-generated if
you pass an empty string as `<name>`.

 pvesm alloc local <VMID> '' 4G
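
With a hypothetical VMID of 100, the resulting volume ID would look
like the following (assuming the directory backend's usual
`vm-<VMID>-disk-<N>` naming pattern):

 local:100/vm-100-disk-1.raw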

Free volumes

 pvesm free <VOLUME_ID>

WARNING: This really destroys all volume data.

List storage status

 pvesm status

List storage contents

 pvesm list <STORAGE_ID> [--vmid <VMID>]

List volumes allocated by VMID

 pvesm list <STORAGE_ID> --vmid <VMID>

List ISO images

 pvesm list <STORAGE_ID> --iso

List container templates

 pvesm list <STORAGE_ID> --vztmpl

Show filesystem path for a volume

 pvesm path <VOLUME_ID>
// backend documentation

include::pve-storage-dir.adoc[]

include::pve-storage-nfs.adoc[]

include::pve-storage-glusterfs.adoc[]

include::pve-storage-zfspool.adoc[]

include::pve-storage-lvm.adoc[]

include::pve-storage-iscsi.adoc[]

include::pve-storage-iscsidirect.adoc[]

include::pve-storage-rbd.adoc[]


ifdef::manvolnum[]
include::pve-copyright.adoc[]
endif::manvolnum[]