Logical Volume Manager (LVM)
----------------------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

Most people install {pve} directly on a local disk. The {pve}
installation CD offers several options for local disk management, and
the current default setup uses LVM. The installer lets you select a
single disk for such a setup, and uses that disk as the physical volume
for the **V**olume **G**roup (VG) `pve`. The following output is from a
test installation using a small 8GB disk:

----
# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sda3  pve  lvm2 a--  7.87g 876.00m

# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 7.87g 876.00m
----

The installer allocates three **L**ogical **V**olumes (LV) inside this
VG:

----
# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%
  data pve  twi-a-tz--   4.38g             0.00   0.63
  root pve  -wi-ao----   1.75g
  swap pve  -wi-ao---- 896.00m
----

root:: Formatted as `ext4`, and contains the operating system.

swap:: Swap partition

data:: This volume uses LVM-thin, and is used to store VM
images. LVM-thin is preferable for this task, because it offers
efficient support for snapshots and clones (see the example below).

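As a rough illustration of that snapshot support (the thin volume name
`vm-100-disk-1` below is only a placeholder for a guest disk that {pve}
would create in the pool; in practice, guest snapshots are managed
through the GUI or `qm`/`pct` rather than by hand), a thin snapshot can
be taken almost instantly and consumes no extra space until the origin
is written to:

----
# lvcreate -s -n snap_vm-100-disk-1 pve/vm-100-disk-1
----
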
For {pve} versions up to 4.1, the installer creates a standard logical
volume called ``data'', which is mounted at `/var/lib/vz`.

Starting from version 4.2, the logical volume ``data'' is an LVM-thin pool,
used to store block-based guest images, and `/var/lib/vz` is simply a
directory on the root file system.
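On the {pve} side these two locations correspond to two entries in the
storage configuration. On a default installation `/etc/pve/storage.cfg`
typically looks similar to the following (the enabled content types may
differ on your system):

----
dir: local
        path /var/lib/vz
        content iso,vztmpl,backup

lvmthin: local-lvm
        thinpool data
        vgname pve
        content rootdir,images
----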

Hardware
~~~~~~~~

We highly recommend using a hardware RAID controller (with BBU) for
such setups. This increases performance, provides redundancy, and makes
disk replacements easier (hot-pluggable).

LVM itself does not need any special hardware, and memory requirements
are very low.


Bootloader
~~~~~~~~~~

We install two boot loaders by default. The first partition contains
the standard GRUB boot loader. The second partition is an **E**FI **S**ystem
**P**artition (ESP), which makes it possible to boot on EFI systems.


Creating a Volume Group
~~~~~~~~~~~~~~~~~~~~~~~

Let's assume we have an empty disk `/dev/sdb`, onto which we want to
create a volume group named ``vmdata''.

CAUTION: Please note that the following commands will destroy all
existing data on `/dev/sdb`.

First, create a partition.

 # sgdisk -N 1 /dev/sdb


Create a **P**hysical **V**olume (PV) without confirmation and with 250K
metadata size.

 # pvcreate --metadatasize 250k -y -ff /dev/sdb1


Create a volume group named ``vmdata'' on `/dev/sdb1`.

 # vgcreate vmdata /dev/sdb1
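To verify the result, the new physical volume and volume group should
now show up in the output of `pvs` and `vgs`:

----
# pvs /dev/sdb1
# vgs vmdata
----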


Creating an extra LV for `/var/lib/vz`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This can be easily done by creating a new thin LV.

 # lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>

A real world example:

 # lvcreate -n vz -V 10G pve/data

Now a filesystem must be created on the LV.

 # mkfs.ext4 /dev/pve/vz

Finally, it has to be mounted.

WARNING: Be sure that `/var/lib/vz` is empty. On a default
installation it is not.

To make it always accessible, add the following line to `/etc/fstab`.

 # echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
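The new volume can then be mounted without a reboot (assuming the
directory is empty, as noted above):

----
# mount /var/lib/vz
----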


Resizing the thin pool
~~~~~~~~~~~~~~~~~~~~~~

Resizing the LV and the metadata pool can be achieved with the following
command.

 # lvresize --size +<size[M,G,T]> --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>

NOTE: When extending the data pool, the metadata pool must also be
extended.
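For example, to grow the default `pve/data` pool by 10 gigabytes and
its metadata pool by 16 megabytes (the sizes are only illustrative and
must fit into the free space of the volume group):

----
# lvresize --size +10G --poolmetadatasize +16M pve/data
----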


Create an LVM-thin pool
~~~~~~~~~~~~~~~~~~~~~~~

A thin pool has to be created on top of a volume group.
How to create a volume group is described in the section ``Creating a Volume Group'' above.

 # lvcreate -L 80G -T -n vmstore vmdata
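To actually use the new thin pool for guest images, it still has to be
added to the {pve} storage configuration, either through the web
interface or on the command line. A minimal sketch (the storage ID
`vmstore` is an arbitrary name):

----
# pvesm add lvmthin vmstore --vgname vmdata --thinpool vmstore --content rootdir,images
----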