Logical Volume Manager (LVM)
----------------------------
Most people install {pve} directly on a local disk. The {pve}
installation CD offers several options for local disk management, and
the current default setup uses LVM. The installer lets you select a
single disk for such a setup, and uses that disk as the physical volume for
the **V**olume **G**roup (VG) `pve`. The following output is from a
test installation using a small 8GB disk:
----
# pvs
  PV         VG   Fmt  Attr PSize PFree
  /dev/sda3  pve  lvm2 a--  7.87g 876.00m

# vgs
  VG   #PV #LV #SN Attr   VSize VFree
  pve    1   3   0 wz--n- 7.87g 876.00m
----
The installer allocates three **L**ogical **V**olumes (LV) inside this
VG:

----
# lvs
  LV   VG   Attr       LSize   Pool Origin Data%  Meta%
  data pve  twi-a-tz--   4.38g             0.00   0.63
  root pve  -wi-ao----   1.75g
  swap pve  -wi-ao---- 896.00m
----
root:: Formatted as `ext4`, and contains the operating system.

swap:: Swap partition.

data:: This volume uses LVM-thin, and is used to store VM
images. LVM-thin is preferable for this task, because it offers
efficient support for snapshots and clones.
For {pve} versions up to 4.1, the installer creates a standard logical
volume called ``data'', which is mounted at `/var/lib/vz`.

Starting from version 4.2, the logical volume ``data'' is an LVM-thin pool,
used to store block-based guest images, and `/var/lib/vz` is simply a
directory on the root file system.
We highly recommend using a hardware RAID controller (with BBU) for
such setups. This increases performance, provides redundancy, and makes
disk replacements easier (hot-pluggable).
LVM itself does not need any special hardware, and memory requirements
are very low.
We install two boot loaders by default. The first partition contains
the standard GRUB boot loader. The second partition is an **E**FI **S**ystem
**P**artition (ESP), which makes it possible to boot on EFI systems and to
apply xref:sysadmin_firmware_persistent[persistent firmware updates].
Creating a Volume Group
~~~~~~~~~~~~~~~~~~~~~~~
Let's assume we have an empty disk `/dev/sdb`, onto which we want to
create a volume group named ``vmdata''.
CAUTION: Please note that the following commands will destroy all
existing data on `/dev/sdb`.
First create a partition:

----
# sgdisk -N 1 /dev/sdb
----
Create a **P**hysical **V**olume (PV) without confirmation and with 250K
metadata size:

----
# pvcreate --metadatasize 250k -y -ff /dev/sdb1
----
Create a volume group named ``vmdata'' on `/dev/sdb1`:

----
# vgcreate vmdata /dev/sdb1
----
Creating an extra LV for `/var/lib/vz`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This can easily be done by creating a new thin LV:

----
# lvcreate -n <Name> -V <Size[M,G,T]> <VG>/<LVThin_pool>
----
A real-world example:

----
# lvcreate -n vz -V 10G pve/data
----
Now a filesystem must be created on the LV:

----
# mkfs.ext4 /dev/pve/vz
----
Finally, the volume has to be mounted.

WARNING: Be sure that `/var/lib/vz` is empty. On a default
installation it's not.

To make it always accessible, add the following line to `/etc/fstab`:

----
# echo '/dev/pve/vz /var/lib/vz ext4 defaults 0 2' >> /etc/fstab
----
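With the entry in `/etc/fstab`, the mount can then be activated right away; a minimal sketch, assuming `/var/lib/vz` is empty at this point:

----
# mount /var/lib/vz
----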
Resizing the thin pool
~~~~~~~~~~~~~~~~~~~~~~
Resize the LV and the metadata pool with the following command:

----
# lvresize --size +<size[M,G,T]> --poolmetadatasize +<size[M,G]> <VG>/<LVThin_pool>
----
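As a concrete illustration of the template above (the sizes here are hypothetical, and the pool name assumes the default installation), growing the `pve/data` pool by 10G of data space and 144M of metadata space would look like:

----
# lvresize --size +10G --poolmetadatasize +144M pve/data
----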
NOTE: When extending the data pool, the metadata pool must also be
extended.
Create an LVM-thin pool
~~~~~~~~~~~~~~~~~~~~~~~
A thin pool has to be created on top of a volume group; see the section
'Creating a Volume Group' above for how to create one.
----
# lvcreate -L 80G -T -n vmstore vmdata
----
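Thin volumes can then be allocated from the new pool and checked with `lvs`; the volume name and size below are hypothetical:

----
# lvcreate -n vm-disk-1 -V 20G vmdata/vmstore
# lvs vmdata
----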