[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux
kernel port of the ZFS file system is introduced as an optional
file system and also as an additional selection for the root
file system. There is no need to manually compile ZFS modules - all
packages are included.

By using ZFS, it is possible to achieve maximum enterprise features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace costly hardware
RAID cards with moderate CPU and memory load, combined with easy
management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

The default ZFS disk partitioning scheme does not use the first 2048
sectors. This gives enough room to install a GRUB boot partition. The
{pve} installer automatically allocates that space, and installs the
GRUB boot loader there. If you use a redundant RAID setup, it installs
the boot loader on all disks required for booting, so you can boot
even if some disks fail.

NOTE: It is not possible to use ZFS as the root file system with UEFI
boot.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value
should be chosen so that 2 to the power of `ashift` is equal to or
larger than the sector size of the underlying disk.

 zpool create -f -o ashift=12 <pool> <device>

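If you are unsure about the sector size of a disk, you can query it
beforehand, for example with `blockdev` (a quick check, assuming the disk
in question is `/dev/sda`):

 blockdev --getpbsz /dev/sda

An `ashift` of 12 (4096 byte sectors) is a common choice for modern disks.
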
To activate compression:

 zfs set compression=lz4 <pool>

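To verify which compression algorithm is active, you can query the
property afterwards (shown here for a hypothetical pool named `tank`):

 zfs get compression tank
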
.Create a new pool with RAID-0

Minimum 1 Disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1

Minimum 2 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1

Minimum 3 Disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

.Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition as log device to
increase the performance (use an SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

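Whether cache and log devices were attached correctly, and how they are
used, can be inspected per device with `zpool iostat` (shown here for a
hypothetical pool named `tank`):

 zpool iostat -v tank
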
.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
2 partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

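For example, on a host with 16GB of physical memory, you could create an
8GB log partition and use the remainder as cache (a sketch using `sgdisk`
from the `gdisk` package; the device name `/dev/sdX` is a placeholder):

 sgdisk -n 1:0:+8G -n 2:0:0 /dev/sdX

Afterwards, add both partitions to the pool:
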
 zpool add -f <pool> log <device-part1> cache <device-part2>

.Changing a failed device

 zpool replace -f <pool> <old-device> <new-device>

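For example, to replace a failed disk in a pool and follow the resilvering
progress afterwards (pool and device names are purely illustrative):

 zpool replace -f tank sdb sdc
 zpool status tank
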

Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

----
ZED_EMAIL_ADDR="root"
----

Please note that {pve} forwards mails sent to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.

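To quickly check that mail delivery works, you can send a test message to
`root` (this assumes a `mail` command, e.g. from the `bsd-mailx` package,
is available on the system):

 echo "test" | mail -s "zed test" root
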

Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good practice to use at most 50 percent (which is the default) of
the system memory for the ZFS ARC, to prevent performance degradation of
the host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

----
options zfs zfs_arc_max=8589934592
----

This example setting limits the usage to 8GB (8 * 1024^3^ = 8589934592 bytes).

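The limit can also be applied to the running system without a reboot by
writing to the module parameter directly (a non-persistent change; note
that a full ARC may shrink to the new limit only gradually):

 echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
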
[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

 update-initramfs -u
====


.SWAP on ZFS

SWAP on ZFS on Linux may cause some problems, like blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

----
vm.swappiness = 10
----

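Settings in `/etc/sysctl.conf` are applied at boot; to load the file
immediately, you can run:

 sysctl -p
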
.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================