ZFS on Linux
------------
include::attributes.txt[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel
port of the ZFS file system is introduced as an optional file system
and also as an additional selection for the root file system. There
is no need to manually compile ZFS modules - all packages are
included.

By using ZFS, it's possible to achieve enterprise-grade features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace costly hardware
RAID cards at the price of moderate CPU and memory load, combined
with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To
prevent data corruption, we recommend the use of high quality ECC
RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which
has its own cache management. ZFS needs to communicate directly with
the disks. An HBA adapter is the way to go, or something like an LSI
controller flashed in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this
also works with the `virtio` SCSI controller type).
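
If the outer VM runs on {pve} itself, such a disk can be attached
with the `qm` tool, for example (the VM ID `100`, the storage
`local-lvm` and the 32GB disk size are only placeholders for this
sketch):

 qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:32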

Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the
sum of the capacities of all disks. But RAID0 does not add any
redundancy, so the failure of a single drive makes the volume
unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----
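
You can check that {pve} picked up the new storage with the `pvesm`
command (the output will differ depending on your setup):

 # pvesm status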

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

The default ZFS disk partitioning scheme does not use the first 2048
sectors. This gives enough room to install a GRUB boot partition. The
{pve} installer automatically allocates that space, and installs the
GRUB boot loader there. If you use a redundant RAID setup, it installs
the boot loader on all disks required for booting. So you can boot
even if some disks fail.

NOTE: It is not possible to use ZFS as root file system with UEFI
boot.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value
should match the sector size of the underlying disk, or be larger
(the resulting block size is 2 to the power of `ashift`, so
`ashift=12` corresponds to 4096-byte sectors).

 zpool create -f -o ashift=12 <pool> <device>

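To find a suitable `ashift`, you can first check the physical sector
size of the disk, for example with `lsblk` (assuming the disk is
`/dev/sda`):

 # lsblk -o NAME,PHY-SEC /dev/sda

A reported sector size of 512 bytes corresponds to `ashift=9`, 4096
bytes to `ashift=12`.
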
To activate compression:

 zfs set compression=lz4 <pool>

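The effect can be checked later with the read-only `compressratio`
property:

 zfs get compressratio <pool>
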
.Create a new pool with RAID-0

Minimum 1 Disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1

Minimum 2 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1

Minimum 3 Disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

As `<device>`, it is possible to use multiple devices, as shown in
``Create a new pool with RAID*''.

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

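The utilization of the cache device can then be watched per device
with `zpool iostat`:

 zpool iostat -v <pool>
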
.Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition as log device to
increase the performance (use SSD).

As `<device>`, it is possible to use multiple devices, as shown in
``Create a new pool with RAID*''.

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD
into two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

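A minimal partitioning sketch with `parted`, assuming the SSD is
`/dev/sdf` and the machine has 8GB of physical memory, so a 4GB log
partition (device name and sizes are only examples):

----
# parted /dev/sdf mklabel gpt
# parted /dev/sdf mkpart log 1MiB 4GiB
# parted /dev/sdf mkpart cache 4GiB 100%
----

The resulting partitions (here `/dev/sdf1` and `/dev/sdf2`) can then
be used as `<device-part1>` and `<device-part2>`:
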
 zpool add -f <pool> log <device-part1> cache <device-part2>

.Changing a failed device

 zpool replace -f <pool> <old device> <new-device>

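The failed device can be identified beforehand with `zpool status`,
where it shows up with a state like `DEGRADED`, `FAULTED` or
`UNAVAIL`:

 zpool status -v <pool>
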

Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors.

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------
9ee94323 275
8c1189b6 276Please note {pve} forwards mails to `root` to the email address
9ee94323
DM
277configured for the root user.
278
8c1189b6 279IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
9ee94323
DM
280other settings are optional.
281
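After changing the file, the daemon has to pick up the new
configuration; restarting it is the simplest way (the `zfs-zed`
service name comes with the ZFS on Linux packages):

 systemctl restart zfs-zed
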

Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of
the host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
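
The value is given in bytes: 8GB = 8 * 1024^3 = 8589934592. The value
for a different limit, for example 4GB, can be computed with shell
arithmetic:

 # echo $((4 * 1024 * 1024 * 1024))
 4294967296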

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

 update-initramfs -u
====


.SWAP on ZFS

SWAP on ZFS on Linux can cause problems, like blocking the server or
generating a high IO load. This is often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
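
To apply the values from `/etc/sysctl.conf` immediately, without a
reboot, run:

 sysctl -p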

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition.
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================