[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel
port of the ZFS file system is included as an optional file system,
and also as an additional selection for the root file system. There
is no need to manually compile ZFS modules - all packages are
included.

By using ZFS, it is possible to achieve maximum enterprise features
with low budget hardware, but also high performance systems by
leveraging SSD caching or even SSD only setups. ZFS can replace
costly hardware RAID cards with moderate CPU and memory load,
combined with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To
prevent data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which
has its own cache management. ZFS needs to communicate directly with
the disks. An HBA adapter, or something like an LSI controller flashed
in ``IT'' mode, is the way to go.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this
also works with the `virtio` SCSI controller type).
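
For example, assuming a test VM with ID 100 (hypothetical) and a
storage named `local-lvm`, a new 32GB disk using the `virtio` SCSI
controller type could be attached with the `qm` CLI like this:

----
# qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:32
----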


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the
sum of the capacities of all disks. But RAID0 does not add any
redundancy, so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks of the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----
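
To verify that {pve} has picked up the new storage entry, you can, for
example, query all configured storages with the `pvesm` (storage
manager) CLI tool:

----
# pvesm status
----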

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----
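
As a quick example of working with these file systems, the snapshot
feature mentioned above could be tried out on `rpool/data` (the
snapshot name `@test` is just an illustration):

----
# zfs snapshot rpool/data@test
# zfs list -t snapshot
# zfs destroy rpool/data@test
----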


Bootloader
~~~~~~~~~~

The default ZFS disk partitioning scheme does not use the first 2048
sectors. This gives enough room to install a GRUB boot partition. The
{pve} installer automatically allocates that space, and installs the
GRUB boot loader there. If you use a redundant RAID setup, it installs
the boot loader on all disks required for booting, so you can boot
even if some disks fail.

NOTE: It is not possible to use ZFS as the root file system with UEFI
boot.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value
should match the sector size of the underlying disk (the sector size
being 2 to the power of `ashift`), or be larger.

 zpool create -f -o ashift=12 <pool> <device>
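
To determine a disk's sector size before picking `ashift`, you can,
for example, query it with `lsblk` (`/dev/sda` is a placeholder); a
physical sector size of 4096 bytes corresponds to `ashift=12`
(2^12^ = 4096):

 lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sda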

To activate compression:

 zfs set compression=lz4 <pool>
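
To verify the setting, and later to see how well the stored data
compresses, the corresponding properties can be queried:

 zfs get compression,compressratio <pool>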

.Create a new pool with RAID-0

Minimum 1 Disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1

Minimum 2 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1

Minimum 3 Disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
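
.Create a new pool with RAIDZ-3

Minimum 5 Disks (analogous to the examples above; the `raidz3` vdev
type corresponds to the RAIDZ-3 level mentioned in the installer
section)

 zpool create -f -o ashift=12 <pool> raidz3 <device1> <device2> <device3> <device4> <device5>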

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

.Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition as log device to
increase the performance (use SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so it is usually quite small. The rest of the SSD
can be used as cache.

 zpool add -f <pool> log <device-part1> cache <device-part2>
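
A minimal sketch of the whole procedure, assuming the SSD is
`/dev/sdf` (hypothetical) and the host has 16GB of physical memory,
so an 8GB log partition with the rest used as cache:

----
# parted --script /dev/sdf mklabel gpt
# parted --script /dev/sdf mkpart log 1MiB 8GiB
# parted --script /dev/sdf mkpart cache 8GiB 100%
# zpool add -f <pool> log /dev/sdf1 cache /dev/sdf2
----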

.Changing a failed device

 zpool replace -f <pool> <old-device> <new-device>
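
The pool then rebuilds the data onto the new device; the progress of
this ``resilver'' can be watched with:

 zpool status -v <pool>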


Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
which you can install using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
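
For a changed `zed.rc` to take effect, the daemon can be restarted; it
is assumed here that the `zfs-zed` package ships a systemd service of
the same name:

----
# systemctl restart zfs-zed
----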


Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of
the host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
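
The value is specified in bytes, so 8GB corresponds to
8 * 1024 * 1024 * 1024 = 8589934592. Once the module option is
active, the current limit can be read back from the module parameter:

----
# cat /sys/module/zfs/parameters/zfs_arc_max
----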

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

 update-initramfs -u
====


.SWAP on ZFS

SWAP on ZFS on Linux may cause problems, like blocking the server or
generating a high IO load, often seen when starting a backup to an
external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
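
The currently active value can be checked at any time with:

 sysctl vm.swappiness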

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value                | Strategy
| vm.swappiness = 0    | The kernel will swap only to avoid an 'out of memory' condition
| vm.swappiness = 1    | Minimum amount of swapping without disabling it entirely.
| vm.swappiness = 10   | This value is sometimes recommended to improve performance when sufficient memory exists in a system.
| vm.swappiness = 60   | The default value.
| vm.swappiness = 100  | The kernel will swap aggressively.
|===========================================================