[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel port
of the ZFS file system is introduced as an optional file system and
also as an additional selection for the root file system. There is no
need to manually compile ZFS modules - all packages are included.

By using ZFS, it's possible to achieve maximum enterprise features with
low budget hardware, and also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace expensive hardware
RAID cards with moderate CPU and memory load, combined with easy
management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has
its own cache management. ZFS needs to communicate directly with the
disks. An HBA adapter, or something like an LSI controller flashed in
``IT'' mode, is the way to go.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks of the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----
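
As a quick check that {pve} picked up the new storage, you can, for
example, query the storage status:

----
# pvesm status
----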

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


[[sysadmin_zfs_raid_considerations]]
ZFS RAID Level Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
`vdev`. All vdevs in a pool are used equally and the data is striped among them
(RAID0). Check the `zpool(8)` manpage for more details on vdevs.
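
To see how the vdevs of an existing pool are laid out and how much space
each of them uses, you can, for example, list them with `zpool`:

----
# zpool list -v <pool>
----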

[[sysadmin_zfs_raid_performance]]
Performance
^^^^^^^^^^^

Each `vdev` type has different performance behaviors. The two
parameters of interest are the IOPS (Input/Output Operations per Second) and
the bandwidth with which data can be written or read.

A 'mirror' vdev (RAID1) will approximately behave like a single disk in regards
to both parameters when writing data. When reading data, it will behave like
the number of disks in the mirror.

A common situation is to have 4 disks. When setting them up as 2 mirror vdevs
(RAID10), the pool will have the write characteristics of two single disks in
regards to IOPS and bandwidth. For read operations, it will resemble 4 single
disks.

A 'RAIDZ' of any redundancy level will approximately behave like a single disk
in regards to IOPS, with a lot of bandwidth. How much bandwidth depends on the
size of the RAIDZ vdev and the redundancy level.

For running VMs, IOPS is the more important metric in most situations.


[[sysadmin_zfs_raid_size_space_usage_redundancy]]
Size, Space usage and Redundancy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While a pool made of 'mirror' vdevs will have the best performance
characteristics, the usable space will be 50% of the total disk capacity. It is
less if a mirror vdev consists of more than 2 disks, for example in a 3-way
mirror. At least one healthy disk per mirror is needed for the pool to stay
functional.

The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
without losing data. A special case is a 4 disk pool with RAIDZ2. In this
situation it is usually better to use 2 mirror vdevs for the better performance,
as the usable space will be the same.

Another important factor when using any RAIDZ level is how ZVOL datasets, which
are used for VM disks, behave. For each data block, the pool needs parity data
which is at least the size of the minimum block size defined by the `ashift`
value of the pool. With an ashift of 12, the block size of the pool is 4k. The
default block size for a ZVOL is 8k. Therefore, in a RAIDZ2, each 8k block
written will cause two additional 4k parity blocks to be written,
8k + 4k + 4k = 16k. This is of course a simplified approach, and the real
situation will be slightly different, with metadata, compression and such not
being accounted for in this example.

This behavior can be observed when checking the following properties of the
ZVOL:

* `volsize`
* `refreservation` (if the pool is not thin provisioned)
* `used` (if the pool is thin provisioned and without snapshots present)

----
# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
----

`volsize` is the size of the disk as it is presented to the VM, while
`refreservation` shows the reserved space on the pool which includes the
expected space needed for the parity data. If the pool is thin provisioned, the
`refreservation` will be set to 0. Another way to observe the behavior is to
compare the used disk space within the VM and the `used` property. Be aware
that snapshots will skew the value.

There are a few options to counter the increased use of space:

* Increase the `volblocksize` to improve the data to parity ratio
* Use 'mirror' vdevs instead of 'RAIDZ'
* Use `ashift=9` (block size of 512 bytes)

The `volblocksize` property can only be set when creating a ZVOL. The default
value can be changed in the storage configuration. When doing this, the guest
needs to be tuned accordingly and, depending on the use case, the problem of
write amplification is just moved from the ZFS layer up to the guest.
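
For example, assuming a ZFS storage named `local-zfs` (adapt the name to
your setup), the following sets the default block size for newly created
disks to 16k:

----
# pvesm set local-zfs --blocksize 16k
----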

Using `ashift=9` when creating the pool can lead to bad
performance, depending on the disks underneath, and cannot be changed later on.

Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where the
performance characteristics of RAIDZ are acceptable.


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode, the
{pve} installer sets up either `grub` or `systemd-boot` as the main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

[[sysadmin_zfs_create_new_zpool]]
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` should be
chosen so that the resulting sector size (2 to the power of `ashift`) is
the same as, or larger than, the sector size of the underlying disk.

----
# zpool create -f -o ashift=12 <pool> <device>
----
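
To find out the sector size of a disk, and thus which `ashift` to use, you
can, for example, query it with `lsblk`; `/dev/sdX` is a placeholder for
the disk in question:

----
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
----

A physical sector size of 4096 bytes corresponds to `ashift=12`.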

To activate compression (see section <<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----

[[sysadmin_zfs_create_new_zpool_raid0]]
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid1]]
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid10]]
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_raidz1]]
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

[[sysadmin_zfs_create_new_zpool_raidz2]]
Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_with_cache]]
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----

[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive partition as log device to
increase the performance (use SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

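For example, to create the two partitions with `sgdisk` (from the same
package as `gdisk`) - `/dev/sdX` and the `16G` log size are placeholders
you need to adapt to your setup:

----
# sgdisk -n 1:0:+16G /dev/sdX   # partition 1: log
# sgdisk -n 2:0:0 /dev/sdX      # partition 2: cache, rest of the disk
----

Afterwards, attach both partitions to the pool:
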
----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

----
# zpool replace -f <pool> <old device> <new device>
----

.Changing a failed bootable device

Depending on how {pve} was installed, it is either using `grub` or
`systemd-boot` as bootloader (see xref:sysboot[Host Bootloader]).

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
----

NOTE: Use the `zpool status -v` command to monitor how far the resilvering
process of the new disk has progressed.

.With `systemd-boot`:

----
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks created by the {pve} installer since version 5.4. For details,
see xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].

.With `grub`:

----
# grub-install <new disk>
----

Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
which you can install using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
your favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
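
One way to test the mail delivery is to trigger an event, for example by
starting a scrub of a pool (`rpool` here is just an example). Note that
whether a mail is sent for a healthy, uneventful scrub also depends on the
`ZED_NOTIFY_VERBOSE` setting in `zed.rc`:

----
# zpool scrub rpool
----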


[[sysadmin_zfs_limit_memory_usage]]
Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
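
The value is given in bytes; 8GB is 8 * 1024 * 1024 * 1024 bytes, which
can, for example, be computed in a shell:

----
# echo $((8 * 1024 * 1024 * 1024))
8589934592
----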

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----
====


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap-space created on a zvol may cause problems, like blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap
device. You can leave some space free for this purpose in the advanced
options of the installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
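
To apply the value from `/etc/sysctl.conf` immediately, without rebooting,
you can reload the file:

----
# sysctl -p
----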

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.
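
For example, to manually load the keys for all datasets after booting
(prompting for passphrases where needed):

----
# zfs load-key -a
----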

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited by
default to child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
and the dataset needs to be mounted. This can be done in one step with:

----
# zfs mount -l tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands and the `Encryption` section from `man zfs` for more
details and advanced usage.


[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
integer from `1` (fastest) to `9` (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.

You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.
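
To check how well existing data compresses, you can query the compression
ratio of a dataset:

----
# zfs get compression,compressratio <dataset>
----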


[[sysadmin_zfs_special_device]]
ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!

.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range between `512B` and `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K blocks pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----
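
To verify which values are in effect for a dataset, you can query both
properties:

----
# zfs get special_small_blocks,recordsize <pool>/<filesystem>
----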