[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel port
of the ZFS file system is introduced as an optional file system and
also as an additional selection for the root file system. There is no
need to manually compile ZFS modules - all packages are included.

By using ZFS, it is possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace costly hardware
RAID cards with moderate CPU and memory load, combined with easy
management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has
its own cache management. ZFS needs to directly communicate with the
disks. An HBA adapter is the way to go, or something like an LSI
controller flashed in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).
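
For example, a nested test VM could be given a SCSI disk attached to the
`virtio` SCSI controller. The following is a hypothetical excerpt from such a
VM's configuration (`/etc/pve/qemu-server/<vmid>.conf` on the outer host; VM
ID, storage name and size are placeholders):

----
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-0,size=32G
----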


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


[[sysadmin_zfs_raid_considerations]]
ZFS RAID Level Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
`vdev`. All vdevs in a pool are used equally and the data is striped among them
(RAID0). Check the `zpool(8)` manpage for more details on vdevs.

[[sysadmin_zfs_raid_performance]]
Performance
^^^^^^^^^^^

Each `vdev` type has different performance behaviors. The two
parameters of interest are the IOPS (Input/Output Operations per Second) and
the bandwidth with which data can be written or read.

A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
to both parameters when writing data. When reading data it will behave like the
number of disks in the mirror.

A common situation is to have 4 disks. When setting it up as 2 mirror vdevs
(RAID10) the pool will have the write characteristics of two single disks in
regard to IOPS and bandwidth. For read operations it will resemble 4 single
disks.

A 'RAIDZ' vdev of any redundancy level will approximately behave like a single
disk in regard to IOPS with a lot of bandwidth. How much bandwidth depends on
the size of the RAIDZ vdev and the redundancy level.

For running VMs, IOPS is the more important metric in most situations.


[[sysadmin_zfs_raid_size_space_usage_redundancy]]
Size, Space usage and Redundancy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While a pool made of 'mirror' vdevs will have the best performance
characteristics, the usable space will be 50% of the disks available. Less if a
mirror vdev consists of more than 2 disks, for example in a 3-way mirror. At
least one healthy disk per mirror is needed for the pool to stay functional.

The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
without losing data. A special case is a 4 disk pool with RAIDZ2. In this
situation it is usually better to use 2 mirror vdevs for the better performance
as the usable space will be the same.

Another important factor when using any RAIDZ level is how ZVOL datasets, which
are used for VM disks, behave. For each data block the pool needs parity data
which is at least the size of the minimum block size defined by the `ashift`
value of the pool. With an ashift of 12 the block size of the pool is 4k. The
default block size for a ZVOL is 8k. Therefore, in a RAIDZ2 each 8k block
written will cause two additional 4k parity blocks to be written,
8k + 4k + 4k = 16k. This is of course a simplified approach and the real
situation will be slightly different with metadata, compression and such not
being accounted for in this example.

This behavior can be observed when checking the following properties of the
ZVOL:

 * `volsize`
 * `refreservation` (if the pool is not thin provisioned)
 * `used` (if the pool is thin provisioned and without snapshots present)

----
# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
----

`volsize` is the size of the disk as it is presented to the VM, while
`refreservation` shows the reserved space on the pool which includes the
expected space needed for the parity data. If the pool is thin provisioned, the
`refreservation` will be set to 0. Another way to observe the behavior is to
compare the used disk space within the VM and the `used` property. Be aware
that snapshots will skew the value.

There are a few options to counter the increased use of space:

* Increase the `volblocksize` to improve the data to parity ratio
* Use 'mirror' vdevs instead of 'RAIDZ'
* Use `ashift=9` (block size of 512 bytes)

The `volblocksize` property can only be set when creating a ZVOL. The default
value can be changed in the storage configuration. When doing this, the guest
needs to be tuned accordingly and, depending on the use case, the problem of
write amplification is just moved from the ZFS layer up to the guest.
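
For example, the default block size for new ZVOLs can be raised in the storage
configuration (a minimal sketch of `/etc/pve/storage.cfg`, assuming the zfspool
storage type's `blocksize` option; the storage name and the `16k` value are
only illustrative and need to be chosen for your workload):

----
zfspool: local-zfs
        pool rpool/data
        blocksize 16k
        sparse
        content images,rootdir
----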

Using `ashift=9` when creating the pool can lead to bad
performance, depending on the disks underneath, and cannot be changed later on.

Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where
RAIDZ performance characteristics are acceptable.


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode the
{pve} installer sets up either `grub` or `systemd-boot` as main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

[[sysadmin_zfs_create_new_zpool]]
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` should
correspond to a sector size (2 to the power of `ashift`) that is equal to or
larger than the sector size of the underlying disk.

----
# zpool create -f -o ashift=12 <pool> <device>
----

To activate compression (see section <<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----

[[sysadmin_zfs_create_new_zpool_raid0]]
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid1]]
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid10]]
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_raidz1]]
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_with_cache]]
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

As `<device>` it is possible to use more devices, as is shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----

[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive partition as log device to increase
the performance (use SSD).

As `<device>` it is possible to use more devices, as is shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
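
For example, the two partitions could be created like this (a rough sketch,
assuming the SSD is `/dev/sdX`, it holds no data you want to keep, and a
16 GiB log partition is large enough for your system):

----
# sgdisk -Z /dev/sdX                          # wipe any existing partition table
# sgdisk -n 1:0:+16G -c 1:zfs-log /dev/sdX    # partition 1: log (ZIL)
# sgdisk -n 2:0:0 -c 2:zfs-cache /dev/sdX     # partition 2: rest as cache (L2ARC)
----

Then add both partitions to the pool: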

----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

----
# zpool replace -f <pool> <old device> <new device>
----

.Changing a failed bootable device

Depending on how {pve} was installed, it is either using `grub` or `systemd-boot`
as bootloader (see xref:sysboot[Host Bootloader]).

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
----

NOTE: Use the `zpool status -v` command to monitor how far the resilvering
process of the new disk has progressed.

.With `systemd-boot`:

----
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].

.With `grub`:

----
# grub-install <new disk>
----

Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails addressed to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
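
Once installed, you can verify that the daemon is running via its systemd
unit (assuming the service name `zfs-zed`, as used by the Debian packaging):

----
# systemctl status zfs-zed
----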


[[sysadmin_zfs_limit_memory_usage]]
Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

ZFS uses '50 %' of the host memory for the **A**daptive **R**eplacement
**C**ache (ARC) by default. Allocating enough memory for the ARC is crucial for
IO performance, so reduce it with caution. As a general rule of thumb, allocate
at least +2 GiB Base + 1 GiB/TiB-Storage+. For example, if you have a pool with
+8 TiB+ of available storage space then you should use +10 GiB+ of memory for
the ARC.

You can change the ARC usage limit for the current boot (a reboot resets this
change again) by writing to the +zfs_arc_max+ module parameter directly:

----
# echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
----
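
You can read the currently active limit back from the same parameter to verify
the change (the value below corresponds to the 10 GiB example above):

----
# cat /sys/module/zfs/parameters/zfs_arc_max
10737418240
----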

To *permanently change* the ARC limits, add the following line to
`/etc/modprobe.d/zfs.conf`:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8 GiB ('8 * 2^30^').

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----

You *must reboot* to activate these changes.
====


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap-space created on a zvol may cause some problems, like blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================
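
To apply a value added to `/etc/sysctl.conf` without rebooting, you can reload
the file (the `sysctl -w` command shown above has the same immediate effect):

----
# sysctl -p
----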

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  disabled         local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  enabled          local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited
by default to child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
and the dataset needs to be mounted. This can be done in one step with:

----
# zfs mount -l tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
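
For example, to check which encryption root a guest volume belongs to and
whether its key is currently loaded, you can query the corresponding
properties (the volume name is a placeholder):

----
# zfs get encryptionroot,keystatus tank/encrypted_data/vm-<vmid>-disk-0
----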

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands and the `Encryption` section from `man zfs` for more
details and advanced usage.


[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
integer from `1` (fastest) to `9` (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.

You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.
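
To see how well the data on a dataset actually compresses, you can check the
read-only `compressratio` property:

----
# zfs get compressratio <dataset>
----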


[[sysadmin_zfs_special_device]]
ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!

.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range from `512B` to `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K blocks pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----

[[sysadmin_zfs_features]]
ZFS Pool Features
~~~~~~~~~~~~~~~~~

Changes to the on-disk format in ZFS are only made between major version changes
and are specified through *features*. All features, as well as the general
mechanism, are well documented in the `zpool-features(5)` manpage.

Since enabling new features can render a pool not importable by an older version
of ZFS, this needs to be done actively by the administrator, by running
`zpool upgrade` on the pool (see the `zpool-upgrade(8)` manpage).

Unless you need to use one of the new features, there is no upside to enabling
them.

In fact, there are some downsides to enabling new features:

* A system with root on ZFS that still boots using `grub` will become
  unbootable if a new feature is active on the rpool, due to the incompatible
  implementation of ZFS in grub.
* The system will not be able to import any upgraded pool when booted with an
  older kernel, which still ships with the old ZFS modules.
* Booting an older {pve} ISO to repair a non-booting system will likewise not
  work.

IMPORTANT: Do *not* upgrade your rpool if your system is still booted with
`grub`, as this will render your system unbootable. This includes systems
installed before {pve} 5.4, and systems booting with legacy BIOS boot (see
xref:sysboot_determine_bootloader_used[how to determine the bootloader]).

.Enable new features for a ZFS pool:
----
# zpool upgrade <pool>
----
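
Before upgrading, you can check which feature flags are not yet enabled on a
pool; running `zpool upgrade` without arguments also lists all pools that do
not have every supported feature enabled. One simple way to inspect a single
pool is:

----
# zpool get all <pool> | grep feature@
----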