ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux
kernel port of the ZFS file system is introduced as an optional
file system and also as an additional selection for the root
file system. There is no need to manually compile ZFS modules - all
packages are included.
By using ZFS, it's possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace expensive hardware
RAID cards at the cost of moderate CPU and memory load, combined with easy
management.
21 .General ZFS advantages
23 * Easy configuration and management with {pve} GUI and CLI.
27 * Protection against data corruption
29 * Data compression on file system level
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2, RAIDZ-3,
dRAID, dRAID2, dRAID3
38 * Can use SSD for cache
42 * Continuous integrity checking
44 * Designed for high storage capacities
46 * Asynchronous replication over network
ZFS depends heavily on memory, so you need at least 8 GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.
62 If you use a dedicated cache and/or log disk, you should use an
63 enterprise class SSD. This can
64 increase the overall performance significantly.
IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter or something like an LSI controller flashed in ``IT'' mode is more
appropriate.
71 If you are experimenting with an installation of {pve} inside a VM
72 (Nested Virtualization), don't use `virtio` for disks of that VM,
73 as they are not supported by ZFS. Use IDE or SCSI instead (also works
74 with the `virtio` SCSI controller type).
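For example, assuming a nested test VM with ID 100 and a storage named
`local-lvm` (both placeholders), a disk could be attached through the VirtIO
SCSI controller instead of VirtIO Block roughly like this:

# qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:32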
77 Installation as Root File System
78 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation time:
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.
89 RAID1:: Also called ``mirroring''. Data is written identically to all
90 disks. This mode requires at least 2 disks with the same size. The
91 resulting capacity is that of a single disk.
93 RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.
95 RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.
97 RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.
99 RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.
The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.
105 Another subvolume called `rpool/data` is created to store VM
106 images. In order to use that with the {pve} tools, the installer
107 creates the following configuration entry in `/etc/pve/storage.cfg`:
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
After installation, you can view your ZFS pool status using the
`zpool status` command:

# zpool status
  pool: rpool
 state: ONLINE
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
153 [[sysadmin_zfs_raid_considerations]]
154 ZFS RAID Level Considerations
155 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
157 There are a few factors to take into consideration when choosing the layout of
158 a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
159 `vdev`. All vdevs in a pool are used equally and the data is striped among them
160 (RAID0). Check the `zpoolconcepts(7)` manpage for more details on vdevs.
[[sysadmin_zfs_raid_performance]]
Performance
^^^^^^^^^^^
166 Each `vdev` type has different performance behaviors. The two
167 parameters of interest are the IOPS (Input/Output Operations per Second) and
168 the bandwidth with which data can be written or read.
170 A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
171 to both parameters when writing data. When reading data the performance will
172 scale linearly with the number of disks in the mirror.
A common situation is to have 4 disks. When setting it up as 2 mirror vdevs
(RAID10) the pool will have the write characteristics of two single disks in
regard to IOPS and bandwidth. For read operations it will resemble 4 single
disks.
179 A 'RAIDZ' of any redundancy level will approximately behave like a single disk
180 in regard to IOPS with a lot of bandwidth. How much bandwidth depends on the
181 size of the RAIDZ vdev and the redundancy level.
183 For running VMs, IOPS is the more important metric in most situations.
186 [[sysadmin_zfs_raid_size_space_usage_redundancy]]
187 Size, Space usage and Redundancy
188 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
While a pool made of 'mirror' vdevs will have the best performance
characteristics, the usable space will be 50% of the total disk capacity; less
if a mirror vdev consists of more than 2 disks, for example in a 3-way mirror.
At least one healthy disk per mirror is needed for the pool to stay functional.
195 The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
196 the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
197 without losing data. A special case is a 4 disk pool with RAIDZ2. In this
198 situation it is usually better to use 2 mirror vdevs for the better performance
199 as the usable space will be the same.
201 Another important factor when using any RAIDZ level is how ZVOL datasets, which
202 are used for VM disks, behave. For each data block the pool needs parity data
203 which is at least the size of the minimum block size defined by the `ashift`
204 value of the pool. With an ashift of 12 the block size of the pool is 4k. The
205 default block size for a ZVOL is 8k. Therefore, in a RAIDZ2 each 8k block
206 written will cause two additional 4k parity blocks to be written,
207 8k + 4k + 4k = 16k. This is of course a simplified approach and the real
208 situation will be slightly different with metadata, compression and such not
209 being accounted for in this example.
This behavior can be observed when checking the following properties of the
ZVOL:
215 * `refreservation` (if the pool is not thin provisioned)
216 * `used` (if the pool is thin provisioned and without snapshots present)
219 # zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
222 `volsize` is the size of the disk as it is presented to the VM, while
223 `refreservation` shows the reserved space on the pool which includes the
224 expected space needed for the parity data. If the pool is thin provisioned, the
225 `refreservation` will be set to 0. Another way to observe the behavior is to
226 compare the used disk space within the VM and the `used` property. Be aware
227 that snapshots will skew the value.
229 There are a few options to counter the increased use of space:
231 * Increase the `volblocksize` to improve the data to parity ratio
232 * Use 'mirror' vdevs instead of 'RAIDZ'
233 * Use `ashift=9` (block size of 512 bytes)
235 The `volblocksize` property can only be set when creating a ZVOL. The default
236 value can be changed in the storage configuration. When doing this, the guest
237 needs to be tuned accordingly and depending on the use case, the problem of
238 write amplification is just moved from the ZFS layer up to the guest.
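For example, assuming a ZFS storage entry named `local-zfs` (the name and block
size are placeholders), the default for newly created guest volumes could be
changed with something like:

# pvesm set local-zfs --blocksize 16k

Existing ZVOLs keep their `volblocksize`; only disks created afterwards use the
new value.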
240 Using `ashift=9` when creating the pool can lead to bad
241 performance, depending on the disks underneath, and cannot be changed later on.
Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where
the performance characteristics of RAIDZ are acceptable.
ZFS dRAID
~~~~~~~~~

In a ZFS dRAID (declustered RAID) the hot spare drive(s) participate in the RAID.
252 Their spare capacity is reserved and used for rebuilding when one drive fails.
253 This provides, depending on the configuration, faster rebuilding compared to a
254 RAIDZ in case of drive failure. More information can be found in the official
255 OpenZFS documentation. footnote:[OpenZFS dRAID
256 https://openzfs.github.io/openzfs-docs/Basic%20Concepts/dRAID%20Howto.html]
NOTE: dRAID is intended for setups with more than 10-15 disks. A RAIDZ
setup should be better for a smaller number of disks in most use cases.
261 NOTE: The GUI requires one more disk than the minimum (i.e. dRAID1 needs 3). It
262 expects that a spare disk is added as well.
* `dRAID1` or `dRAID`: requires at least 2 disks, one can fail before data is lost
266 * `dRAID2`: requires at least 3 disks, two can fail before data is lost
267 * `dRAID3`: requires at least 4 disks, three can fail before data is lost
Additional information can be found in the `zpoolconcepts(7)` manual page.
278 The number of `spares` tells the system how many disks it should keep ready in
279 case of a disk failure. The default value is 0 `spares`. Without spares,
280 rebuilding won't get any speed benefits.
`data` defines the number of devices in a redundancy group. The default value is
8, unless `disks - parity - spares` results in something less than 8, in which
case that lower number is used. In general, a smaller number of `data` devices
leads to higher IOPS, better compression ratios and faster resilvering, but
defining fewer data devices reduces the available storage capacity of the pool.
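As a rough sketch (pool and device names are placeholders, and the exact vdev
syntax should be double-checked in `zpoolconcepts(7)`), a dRAID2 with 4 data
devices per redundancy group and one distributed spare could be created like
this, given at least parity + data + spares disks:

# zpool create -f -o ashift=12 <pool> draid2:4d:1s <device1> <device2> ... <deviceN>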
Bootloader
~~~~~~~~~~

{pve} uses xref:sysboot_proxmox_boot_tool[`proxmox-boot-tool`] to manage the
bootloader configuration.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.
ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with `man zfs` and `man zpool`.
310 [[sysadmin_zfs_create_new_zpool]]
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` should be set
so that 2 to the power of `ashift` is at least as large as the sector size of
the underlying disk.
318 # zpool create -f -o ashift=12 <pool> <device>
323 Pool names must adhere to the following rules:
325 * begin with a letter (a-z or A-Z)
326 * contain only alphanumeric, `-`, `_`, `.`, `:` or ` ` (space) characters
327 * must *not begin* with one of `mirror`, `raidz`, `draid` or `spare`
331 To activate compression (see section <<zfs_compression,Compression in ZFS>>):
334 # zfs set compression=lz4 <pool>
337 [[sysadmin_zfs_create_new_zpool_raid0]]
338 Create a new pool with RAID-0
339 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
344 # zpool create -f -o ashift=12 <pool> <device1> <device2>
347 [[sysadmin_zfs_create_new_zpool_raid1]]
348 Create a new pool with RAID-1
349 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
354 # zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
357 [[sysadmin_zfs_create_new_zpool_raid10]]
358 Create a new pool with RAID-10
359 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
364 # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
367 [[sysadmin_zfs_create_new_zpool_raidz1]]
368 Create a new pool with RAIDZ-1
369 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
374 # zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
377 Create a new pool with RAIDZ-2
378 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
383 # zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
Please read the section on
xref:sysadmin_zfs_raid_considerations[ZFS RAID Level Considerations]
to get a rough estimate of the IOPS and bandwidth you can expect before setting
up a pool, especially if you want to use a RAID-Z mode.
391 [[sysadmin_zfs_create_new_zpool_with_cache]]
392 Create a new pool with cache (L2ARC)
393 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated device, or partition, as a second-level cache
to increase the performance. Such a cache device will especially help with
random-read workloads of data that is mostly static. As it acts as an additional
caching layer between the actual storage and the in-memory ARC, it can also
help if the ARC must be reduced due to memory constraints.
.Create ZFS pool with an on-disk cache
403 # zpool create -f -o ashift=12 <pool> <device> cache <cache-device>
Here only a single `<device>` and a single `<cache-device>` were used, but it
is possible to use more devices, as shown in
xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID].
Note that for cache devices no mirror or RAID modes exist; they are all simply
accumulated.
413 If any cache device produces errors on read, ZFS will transparently divert that
414 request to the underlying storage layer.
417 [[sysadmin_zfs_create_new_zpool_with_log]]
418 Create a new pool with log (ZIL)
419 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
It is possible to use a dedicated drive, or partition, for the ZFS Intent Log
(ZIL). The ZIL is mainly used to provide safe synchronous transactions, so it
often sits in performance-critical paths like databases, or other programs that
issue `fsync` operations frequently.
The pool itself is used as the default ZIL location. Diverting the ZIL IO load
to a separate device can help to reduce transaction latencies while relieving
the main pool at the same time, increasing overall performance.
For disks to be used as log devices, directly or through a partition, it's
recommended to:

- use fast SSDs with power-loss protection, as those have much smaller commit
  latencies
436 - Use at least a few GB for the partition (or whole device), but using more than
437 half of your installed memory won't provide you with any real advantage.
439 .Create ZFS pool with separate log device
441 # zpool create -f -o ashift=12 <pool> <device> log <log-device>
In the above example, a single `<device>` and a single `<log-device>` are used,
but you can also combine this with other RAID variants, as described in the
xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID] section.
You can also mirror the log device across multiple devices; this is mainly
useful to ensure that performance doesn't immediately degrade if a single log
device fails.
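A minimal sketch of such a mirrored log, using placeholder device paths:

# zpool add <pool> log mirror <log-device1> <log-device2>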
If all log devices fail, the ZFS main pool itself will be used again until the
log device(s) get replaced.
455 [[sysadmin_zfs_add_cache_and_log_dev]]
456 Add cache and log to an existing pool
457 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
If you have a pool without cache and log, you can still add both, or just one
of them, at any time.
For example, let's assume you have a good enterprise SSD with power-loss
protection that you want to use for improving the overall performance of your
pool.
As the maximum size of a log device should be about half the size of the
installed physical memory, the ZIL will most likely only take up a relatively
small part of the SSD; the remaining space can be used as cache.
470 First you have to create two GPT partitions on the SSD with `parted` or `gdisk`.
Then you're ready to add them to a pool:
474 .Add both, a separate log device and a second-level cache, to an existing pool
476 # zpool add -f <pool> log <device-part1> cache <device-part2>
Just replace `<pool>`, `<device-part1>` and `<device-part2>` with the pool name
and the two `/dev/disk/by-id/` paths to the partitions.
482 You can also add ZIL and cache separately.
484 .Add a log device to an existing ZFS pool
486 # zpool add <pool> log <log-device>
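Analogously, a second-level cache device alone could be added roughly as
follows (the device path is a placeholder):

# zpool add <pool> cache <cache-device>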
490 [[sysadmin_zfs_change_failed_dev]]
491 Changing a failed device
492 ^^^^^^^^^^^^^^^^^^^^^^^^
495 # zpool replace -f <pool> <old-device> <new-device>
498 .Changing a failed bootable device
Depending on how {pve} was installed, it is either using `systemd-boot` or GRUB
through `proxmox-boot-tool` footnote:[Systems installed with {pve} 6.4 or later,
EFI systems installed with {pve} 5.4 or later] or plain GRUB as bootloader (see
xref:sysboot[Host Bootloader]). You can check by running:
506 # proxmox-boot-tool status
509 The first steps of copying the partition table, reissuing GUIDs and replacing
510 the ZFS partition are the same. To make the system bootable from the new disk,
511 different steps are needed which depend on the bootloader in use.
514 # sgdisk <healthy bootable device> -R <new device>
515 # sgdisk -G <new device>
516 # zpool replace -f <pool> <old zfs partition> <new zfs partition>
519 NOTE: Use the `zpool status -v` command to monitor how far the resilvering
520 process of the new disk has progressed.
522 .With `proxmox-boot-tool`:
525 # proxmox-boot-tool format <new disk's ESP>
526 # proxmox-boot-tool init <new disk's ESP> [grub]
529 NOTE: `ESP` stands for EFI System Partition, which is setup as partition #2 on
530 bootable disks setup by the {pve} installer since version 5.4. For details, see
531 xref:sysboot_proxmox_boot_setup[Setting up a new partition for use as synced ESP].
533 NOTE: Make sure to pass 'grub' as mode to `proxmox-boot-tool init` if
534 `proxmox-boot-tool status` indicates your current disks are using GRUB,
535 especially if Secure Boot is enabled!
.With plain GRUB:

# grub-install <new disk>
542 NOTE: Plain GRUB is only used on systems installed with {pve} 6.3 or earlier,
543 which have not been manually migrated to using `proxmox-boot-tool` yet.
546 Configure E-Mail Notification
547 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
549 ZFS comes with an event daemon `ZED`, which monitors events generated by the ZFS
550 kernel module. The daemon can also send emails on ZFS events like pool errors.
551 Newer ZFS packages ship the daemon in a separate `zfs-zed` package, which should
552 already be installed by default in {pve}.
554 You can configure the daemon via the file `/etc/zfs/zed.d/zed.rc` with your
555 favorite editor. The required setting for email notification is
556 `ZED_EMAIL_ADDR`, which is set to `root` by default.
559 ZED_EMAIL_ADDR="root"
Please note that {pve} forwards mails addressed to `root` to the email address
configured for the root user.
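After changing the ZED configuration, the new address presumably only takes
effect once the daemon has been restarted, for example with:

# systemctl restart zfs-zed.service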
566 [[sysadmin_zfs_limit_memory_usage]]
567 Limit ZFS Memory Usage
568 ~~~~~~~~~~~~~~~~~~~~~~
570 ZFS uses '50 %' of the host memory for the **A**daptive **R**eplacement
571 **C**ache (ARC) by default. Allocating enough memory for the ARC is crucial for
572 IO performance, so reduce it with caution. As a general rule of thumb, allocate
573 at least +2 GiB Base + 1 GiB/TiB-Storage+. For example, if you have a pool with
+8 TiB+ of available storage space, then you should use +10 GiB+ of memory for
the ARC.
577 You can change the ARC usage limit for the current boot (a reboot resets this
578 change again) by writing to the +zfs_arc_max+ module parameter directly:
581 echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
584 To *permanently change* the ARC limits, add the following line to
585 `/etc/modprobe.d/zfs.conf`:
588 options zfs zfs_arc_max=8589934592
591 This example setting limits the usage to 8 GiB ('8 * 2^30^').
593 IMPORTANT: In case your desired +zfs_arc_max+ value is lower than or equal to
594 +zfs_arc_min+ (which defaults to 1/32 of the system memory), +zfs_arc_max+ will
595 be ignored unless you also set +zfs_arc_min+ to at most +zfs_arc_max - 1+.
598 echo "$[8 * 1024*1024*1024 - 1]" >/sys/module/zfs/parameters/zfs_arc_min
599 echo "$[8 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
602 This example setting (temporarily) limits the usage to 8 GiB ('8 * 2^30^') on
603 systems with more than 256 GiB of total memory, where simply setting
604 +zfs_arc_max+ alone would not work.
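To make such a combined limit persistent, the same values could be set through
the module options shown above, for example in `/etc/modprobe.d/zfs.conf`
(the values correspond to the 8 GiB example):

options zfs zfs_arc_min=8589934591
options zfs zfs_arc_max=8589934592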
608 If your root file system is ZFS, you must update your initramfs every
609 time this value changes:
612 # update-initramfs -u -k all
615 You *must reboot* to activate these changes.
SWAP on ZFS
~~~~~~~~~~~

Swap-space created on a zvol may cause some issues, like blocking the
server or generating a high IO load, often seen when starting a backup
to external storage.
We strongly recommend using enough memory, so that you normally do not
run into low-memory situations. Should you need or want to add swap, it is
629 preferred to create a partition on a physical disk and use it as a swap device.
630 You can leave some space free for this purpose in the advanced options of the
631 installer. Additionally, you can lower the
632 ``swappiness'' value. A good value for servers is 10:
635 # sysctl -w vm.swappiness=10
To make the swappiness setting persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the line `vm.swappiness = 10`.
645 .Linux kernel `swappiness` parameter values
646 [width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value | Strategy
| vm.swappiness = 0 | The kernel will swap only to avoid
650 an 'out of memory' condition
651 | vm.swappiness = 1 | Minimum amount of swapping without
652 disabling it entirely.
653 | vm.swappiness = 10 | This value is sometimes recommended to
654 improve performance when sufficient memory exists in a system.
655 | vm.swappiness = 60 | The default value.
656 | vm.swappiness = 100 | The kernel will swap aggressively.
657 |===========================================================
660 Encrypted ZFS Datasets
661 ~~~~~~~~~~~~~~~~~~~~~~
663 WARNING: Native ZFS encryption in {pve} is experimental. Known limitations and
664 issues include Replication with encrypted datasets
665 footnote:[https://bugzilla.proxmox.com/show_bug.cgi?id=2350],
666 as well as checksum errors when using Snapshots or ZVOLs.
667 footnote:[https://github.com/openzfs/zfs/issues/11688]
669 ZFS on Linux version 0.8.0 introduced support for native encryption of
670 datasets. After an upgrade from previous ZFS on Linux versions, the encryption
671 feature can be enabled per pool:
674 # zpool get feature@encryption tank
675 NAME PROPERTY VALUE SOURCE
676 tank feature@encryption disabled local
# zpool set feature@encryption=enabled tank
680 # zpool get feature@encryption tank
681 NAME PROPERTY VALUE SOURCE
682 tank feature@encryption enabled local
685 WARNING: There is currently no support for booting from pools with encrypted
686 datasets using GRUB, and only limited support for automatically unlocking
687 encrypted datasets on boot. Older versions of ZFS without encryption support
688 will not be able to decrypt stored data.
690 NOTE: It is recommended to either unlock storage datasets manually after
691 booting, or to write a custom unit to pass the key material needed for
692 unlocking on boot to `zfs load-key`.
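A rough, untested sketch of such a unit (assuming the keys are available via a
keyfile referenced by each dataset's `keylocation`, and using the hypothetical
unit name `zfs-load-key.service`) could look like this:

----
[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service
----

Such a unit would still have to be placed in `/etc/systemd/system/` and enabled
with `systemctl enable zfs-load-key.service`.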
694 WARNING: Establish and test a backup procedure before enabling encryption of
695 production data. If the associated key material/passphrase/keyfile has been
696 lost, accessing the encrypted data is no longer possible.
Encryption needs to be set up when creating datasets/zvols, and is inherited by
default to child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:
704 # zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
708 # pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.
714 To actually use the storage, the associated key material needs to be loaded
715 and the dataset needs to be mounted. This can be done in one step with:
718 # zfs mount -l tank/encrypted_data
719 Enter passphrase for 'tank/encrypted_data':
722 It is also possible to use a (random) keyfile instead of prompting for a
723 passphrase by setting the `keylocation` and `keyformat` properties, either at
724 creation time or with `zfs change-key` on existing datasets:
727 # dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
729 # zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
732 WARNING: When using a keyfile, special care needs to be taken to secure the
733 keyfile against unauthorized access or accidental loss. Without the keyfile, it
734 is not possible to access the plaintext data!
A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
741 See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
742 `keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
743 change-key` commands and the `Encryption` section from `man zfs` for more
744 details and advanced usage.
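For example, to quickly check the encryption-related properties of the dataset
created above:

# zfs get encryption,encryptionroot,keylocation,keyformat,keystatus tank/encrypted_data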
[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.
755 You can enable compression with:
758 # zfs set compression=<algorithm> <dataset>
761 We recommend using the `lz4` algorithm, because it adds very little CPU
762 overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
763 integer from `1` (fastest) to `9` (best compression ratio), are also
764 available. Depending on the algorithm and how compressible the data is,
765 having compression enabled can even increase I/O performance.
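To see how much space compression actually saves on a given dataset, the
`compressratio` property can be queried, for example:

# zfs get compressratio <dataset>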
767 You can disable compression at any time with:
770 # zfs set compression=off <dataset>
773 Again, only new blocks will be affected by this change.
[[sysadmin_zfs_special_device]]
ZFS Special Device
~~~~~~~~~~~~~~~~~~
Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.
784 A `special` device can improve the speed of a pool consisting of slow spinning
785 hard disks with a lot of metadata changes. For example workloads that involve
786 creating, updating or deleting a large number of files will benefit from the
787 presence of a `special` device. ZFS datasets can also be configured to store
788 whole small files on the `special` device which can further improve the
789 performance. Use fast SSDs for the `special` device.
791 IMPORTANT: The redundancy of the `special` device should match the one of the
792 pool, since the `special` device is a point of failure for the whole pool.
794 WARNING: Adding a `special` device to a pool cannot be undone!
796 .Create a pool with `special` device and RAID-1:
799 # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
802 .Add a `special` device to an existing pool with RAID-1:
805 # zpool add <pool> special mirror <device1> <device2>
ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range between `512B` and `1M`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.
813 IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
814 the `recordsize` (default `128K`) of the dataset, *all* data will be written to
815 the `special` device, so be careful!
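To double-check both values before opting a dataset in, something like the
following can be used:

# zfs get recordsize,special_small_blocks <pool>/<filesystem>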
817 Setting the `special_small_blocks` property on a pool will change the default
818 value of that property for all child ZFS datasets (for example all containers
819 in the pool will opt in for small file blocks).
.Opt in for all files smaller than 4K-blocks pool-wide:
824 # zfs set special_small_blocks=4K <pool>
827 .Opt in for small file blocks for a single dataset:
830 # zfs set special_small_blocks=4K <pool>/<filesystem>
833 .Opt out from small file blocks for a single dataset:
836 # zfs set special_small_blocks=0 <pool>/<filesystem>
[[sysadmin_zfs_features]]
ZFS Pool Features
~~~~~~~~~~~~~~~~~
843 Changes to the on-disk format in ZFS are only made between major version changes
844 and are specified through *features*. All features, as well as the general
845 mechanism are well documented in the `zpool-features(5)` manpage.
847 Since enabling new features can render a pool not importable by an older version
848 of ZFS, this needs to be done actively by the administrator, by running
849 `zpool upgrade` on the pool (see the `zpool-upgrade(8)` manpage).
Unless you need to use one of the new features, there is no upside to enabling
them.
854 In fact, there are some downsides to enabling new features:
* A system with root on ZFS that still boots using GRUB will become
unbootable if a new feature is active on the rpool, due to the incompatible
implementation of ZFS in GRUB.
859 * The system will not be able to import any upgraded pool when booted with an
860 older kernel, which still ships with the old ZFS modules.
* Booting an older {pve} ISO to repair a non-booting system will likewise not
work.
864 IMPORTANT: Do *not* upgrade your rpool if your system is still booted with
865 GRUB, as this will render your system unbootable. This includes systems
866 installed before {pve} 5.4, and systems booting with legacy BIOS boot (see
867 xref:sysboot_determine_bootloader_used[how to determine the bootloader]).
869 .Enable new features for a ZFS pool:
871 # zpool upgrade <pool>