[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel port of the
ZFS file system is introduced as an optional file system and also as an
additional selection for the root file system. There is no need to compile ZFS
modules manually - all packages are included.

By using ZFS, it is possible to achieve enterprise-grade features with
low-budget hardware, as well as high-performance systems by leveraging SSD
caching or even SSD-only setups. ZFS can replace expensive hardware RAID cards
at the cost of moderate CPU and memory load, combined with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.
* Reliable
* Protection against data corruption
* Data compression on file system level
* Snapshots
* Copy-on-write clone
* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Self healing
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network
* Open Source
* Encryption
* ...

Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In practice,
use as much as you can get for your hardware/budget. To prevent data
corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an enterprise
class SSD (e.g. Intel SSD DC S3700 Series). This can increase the overall
performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has its
own cache management. ZFS needs to communicate directly with the disks. An HBA
adapter, or something like an LSI controller flashed in ``IT'' mode, is the
way to go.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
as they are not supported by ZFS. Use IDE or SCSI instead (this also works
with the `virtio` SCSI controller type).

Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum of
the capacities of all disks. But RAID0 does not add any redundancy, so the
failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all disks.
This mode requires at least 2 disks with the same size. The resulting capacity
is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool called
`rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM images.
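
If you are curious how the installer laid out the disks, one way to take a
quick, read-only look is to list the block devices and their partitions after
the first boot. The device names shown (`sda`, `nvme0n1`, ...) depend entirely
on your hardware; this is just an illustrative check:

----
# lsblk -o NAME,SIZE,TYPE,FSTYPE
----
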
In order to use the `rpool/data` subvolume with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file systems. The
following command lists all file systems after installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----

[[sysadmin_zfs_raid_considerations]]
ZFS RAID Level Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
`vdev`. All vdevs in a pool are used equally and the data is striped among them
(RAID0). Check the `zpool(8)` manpage for more details on vdevs.

[[sysadmin_zfs_raid_performance]]
Performance
^^^^^^^^^^^

Each `vdev` type has different performance behaviors. The two parameters of
interest are the IOPS (Input/Output Operations per Second) and the bandwidth
with which data can be written or read.

A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
to both parameters when writing data. When reading data, it will behave like
the number of disks in the mirror.

A common situation is to have 4 disks. When setting them up as 2 mirror vdevs
(RAID10), the pool will have the write characteristics of two single disks in
regard to IOPS and bandwidth. For read operations it will resemble 4 single
disks.

A 'RAIDZ' of any redundancy level will approximately behave like a single disk
in regard to IOPS, with a lot of bandwidth. How much bandwidth depends on the
size of the RAIDZ vdev and the redundancy level.

For running VMs, IOPS is the more important metric in most situations.

[[sysadmin_zfs_raid_size_space_usage_redundancy]]
Size, Space usage and Redundancy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While a pool made of 'mirror' vdevs will have the best performance
characteristics, the usable space will be 50% of the disks available. Less if a
mirror vdev consists of more than 2 disks, for example in a 3-way mirror. At
least one healthy disk per mirror is needed for the pool to stay functional.

The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
without losing data. A special case is a 4 disk pool with RAIDZ2. In this
situation it is usually better to use 2 mirror vdevs for the better
performance, as the usable space will be the same.

Another important factor when using any RAIDZ level is how ZVOL datasets, which
are used for VM disks, behave. For each data block the pool needs parity data
which is at least the size of the minimum block size defined by the `ashift`
value of the pool. With an ashift of 12 the block size of the pool is 4k. The
default block size for a ZVOL is 8k. Therefore, in a RAIDZ2 each 8k block
written will cause two additional 4k parity blocks to be written,
8k + 4k + 4k = 16k.
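
To check which `ashift` and `volblocksize` values apply to your own setup, you
can query them directly. The pool name `rpool` and the ZVOL name
`rpool/data/vm-100-disk-0` below are only examples; substitute your own pool
and disk names:

----
# zpool get ashift rpool
# zfs get volblocksize rpool/data/vm-100-disk-0
----
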
This parity calculation is of course simplified; the real situation will be
slightly different, with metadata, compression and the like not being
accounted for in this example.

This behavior can be observed when checking the following properties of the
ZVOL:

* `volsize`
* `refreservation` (if the pool is not thin provisioned)
* `used` (if the pool is thin provisioned and without snapshots present)

----
# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
----

`volsize` is the size of the disk as it is presented to the VM, while
`refreservation` shows the reserved space on the pool which includes the
expected space needed for the parity data. If the pool is thin provisioned, the
`refreservation` will be set to 0. Another way to observe the behavior is to
compare the used disk space within the VM and the `used` property. Be aware
that snapshots will skew the value.

There are a few options to counter the increased use of space:

* Increase the `volblocksize` to improve the data to parity ratio
* Use 'mirror' vdevs instead of 'RAIDZ'
* Use `ashift=9` (block size of 512 bytes)

The `volblocksize` property can only be set when creating a ZVOL. The default
value can be changed in the storage configuration. When doing this, the guest
needs to be tuned accordingly and, depending on the use case, the problem of
write amplification is just moved from the ZFS layer up to the guest.

Using `ashift=9` when creating the pool can lead to bad performance, depending
on the disks underneath, and cannot be changed later on.

Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where
RAIDZ performance characteristics are acceptable.


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode, the
{pve} installer sets up either `grub` or `systemd-boot` as the main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS itself is
really powerful and provides many options. The main commands to manage ZFS are
`zfs` and `zpool`. Both commands come with great manual pages, which can be
read with:

----
# man zpool
# man zfs
----

[[sysadmin_zfs_create_new_zpool]]
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` should match
the sector size of the underlying disk (2 to the power of `ashift` bytes), or
be larger.
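
One way to find out which sector size a disk reports is to check its physical
and logical block sizes with `lsblk` before creating the pool; a physical
sector size of 4096 bytes corresponds to `ashift=12`, one of 512 bytes to
`ashift=9`:

----
# lsblk -o NAME,PHY-SEC,LOG-SEC
----

The pool can then be created with a matching `ashift`:
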
----
# zpool create -f -o ashift=12 <pool> <device>
----

To activate compression (see section
<<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----

[[sysadmin_zfs_create_new_zpool_raid0]]
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid1]]
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid10]]
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_raidz1]]
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_with_cache]]
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

As `<device>` it is possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----

[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive partition as log device to increase
the performance (use an SSD).

As `<device>` it is possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
2 partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

----
# zpool replace -f <pool> <old device> <new device>
----

.Changing a failed bootable device

Depending on how {pve} was installed, it is either using `grub` or
`systemd-boot` as bootloader (see xref:sysboot[Host Bootloader]).

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
----

NOTE: Use the `zpool status -v` command to monitor how far the resilvering
process of the new disk has progressed.

.With `systemd-boot`:

----
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details,
see xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].

.With `grub`:

----
# grub-install <new disk>
----

Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors.
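
To check whether the daemon is already installed and running, you can, for
example, query its systemd unit. The unit is typically called `zfs-zed`, but
this may depend on your ZFS packaging:

----
# systemctl status zfs-zed
----
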
Newer ZFS packages ship the daemon in a separate package, and you can install
it using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
your favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails addressed to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.

[[sysadmin_zfs_limit_memory_usage]]
Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the system
memory for the ZFS ARC, to prevent performance degradation of the host. Use
your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.

[IMPORTANT]
====
If your root file system is ZFS you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----
====

[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap-space created on a zvol may generate some troubles, like blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend to use enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the ``swappiness'' value.
A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,3d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================

[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing and decompresses them on reading. Already existing data
will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an integer
from `1` (fastest) to `9` (best compression ratio), are also available.
Depending on the algorithm and how compressible the data is, having compression
enabled can even increase I/O performance.

You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.

[[sysadmin_zfs_special_device]]
ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!
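
Since this feature requires ZFS 0.8.0 or newer, you may want to verify the
version of the loaded ZFS kernel module before planning a `special` device,
for example with:

----
# modinfo zfs | grep -iw version
----
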
.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range between `512B` and `128K`. After setting the property, new
file blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

.Opt in for all file blocks smaller than 4K pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----
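
To verify the current setting, and to see how much data ends up on the
`special` vdev, you can query the property and the per-vdev space usage. The
pool and dataset names below are placeholders, as in the examples above:

----
# zfs get special_small_blocks <pool>/<filesystem>
# zpool list -v <pool>
----
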