diff --git a/local-zfs.adoc b/local-zfs.adoc
index fae3c35..89ab8bd 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -1,3 +1,4 @@
+[[chapter_zfs]]
 ZFS on Linux
 ------------
 ifdef::wiki[]
@@ -150,18 +151,107 @@ rpool/swap 4.25G 7.69T 64K -
 ----
 
+[[sysadmin_zfs_raid_considerations]]
+ZFS RAID Level Considerations
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are a few factors to take into consideration when choosing the layout of
+a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
+`vdev`. All vdevs in a pool are used equally and the data is striped among them
+(RAID0). Check the `zpool(8)` manpage for more details on vdevs.
+
+[[sysadmin_zfs_raid_performance]]
+Performance
+^^^^^^^^^^^
+
+Each `vdev` type has different performance behaviors. The two
+parameters of interest are the IOPS (Input/Output Operations per Second) and
+the bandwidth with which data can be written or read.
+
+A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
+to both parameters when writing data. When reading data, it will behave like
+the number of disks in the mirror.
+
+A common situation is to have 4 disks. When setting them up as 2 mirror vdevs
+(RAID10), the pool will have the write characteristics of two single disks in
+regard to IOPS and bandwidth. For read operations, it will resemble 4 single
+disks.
+
+A 'RAIDZ' of any redundancy level will approximately behave like a single disk
+in regard to IOPS, with a lot of bandwidth. How much bandwidth depends on the
+size of the RAIDZ vdev and the redundancy level.
+
+For running VMs, IOPS is the more important metric in most situations.
+
+
+[[sysadmin_zfs_raid_size_space_usage_redundancy]]
+Size, Space usage and Redundancy
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+While a pool made of 'mirror' vdevs will have the best performance
+characteristics, the usable space will be 50% of the available disk capacity.
+It is less if a mirror vdev consists of more than 2 disks, for example in a
+3-way mirror. At least one healthy disk per mirror is needed for the pool to
+stay functional.
+
+The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
+the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
+without losing data. A special case is a 4 disk pool with RAIDZ2. In this
+situation it is usually better to use 2 mirror vdevs for the better
+performance, as the usable space will be the same.
+
+Another important factor when using any RAIDZ level is how ZVOL datasets, which
+are used for VM disks, behave. For each data block, the pool needs parity data
+which is at least the size of the minimum block size defined by the `ashift`
+value of the pool. With an ashift of 12, the block size of the pool is 4k. The
+default block size for a ZVOL is 8k. Therefore, in a RAIDZ2, each 8k block
+written will cause two additional 4k parity blocks to be written,
+8k + 4k + 4k = 16k. This is of course a simplified approach, and the real
+situation will be slightly different, with metadata, compression and such not
+being accounted for in this example.
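+
+As a quick sketch, the two values driving this calculation can be queried
+directly; the pool and ZVOL names below are placeholders:
+
+----
+# zpool get ashift <pool>
+# zfs get volblocksize <pool>/vm-<vmid>-disk-X
+----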
+
+This behavior can be observed when checking the following properties of the
+ZVOL:
+
+* `volsize`
+* `refreservation` (if the pool is not thin provisioned)
+* `used` (if the pool is thin provisioned and without snapshots present)
+
+----
+# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
+----
+
+`volsize` is the size of the disk as it is presented to the VM, while
+`refreservation` shows the reserved space on the pool which includes the
+expected space needed for the parity data. If the pool is thin provisioned, the
+`refreservation` will be set to 0. Another way to observe the behavior is to
+compare the used disk space within the VM and the `used` property. Be aware
+that snapshots will skew the value.
+
+There are a few options to counter the increased use of space:
+
+* Increase the `volblocksize` to improve the data-to-parity ratio
+* Use 'mirror' vdevs instead of 'RAIDZ'
+* Use `ashift=9` (block size of 512 bytes)
+
+The `volblocksize` property can only be set when creating a ZVOL. The default
+value can be changed in the storage configuration. When doing this, the guest
+needs to be tuned accordingly and, depending on the use case, the problem of
+write amplification is just moved from the ZFS layer up to the guest.
+
+Using `ashift=9` when creating the pool can lead to bad
+performance, depending on the disks underneath, and cannot be changed later on.
+
+Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
+them, unless your environment has specific needs and characteristics for which
+RAIDZ performance is acceptable.
+
+
 Bootloader
 ~~~~~~~~~~
 
-The default ZFS disk partitioning scheme does not use the first 2048
-sectors. This gives enough room to install a GRUB boot partition. The
-{pve} installer automatically allocates that space, and installs the
-GRUB boot loader there. If you use a redundant RAID setup, it installs
-the boot loader on all disk required for booting. So you can boot
-even if some disks fail.
-
-NOTE: It is not possible to use ZFS as root file system with UEFI
-boot.
+Depending on whether the system is booted in EFI or legacy BIOS mode, the
+{pve} installer sets up either `grub` or `systemd-boot` as main bootloader.
+See the chapter on xref:sysboot[{pve} host bootloaders] for details.
 
 
 ZFS Administration
@@ -177,49 +267,76 @@ manual pages, which can be read with:
 # man zfs
 ----
 
-.Create a new zpool
+[[sysadmin_zfs_create_new_zpool]]
+Create a new zpool
+^^^^^^^^^^^^^^^^^^
 
 To create a new pool, at least one disk is needed. The `ashift` value should
 match the sector size of the underlying disk (2 to the power of `ashift` is
 the sector size), or be larger.
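+
+To check the physical and logical sector sizes a disk reports before choosing
+`ashift`, something like the following can be used (a sketch; the device name
+is a placeholder):
+
+----
+# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
+----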
-	zpool create -f -o ashift=12 <device>
+----
+# zpool create -f -o ashift=12 <pool> <device>
+----
 
-To activate compression
+To activate compression (see section <<zfs_compression,Compression in ZFS>>):
 
-	zfs set compression=lz4
+----
+# zfs set compression=lz4 <pool>
+----
 
-.Create a new pool with RAID-0
+[[sysadmin_zfs_create_new_zpool_raid0]]
+Create a new pool with RAID-0
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Minimum 1 Disk
+Minimum 1 disk
 
-	zpool create -f -o ashift=12 <device1> <device2>
+----
+# zpool create -f -o ashift=12 <pool> <device1> <device2>
+----
 
-.Create a new pool with RAID-1
+[[sysadmin_zfs_create_new_zpool_raid1]]
+Create a new pool with RAID-1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Minimum 2 Disks
+Minimum 2 disks
 
-	zpool create -f -o ashift=12 mirror <device1> <device2>
+----
+# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
+----
 
-.Create a new pool with RAID-10
+[[sysadmin_zfs_create_new_zpool_raid10]]
+Create a new pool with RAID-10
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Minimum 4 Disks
+Minimum 4 disks
 
-	zpool create -f -o ashift=12 mirror <device1> <device2> mirror <device3> <device4>
+----
+# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
+----
 
-.Create a new pool with RAIDZ-1
+[[sysadmin_zfs_create_new_zpool_raidz1]]
+Create a new pool with RAIDZ-1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Minimum 3 Disks
+Minimum 3 disks
 
-	zpool create -f -o ashift=12 raidz1 <device1> <device2> <device3>
+----
+# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
+----
 
-.Create a new pool with RAIDZ-2
+[[sysadmin_zfs_create_new_zpool_raidz2]]
+Create a new pool with RAIDZ-2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-Minimum 4 Disks
+Minimum 4 disks
 
-	zpool create -f -o ashift=12 raidz2 <device1> <device2> <device3> <device4>
+----
+# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
+----
 
-.Create a new pool with cache (L2ARC)
+[[sysadmin_zfs_create_new_zpool_with_cache]]
+Create a new pool with cache (L2ARC)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 It is possible to use a dedicated cache drive partition to increase
 the performance (use SSD).
@@ -227,9 +344,13 @@ the performance (use SSD).
 As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
-	zpool create -f -o ashift=12 <device> cache <cache_device>
+----
+# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
+----
 
-.Create a new pool with log (ZIL)
+[[sysadmin_zfs_create_new_zpool_with_log]]
+Create a new pool with log (ZIL)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 It is possible to use a dedicated drive partition to increase
 the performance (use SSD).
@@ -237,11 +358,15 @@ the performance (use SSD).
 As `<device>` it is possible to use more devices, as shown in
 "Create a new pool with RAID*".
 
-	zpool create -f -o ashift=12 <device> log <log_device>
+----
+# zpool create -f -o ashift=12 <pool> <device> log <log_device>
+----
 
-.Add cache and log to an existing pool
+[[sysadmin_zfs_add_cache_and_log_dev]]
+Add cache and log to an existing pool
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
-If you have an pool without cache and log. First partition the SSD in
+If you have a pool without cache and log, first partition the SSD into
 two partitions with `parted` or `gdisk`.
 
 IMPORTANT: Always use GPT partition tables.
@@ -250,19 +375,59 @@ The maximum size of a log device should be about half the size of
 physical memory, so this is usually quite small. The rest of the SSD
 can be used as cache.
 
-	zpool add -f <pool> log <device-part1> cache <device-part2>
+----
+# zpool add -f <pool> log <device-part1> cache <device-part2>
+----
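+
+As a hypothetical sketch of the partitioning step, assuming the SSD is
+`/dev/sdX` and the host has 8 GiB of physical memory (so a log partition of
+about 4 GiB, with the rest used as cache):
+
+----
+# sgdisk -n 1:0:+4G /dev/sdX
+# sgdisk -n 2:0:0 /dev/sdX
+----
+
+The two resulting partitions can then be used as `<device-part1>` and
+`<device-part2>` in the `zpool add` command above.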
+
+[[sysadmin_zfs_change_failed_dev]]
+Changing a failed device
+^^^^^^^^^^^^^^^^^^^^^^^^
 
-.Changing a failed device
+----
+# zpool replace -f <pool> <old device> <new device>
+----
 
-	zpool replace -f <pool> <old device> <new device>
+.Changing a failed bootable device
+Depending on how {pve} was installed, it is either using `grub` or `systemd-boot`
+as bootloader (see xref:sysboot[Host Bootloader]).
+
+The first steps of copying the partition table, reissuing GUIDs and replacing
+the ZFS partition are the same. To make the system bootable from the new disk,
+different steps are needed which depend on the bootloader in use.
+
+----
+# sgdisk <healthy bootable device> -R <new device>
+# sgdisk -G <new device>
+# zpool replace -f <pool> <old zfs partition> <new zfs partition>
+----
+
+NOTE: Use the `zpool status -v` command to monitor how far the resilvering
+process of the new disk has progressed.
+
+.With `systemd-boot`:
+
+----
+# pve-efiboot-tool format <new disk's ESP>
+# pve-efiboot-tool init <new disk's ESP>
+----
+
+NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
+bootable disks set up by the {pve} installer since version 5.4. For details, see
+xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].
+
+.With `grub`:
+
+----
+# grub-install <new disk>
+----
 
 
 Activate E-Mail Notification
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 ZFS comes with an event daemon, which monitors events generated by the
 ZFS kernel module. The daemon can also send emails on ZFS events like
-pool errors. Newer ZFS packages ships the daemon in a separate package,
+pool errors. Newer ZFS packages ship the daemon in a separate package,
 and you can install it using `apt-get`:
 
 ----
@@ -283,6 +448,7 @@ IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
 other settings are optional.
 
+[[sysadmin_zfs_limit_memory_usage]]
 Limit ZFS Memory Usage
 ~~~~~~~~~~~~~~~~~~~~~~
 
@@ -302,21 +468,30 @@ This example setting limits the usage to 8GB.
 If your root file system is ZFS, you must update your
 initramfs every time this value changes:
 
-	update-initramfs -u
+----
+# update-initramfs -u
+----
 ====
 
-.SWAP on ZFS
+[[zfs_swap]]
+SWAP on ZFS
+~~~~~~~~~~~
 
-SWAP on ZFS on Linux may generate some troubles, like blocking the
+Swap space created on a zvol can cause problems, such as blocking the
 server or generating a high IO load, often seen when starting a backup
 to an external storage.
 
 We strongly recommend using enough memory, so that you normally do not
-run into low memory situations. Additionally, you can lower the
+run into low memory situations. Should you need or want to add swap, it is
+preferred to create a partition on a physical disk and use it as a swap device.
+You can leave some space free for this purpose in the advanced options of the
+installer. Additionally, you can lower the
 ``swappiness'' value. A good value for servers is 10:
 
-	sysctl -w vm.swappiness=10
+----
+# sysctl -w vm.swappiness=10
+----
 
 To make the swappiness persistent, open `/etc/sysctl.conf` with
 an editor of your choice and add the following line:
 
@@ -338,3 +513,177 @@ improve performance when sufficient memory exists in a system.
 | vm.swappiness = 60 | The default value.
 | vm.swappiness = 100 | The kernel will swap aggressively.
 |===========================================================
+
+[[zfs_encryption]]
+Encrypted ZFS Datasets
+~~~~~~~~~~~~~~~~~~~~~~
+
+ZFS on Linux version 0.8.0 introduced support for native encryption of
+datasets. After an upgrade from previous ZFS on Linux versions, the encryption
+feature can be enabled per pool:
+
+----
+# zpool get feature@encryption tank
+NAME  PROPERTY            VALUE     SOURCE
+tank  feature@encryption  disabled  local
+
+# zpool set feature@encryption=enabled tank
+
+# zpool get feature@encryption tank
+NAME  PROPERTY            VALUE     SOURCE
+tank  feature@encryption  enabled   local
+----
+
+WARNING: There is currently no support for booting from pools with encrypted
+datasets using Grub, and only limited support for automatically unlocking
+encrypted datasets on boot. Older versions of ZFS without encryption support
+will not be able to decrypt stored data.
+
+NOTE: It is recommended to either unlock storage datasets manually after
+booting, or to write a custom unit to pass the key material needed for
+unlocking on boot to `zfs load-key`.
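+
+A minimal sketch of such a unit, assuming keys are stored as files (with
+`keylocation=file://...`, see below) so that `zfs load-key -a` can run
+non-interactively; the unit name and ordering are illustrative only:
+
+----
+# /etc/systemd/system/zfs-load-keys.service (hypothetical example)
+[Unit]
+Description=Load ZFS encryption keys
+DefaultDependencies=no
+After=zfs-import.target
+Before=zfs-mount.service
+
+[Service]
+Type=oneshot
+RemainAfterExit=yes
+ExecStart=/sbin/zfs load-key -a
+
+[Install]
+WantedBy=zfs-mount.service
+----
+
+After enabling it with `systemctl enable`, the keys of all file-based
+encryption roots are loaded before the datasets are mounted.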
+
+WARNING: Establish and test a backup procedure before enabling encryption of
+production data. If the associated key material/passphrase/keyfile has been
+lost, accessing the encrypted data is no longer possible.
+
+Encryption needs to be set up when creating datasets/zvols, and is inherited by
+default to child datasets. For example, to create an encrypted dataset
+`tank/encrypted_data` and configure it as storage in {pve}, run the following
+commands:
+
+----
+# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
+Enter passphrase:
+Re-enter passphrase:
+
+# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
+----
+
+All guest volumes/disks created on this storage will be encrypted with the
+shared key material of the parent dataset.
+
+To actually use the storage, the associated key material needs to be loaded
+and the dataset needs to be mounted. This can be done in one step with:
+
+----
+# zfs mount -l tank/encrypted_data
+Enter passphrase for 'tank/encrypted_data':
+----
+
+It is also possible to use a (random) keyfile instead of prompting for a
+passphrase by setting the `keylocation` and `keyformat` properties, either at
+creation time or with `zfs change-key` on existing datasets:
+
+----
+# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1
+
+# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
+----
+
+WARNING: When using a keyfile, special care needs to be taken to secure the
+keyfile against unauthorized access or accidental loss. Without the keyfile, it
+is not possible to access the plaintext data!
+
+A guest volume created underneath an encrypted dataset will have its
+`encryptionroot` property set accordingly. The key material only needs to be
+loaded once per encryptionroot to be available to all encrypted datasets
+underneath it.
+
+See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
+`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
+change-key` commands, and the `Encryption` section of `man zfs` for more
+details and advanced usage.
+
+
+[[zfs_compression]]
+Compression in ZFS
+~~~~~~~~~~~~~~~~~~
+
+When compression is enabled on a dataset, ZFS tries to compress all *new*
+blocks before writing them and decompresses them on reading. Already
+existing data will not be compressed retroactively.
+
+You can enable compression with:
+
+----
+# zfs set compression=<algorithm> <dataset>
+----
+
+We recommend using the `lz4` algorithm, because it adds very little CPU
+overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
+integer from `1` (fastest) to `9` (best compression ratio), are also
+available. Depending on the algorithm and how compressible the data is,
+having compression enabled can even increase I/O performance.
+
+You can disable compression at any time with:
+
+----
+# zfs set compression=off <dataset>
+----
+
+Again, only new blocks will be affected by this change.
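+
+To judge how effective compression is on an existing dataset, you can query the
+read-only `compressratio` property (the dataset name is a placeholder):
+
+----
+# zfs get compressratio <dataset>
+----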
+
+
+[[sysadmin_zfs_special_device]]
+ZFS Special Device
+~~~~~~~~~~~~~~~~~~
+
+Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
+pool is used to store metadata, deduplication tables, and optionally small
+file blocks.
+
+A `special` device can improve the speed of a pool consisting of slow spinning
+hard disks with a lot of metadata changes. For example, workloads that involve
+creating, updating or deleting a large number of files will benefit from the
+presence of a `special` device. ZFS datasets can also be configured to store
+whole small files on the `special` device, which can further improve the
+performance. Use fast SSDs for the `special` device.
+
+IMPORTANT: The redundancy of the `special` device should match that of the
+pool, since the `special` device is a point of failure for the whole pool.
+
+WARNING: Adding a `special` device to a pool cannot be undone!
+
+.Create a pool with `special` device and RAID-1:
+
+----
+# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
+----
+
+.Add a `special` device to an existing pool with RAID-1:
+
+----
+# zpool add <pool> special mirror <device1> <device2>
+----
+
+ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
+`0` to disable storing small file blocks on the `special` device, or a power of
+two in the range between `512B` and `128K`. After setting the property, new
+file blocks smaller than `size` will be allocated on the `special` device.
+
+IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
+the `recordsize` (default `128K`) of the dataset, *all* data will be written to
+the `special` device, so be careful!
+
+Setting the `special_small_blocks` property on a pool will change the default
+value of that property for all child ZFS datasets (for example, all containers
+in the pool will opt in for small file blocks).
+
+.Opt in for all files smaller than 4K blocks pool-wide:
+
+----
+# zfs set special_small_blocks=4K <pool>
+----
+
+.Opt in for small file blocks for a single dataset:
+
+----
+# zfs set special_small_blocks=4K <pool>/<filesystem>
+----
+
+.Opt out from small file blocks for a single dataset:
+
+----
+# zfs set special_small_blocks=0 <pool>/<filesystem>
+----
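+
+To review the value in effect for each dataset afterwards, a recursive query
+can be used (the pool name is a placeholder):
+
+----
+# zfs get -r special_small_blocks <pool>
+----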