X-Git-Url: https://git.proxmox.com/?a=blobdiff_plain;f=local-zfs.adoc;h=d38a4c9531150f6cd338b3ac7412c1cfd72dc716;hb=0593681f9ded237059d64a7914fee7d5f5fcdb2b;hp=e794286d68050621d8512359a486458cd0807118;hpb=60077fc155c73905009c57874bffca1ec1fff49f;p=pve-docs.git
diff --git a/local-zfs.adoc b/local-zfs.adoc
index e794286..d38a4c9 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -55,22 +55,22 @@ Hardware
 ~~~~~~~~
 
 ZFS depends heavily on memory, so you need at least 8GB to start. In
-practice, use as much you can get for your hardware/budget. To prevent
+practice, use as much as you can get for your hardware/budget. To prevent
 data corruption, we recommend the use of high quality ECC RAM.
 
 If you use a dedicated cache and/or log disk, you should use an
 enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
 increase the overall performance significantly.
 
-IMPORTANT: Do not use ZFS on top of hardware controller which has its
-own cache management. ZFS needs to directly communicate with disks. An
-HBA adapter is the way to go, or something like LSI controller flashed
-in ``IT'' mode.
+IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has its
+own cache management. ZFS needs to communicate directly with the disks. An
+HBA adapter or something like an LSI controller flashed in ``IT'' mode is more
+appropriate.
 
 If you are experimenting with an installation of {pve} inside a VM
 (Nested Virtualization), don't use `virtio` for disks of that VM,
-since they are not supported by ZFS. Use IDE or SCSI instead (works
-also with `virtio` SCSI controller type).
+as they are not supported by ZFS. Use IDE or SCSI instead (also works
+with the `virtio` SCSI controller type).
 
 
 Installation as Root File System
@@ -247,9 +247,9 @@ RAIDZ performance characteristics are acceptable.
 Bootloader
 ~~~~~~~~~~
 
-Depending on whether the system is booted in EFI or legacy BIOS mode the
-{pve} installer sets up either `grub` or `systemd-boot` as main bootloader.
-See the chapter on xref:sysboot[{pve} host bootladers] for details.
+{pve} uses xref:sysboot_proxmox_boot_tool[`proxmox-boot-tool`] to manage the
+bootloader configuration.
+See the chapter on xref:sysboot[{pve} host bootloaders] for details.
 
 
 ZFS Administration
@@ -387,8 +387,14 @@ Changing a failed device
 
 .Changing a failed bootable device
 
-Depending on how {pve} was installed it is either using `grub` or `systemd-boot`
-as bootloader (see xref:sysboot[Host Bootloader]).
+Depending on how {pve} was installed, it is either using `proxmox-boot-tool`
+footnote:[Systems installed with {pve} 6.4 or later, EFI systems installed with
+{pve} 5.4 or later] or plain `grub` as bootloader (see
+xref:sysboot[Host Bootloader]). You can check by running:
+
+----
+# proxmox-boot-tool status
+----
 
 The first steps of copying the partition table, reissuing GUIDs and replacing
 the ZFS partition are the same. To make the system bootable from the new disk,
@@ -403,16 +409,16 @@ different steps are needed which depend on the bootloader in use.
 NOTE: Use the `zpool status -v` command to monitor how far the resilvering
 process of the new disk has progressed.
 
-.With `systemd-boot`:
+.With `proxmox-boot-tool`:
 
 ----
-# pve-efiboot-tool format <new disk's ESP>
-# pve-efiboot-tool init <new disk's ESP>
+# proxmox-boot-tool format <new disk's ESP>
+# proxmox-boot-tool init <new disk's ESP>
 ----
 
 NOTE: `ESP` stands for EFI System Partition, which is setup as partition #2 on
 bootable disks setup by the {pve} installer since version 5.4. For details, see
-xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].
+xref:sysboot_proxmox_boot_setup[Setting up a new partition for use as synced ESP].
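+
+For example, if the new disk is `/dev/sdb` and its ESP is the second partition
+(the default layout created by the {pve} installer), the two commands above
+might look like this (the device name is only an assumed example):
+
+----
+# proxmox-boot-tool format /dev/sdb2
+# proxmox-boot-tool init /dev/sdb2
+----
+
+Replace `/dev/sdb2` with the ESP partition of your actual new disk.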
 
 .With `grub`:
 
@@ -433,7 +439,7 @@ and you can install it using `apt-get`:
 ----
 
 To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
-favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
+favorite editor, and uncomment the `ZED_EMAIL_ADDR` setting:
 
 --------
 ZED_EMAIL_ADDR="root"
 --------
@@ -450,25 +456,52 @@ other settings are optional.
 
 
 Limit ZFS Memory Usage
 ~~~~~~~~~~~~~~~~~~~~~~
 
-It is good to use at most 50 percent (which is the default) of the
-system memory for ZFS ARC to prevent performance shortage of the
-host. Use your preferred editor to change the configuration in
-`/etc/modprobe.d/zfs.conf` and insert:
+ZFS uses '50 %' of the host memory for the **A**daptive **R**eplacement
+**C**ache (ARC) by default. Allocating enough memory for the ARC is crucial for
+IO performance, so reduce it with caution. As a general rule of thumb, allocate
+at least +2 GiB Base + 1 GiB/TiB-Storage+. For example, if you have a pool with
++8 TiB+ of available storage space then you should use +10 GiB+ of memory for
+the ARC.
+
+You can change the ARC usage limit for the current boot (a reboot resets this
+change again) by writing to the +zfs_arc_max+ module parameter directly:
+
+----
+ echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
+----
+
+To *permanently change* the ARC limits, add the following line to
+`/etc/modprobe.d/zfs.conf`:
 
 --------
 options zfs zfs_arc_max=8589934592
 --------
 
-This example setting limits the usage to 8GB.
+This example setting limits the usage to 8 GiB ('8 * 2^30^').
+
+IMPORTANT: In case your desired +zfs_arc_max+ value is lower than or equal to
++zfs_arc_min+ (which defaults to 1/32 of the system memory), +zfs_arc_max+ will
+be ignored unless you also set +zfs_arc_min+ to at most +zfs_arc_max - 1+.
+
+----
+echo "$[8 * 1024*1024*1024 - 1]" >/sys/module/zfs/parameters/zfs_arc_min
+echo "$[8 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
+----
+
+This example setting (temporarily) limits the usage to 8 GiB ('8 * 2^30^') on
+systems with more than 256 GiB of total memory, where simply setting
++zfs_arc_max+ alone would not work.
 
 [IMPORTANT]
 ====
-If your root file system is ZFS you must update your initramfs every
+If your root file system is ZFS, you must update your initramfs every
 time this value changes:
 
 ----
 # update-initramfs -u
 ----
+
+You *must reboot* to activate these changes.
 ====
@@ -482,7 +515,7 @@ to an external Storage.
 
 We strongly recommend to use enough memory, so that you normally do not
 run into low memory situations. Should you need or want to add swap, it is
-preferred to create a partition on a physical disk and use it as swapdevice.
+preferred to create a partition on a physical disk and use it as a swap device.
 You can leave some space free for this purpose in the advanced options of the
 installer. Additionally, you can lower the ``swappiness'' value. A good value
 for servers is 10:
@@ -685,3 +718,38 @@ in the pool will opt in for small file blocks).
 ----
 # zfs set special_small_blocks=0 /
 ----
+
+[[sysadmin_zfs_features]]
+ZFS Pool Features
+~~~~~~~~~~~~~~~~~
+
+Changes to the on-disk format in ZFS are only made between major version changes
+and are specified through *features*. All features, as well as the general
+mechanism are well documented in the `zpool-features(5)` manpage.
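+
+For example, the current state of all feature flags on a pool can be inspected
+with `zpool get` (the pool name `rpool` below is only an example):
+
+----
+# zpool get all rpool | grep feature@
+----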
+
+Since enabling new features can render a pool not importable by an older version
+of ZFS, this needs to be done actively by the administrator, by running
+`zpool upgrade` on the pool (see the `zpool-upgrade(8)` manpage).
+
+Unless you need to use one of the new features, there is no upside to enabling
+them.
+
+In fact, there are some downsides to enabling new features:
+
+* A system with root on ZFS that still boots using `grub` will become
+  unbootable if a new feature is active on the rpool, due to the incompatible
+  implementation of ZFS in grub.
+* The system will not be able to import any upgraded pool when booted with an
+  older kernel, which still ships with the old ZFS modules.
+* Booting an older {pve} ISO to repair a non-booting system will likewise not
+  work.
+
+IMPORTANT: Do *not* upgrade your rpool if your system is still booted with
+`grub`, as this will render your system unbootable. This includes systems
+installed before {pve} 5.4, and systems booting with legacy BIOS boot (see
+xref:sysboot_determine_bootloader_used[how to determine the bootloader]).
+
+.Enable new features for a ZFS pool:
+----
+# zpool upgrade <pool>
+----
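+
+To list pools that do not yet have all supported features enabled (for example,
+to check before or after an upgrade), `zpool upgrade` can also be run without
+any argument; this only prints a report and does not change the pools:
+
+----
+# zpool upgrade
+----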