X-Git-Url: https://git.proxmox.com/?p=pve-docs.git;a=blobdiff_plain;f=local-zfs.adoc;h=13f6050f6d8021dc2bc08d6a156c14c1ee69d6c6;hp=a20903f9af8a5fb951cd2abd12e77823dbb6764b;hb=856993e4166495537f42e0b9c3a51c966227feab;hpb=8c1189b640ae7d10119ff1c046580f48749d38bd
diff --git a/local-zfs.adoc b/local-zfs.adoc
index a20903f..13f6050 100644
--- a/local-zfs.adoc
+++ b/local-zfs.adoc
@@ -1,15 +1,18 @@
+[[chapter_zfs]]
ZFS on Linux
------------
-include::attributes.txt[]
+ifdef::wiki[]
+:pve-toplevel:
+endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel
port of the ZFS file system is introduced as an optional
-file-system and also as an additional selection for the root
-file-system. There is no need for manually compile ZFS modules - all
+file system and also as an additional selection for the root
+file system. There is no need to manually compile ZFS modules - all
packages are included.

-By using ZFS, its possible to achieve maximal enterprise features with
+By using ZFS, it's possible to achieve maximum enterprise features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace cost-intensive
hardware RAID cards with moderate CPU and memory load combined with easy
@@ -23,7 +26,7 @@ management.

* Protection against data corruption

-* Data compression on file-system level
+* Data compression on file system level

* Snapshots

@@ -57,11 +60,11 @@ ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high-quality ECC RAM.

-If you use a dedicated cache and/or log disk, you should use a
+If you use a dedicated cache and/or log disk, you should use an
enterprise-class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

-IMPORTANT: Do not use ZFS on top of hardware controller which has it's
+IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT'' mode.
@@ -72,7 +75,7 @@ since they are not supported by ZFS. Use IDE or SCSI instead (works
also with the `virtio` SCSI controller type).

-Installation as root file system
+Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
@@ -158,7 +161,7 @@ GRUB boot loader there. If you use a redundant RAID setup, it installs
the boot loader on all disks required for booting. So you can boot
even if some disks fail.

-NOTE: It is not possible to use ZFS as root partition with UEFI
+NOTE: It is not possible to use ZFS as root file system with UEFI
boot.


@@ -175,7 +178,7 @@ manual pages, which can be read with:

# man zfs
-----

-.Create a new ZPool
+.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` should
have the same sector size (2 to the power of `ashift`) or larger than that of the
@@ -183,7 +186,7 @@ underlying disk.

zpool create -f -o ashift=12 <pool> <device>

-To activate the compression
+To activate compression

zfs set compression=lz4 <pool>

@@ -217,7 +220,7 @@ Minimum 4 Disks

zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

-.Create a new pool with Cache (L2ARC)
+.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).
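
As a quick aside between the hunks (not part of the patch above; a sketch assuming standard util-linux and ZFS on Linux tooling, with `/dev/sdX` and `<pool>` as placeholders), you can check a disk's physical sector size before choosing `ashift`, and verify the value after the pool is created:

----
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX   # PHY-SEC 4096 -> 2^12 bytes, so ashift=12
# zpool create -f -o ashift=12 <pool> /dev/sdX
# zpool get ashift <pool>                  # should report 12 for the pool
----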
@@ -227,7 +230,7 @@ As `<device>` it is possible to use more devices, like it's shown in

zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

-.Create a new pool with Log (ZIL)
+.Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition to increase
the performance (use an SSD).

@@ -237,20 +240,20 @@ As `<device>` it is possible to use more devices, like it's shown in

zpool create -f -o ashift=12 <pool> <device> log <log_device>

-.Add Cache and Log to an existing pool
+.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

-IMPORTANT: Always use GPT partition tables (gdisk or parted).
+IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
-can be used to the cache.
+can be used as cache.

zpool add -f <pool> log <device-part1> cache <device-part2>

-.Changing a failed Device
+.Changing a failed device

zpool replace -f <pool> <old device> <new device>

@@ -259,13 +262,20 @@ Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
-ZFS kernel module. The daemon can also send E-Mails on ZFS event like
-pool errors.
+ZFS kernel module. The daemon can also send emails on ZFS events like
+pool errors. Newer ZFS packages ship the daemon in a separate package,
+and you can install it using `apt-get`:
+
+----
+# apt-get install zfs-zed
+----

To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

+--------
ZED_EMAIL_ADDR="root"
+--------

Please note that {pve} forwards mails addressed to `root` to the email address
configured for the root user.
@@ -274,35 +284,41 @@ IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.

-Limit ZFS memory usage
+Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

-It is good to use maximal 50 percent (which is the default) of the
+It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

- options zfs zfs_arc_max=8589934592
+--------
+options zfs zfs_arc_max=8589934592
+--------

This example setting limits the usage to 8GB.

[IMPORTANT]
====
-If your root fs is ZFS you must update your initramfs every
-time this value changes.
+If your root file system is ZFS you must update your initramfs every
+time this value changes:

update-initramfs -u
====

+[[zfs_swap]]
.SWAP on ZFS

-SWAP on ZFS on Linux may generate some troubles, like blocking the
+Swap space created on a zvol may cause trouble, such as blocking the
server or generating a high I/O load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
-run into low memory situations. Additionally, you can lower the
+run into low memory situations. Should you need or want to add swap, it is
+preferred to create a partition on a physical disk and use it as a swap
+device. You can leave some space free for this purpose in the advanced
+options of the installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

sysctl -w vm.swappiness=10
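
As a verification sketch (again not part of the patch; the paths assume the ZFS on Linux module layout found on {pve} systems), you can confirm that both tunables discussed above took effect:

----
# sysctl vm.swappiness                         # should report 10 after the change
# cat /sys/module/zfs/parameters/zfs_arc_max   # 8589934592 = 8 * 1024^3 bytes (8GB)
# grep -w c_max /proc/spl/kstat/zfs/arcstats   # the ARC's own view of its limit
----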
@@ -310,7 +326,9 @@ run into low memory situations. Additionally, you can lower the

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

- vm.swappiness = 10
+--------
+vm.swappiness = 10
+--------

.Linux kernel `swappiness` parameter values
[width="100%",cols="