errors: No known data errors
----
-The `zfs` command is used configure and manage your ZFS file
-systems. The following command lists all file systems after
-installation:
+The `zfs` command is used to configure and manage your ZFS file systems. The
+following command lists all file systems after installation:
----
# zfs list
There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
`vdev`. All vdevs in a pool are used equally and the data is striped among them
-(RAID0). Check the `zpool(8)` manpage for more details on vdevs.
+(RAID0). Check the `zpoolconcepts(7)` manpage for more details on vdevs.
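+
+As a brief illustration of the vdev concept, a pool could be created from two
+mirror vdevs; ZFS then stripes the data across both mirrors (a RAID10-like
+layout). The device names below are placeholders:
+
+----
+# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
+----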
[[sysadmin_zfs_raid_performance]]
Performance
in regard to IOPS with a lot of bandwidth. How much bandwidth depends on the
size of the RAIDZ vdev and the redundancy level.
+A 'dRAID' pool should match the performance of an equivalent 'RAIDZ' pool.
+
For running VMs, IOPS is the more important metric in most situations.
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----
+Please read the
+xref:sysadmin_zfs_raid_considerations[ZFS RAID Level Considerations] section to
+get a rough estimate of the IOPS and bandwidth to expect before setting up a
+pool, especially when wanting to use a RAID-Z mode.
+
[[sysadmin_zfs_create_new_zpool_with_cache]]
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-It is possible to use a dedicated cache drive partition to increase
-the performance (use SSD).
-
-As `<device>` it is possible to use more devices, like it's shown in
-"Create a new pool with RAID*".
+It is possible to use a dedicated device, or partition, as a second-level cache
+to increase performance. Such a cache device will especially help with
+random-read workloads of data that is mostly static. As it acts as an additional
+caching layer between the actual storage and the in-memory ARC, it can also
+help if the ARC must be reduced due to memory constraints.
+.Create ZFS pool with an on-disk cache
----
-# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
+# zpool create -f -o ashift=12 <pool> <device> cache <cache-device>
----
+Here only a single `<device>` and a single `<cache-device>` were used, but it is
+possible to use more devices, as shown in
+xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID].
+
+Note that no mirror or RAID modes exist for cache devices; they are all simply
+accumulated.
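+
+For example, a pool with two cache devices, whose capacities are simply added
+up, could be created like this (device names are placeholders):
+
+----
+# zpool create -f -o ashift=12 <pool> <device> cache <cache-device-1> <cache-device-2>
+----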
+
+If any cache device produces errors on read, ZFS will transparently divert that
+request to the underlying storage layer.
+
+
[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-It is possible to use a dedicated cache drive partition to increase
-the performance(SSD).
+It is possible to use a dedicated drive, or partition, for the ZFS Intent Log
+(ZIL). The ZIL is mainly used to provide safe synchronous transactions, which is
+often relevant in performance-critical paths like databases, or other programs
+that issue `fsync` operations frequently.
+
+The pool is used as the default ZIL location. Diverting the ZIL IO load to a
+separate device can help to reduce transaction latencies while relieving the
+main pool at the same time, increasing overall performance.
+
+For disks to be used as log devices, directly or through a partition, it's
+recommended to:
-As `<device>` it is possible to use more devices, like it's shown in
-"Create a new pool with RAID*".
+- Use fast SSDs with power-loss protection, as those have much smaller commit
+ latencies.
+- Use at least a few GB for the partition (or whole device), but using more than
+ half of your installed memory won't provide you with any real advantage.
+
+.Create ZFS pool with a separate log device
----
-# zpool create -f -o ashift=12 <pool> <device> log <log_device>
+# zpool create -f -o ashift=12 <pool> <device> log <log-device>
----
+In the above example, a single `<device>` and a single `<log-device>` are used,
+but you can also combine this with other RAID variants, as described in the
+xref:sysadmin_zfs_create_new_zpool_raid0[Create a new pool with RAID] section.
+
+You can also mirror the log device across multiple devices. This is mainly
+useful to ensure that performance doesn't immediately degrade if a single log
+device fails.
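+
+A minimal sketch of such a mirrored log setup at pool-creation time, using
+placeholder device names, could look like this:
+
+.Create ZFS pool with a mirrored log device
+----
+# zpool create -f -o ashift=12 <pool> <device> log mirror <log-device-1> <log-device-2>
+----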
+
+If all log devices fail, the ZFS main pool itself will be used again until the
+log device(s) are replaced.
+
[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-If you have a pool without cache and log, first create 2 partitions on the SSD
-with `parted` or `gdisk`.
+If you have a pool without cache and log, you can still add both, or just one of
+them, at any time.
+
+For example, let's assume you have a good enterprise SSD with power-loss
+protection that you want to use for improving the overall performance of your
+pool.
+
+As the maximum size of a log device should be about half the size of the
+installed physical memory, the ZIL will most likely only take up a relatively
+small part of the SSD; the remaining space can be used as cache.
-IMPORTANT: Always use GPT partition tables.
+First you have to create two GPT partitions on the SSD with `parted` or `gdisk`.
-The maximum size of a log device should be about half the size of
-physical memory, so this is usually quite small. The rest of the SSD
-can be used as cache.
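+
+One possible way to create the two partitions with `parted`, assuming the SSD is
+available as `/dev/disk/by-id/<ssd>` and the host has 32 GiB of memory (so
+roughly 16 GiB for the log partition), could look like this:
+
+----
+# parted --script /dev/disk/by-id/<ssd> mklabel gpt
+# parted --script /dev/disk/by-id/<ssd> mkpart zil 1MiB 16GiB
+# parted --script /dev/disk/by-id/<ssd> mkpart cache 16GiB 100%
+----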
+Then you're ready to add them to the pool:
+.Add both, a separate log device and a second-level cache, to an existing pool
----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----
+Just replace `<pool>`, `<device-part1>` and `<device-part2>` with the pool name
+and the two `/dev/disk/by-id/` paths to the partitions.
+
+You can also add ZIL and cache separately.
+
+.Add a log device to an existing ZFS pool
+----
+# zpool add <pool> log <log-device>
+----
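+
+Similarly, a second-level cache device alone could be added with a command along
+these lines (the device name is a placeholder):
+
+.Add a cache device to an existing ZFS pool
+----
+# zpool add <pool> cache <cache-device>
+----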
+
+
[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^
----
-# zpool replace -f <pool> <old device> <new device>
+# zpool replace -f <pool> <old-device> <new-device>
----
.Changing a failed bootable device
-Depending on how {pve} was installed it is either using `systemd-boot` or `grub`
-through `proxmox-boot-tool`
-footnote:[Systems installed with {pve} 6.4 or later, EFI systems installed with
-{pve} 5.4 or later] or plain `grub` as bootloader (see
+Depending on how {pve} was installed, it is either using `systemd-boot` or GRUB
+through `proxmox-boot-tool` footnote:[Systems installed with {pve} 6.4 or later,
+EFI systems installed with {pve} 5.4 or later] or plain GRUB as bootloader (see
xref:sysboot[Host Bootloader]). You can check by running:
----
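# proxmox-boot-tool status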
----
# proxmox-boot-tool format <new disk's ESP>
-# proxmox-boot-tool init <new disk's ESP>
+# proxmox-boot-tool init <new disk's ESP> [grub]
----
NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_proxmox_boot_setup[Setting up a new partition for use as synced ESP].
-.With plain `grub`:
+NOTE: Make sure to pass 'grub' as the mode to `proxmox-boot-tool init` if
+`proxmox-boot-tool status` indicates your current disks are using GRUB,
+especially if Secure Boot is enabled!
+
+.With plain GRUB:
----
# grub-install <new disk>
----
-NOTE: plain `grub` is only used on systems installed with {pve} 6.3 or earlier,
+NOTE: Plain GRUB is only used on systems installed with {pve} 6.3 or earlier,
which have not been manually migrated to using `proxmox-boot-tool` yet.
~~~~~~~~~~~~~~~~~~~~~~
ZFS uses '50 %' of the host memory for the **A**daptive **R**eplacement
-**C**ache (ARC) by default. Allocating enough memory for the ARC is crucial for
-IO performance, so reduce it with caution. As a general rule of thumb, allocate
-at least +2 GiB Base + 1 GiB/TiB-Storage+. For example, if you have a pool with
-+8 TiB+ of available storage space then you should use +10 GiB+ of memory for
-the ARC.
+**C**ache (ARC) by default. For new installations starting with {pve} 8.1, the
+ARC usage limit will be set to '10 %' of the installed physical memory, clamped
+to a maximum of +16 GiB+. This value is written to `/etc/modprobe.d/zfs.conf`.
+
+Allocating enough memory for the ARC is crucial for IO performance, so reduce it
+with caution. As a general rule of thumb, allocate at least +2 GiB Base + 1
+GiB/TiB-Storage+. For example, if you have a pool with +8 TiB+ of available
+storage space then you should use +10 GiB+ of memory for the ARC.
+
+ZFS also enforces a minimum value of +64 MiB+.
You can change the ARC usage limit for the current boot (a reboot resets this
change again) by writing to the +zfs_arc_max+ module parameter directly:
echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
----
-To *permanently change* the ARC limits, add the following line to
-`/etc/modprobe.d/zfs.conf`:
+To *permanently change* the ARC limits, add (or change if already present) the
+following line to `/etc/modprobe.d/zfs.conf`:
----
options zfs zfs_arc_max=8589934592
----
WARNING: There is currently no support for booting from pools with encrypted
-datasets using Grub, and only limited support for automatically unlocking
+datasets using GRUB, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.
In fact, there are some downsides to enabling new features:
-* A system with root on ZFS, that still boots using `grub` will become
+* A system with root on ZFS that still boots using GRUB will become
unbootable if a new feature is active on the rpool, due to the incompatible
- implementation of ZFS in grub.
+ implementation of ZFS in GRUB.
* The system will not be able to import any upgraded pool when booted with an
older kernel, which still ships with the old ZFS modules.
* Booting an older {pve} ISO to repair a non-booting system will likewise not
work.
IMPORTANT: Do *not* upgrade your rpool if your system is still booted with
-`grub`, as this will render your system unbootable. This includes systems
+GRUB, as this will render your system unbootable. This includes systems
installed before {pve} 5.4, and systems booting with legacy BIOS boot (see
xref:sysboot_determine_bootloader_used[how to determine the bootloader]).