ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux
kernel port of the ZFS file system is introduced as optional
-file-system and also as an additional selection for the root
-file-system. There is no need for manually compile ZFS modules - all
+file system and also as an additional selection for the root
+file system. There is no need to manually compile ZFS modules - all
packages are included.
-By using ZFS, its possible to achieve maximal enterprise features with
+By using ZFS, it's possible to achieve maximum enterprise features with
low-budget hardware, but also high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace expensive
hardware RAID cards with moderate CPU and memory load combined with easy
* Protection against data corruption
-* Data compression on file-system level
+* Data compression on file system level
* Snapshots
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.
-IMPORTANT: Do not use ZFS on top of hardware controller which has it's
+IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to directly communicate with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT'' mode.
also with `virtio` SCSI controller type).
-Installation as root file system
+Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When you install using the {pve} installer, you can choose ZFS for the
# man zfs
-----
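After the installer has finished, a quick sanity check of the pool it created
can look like this (a sketch; `rpool` is the pool name the {pve} installer
typically uses, adjust it if yours differs):

----
zpool status rpool
zfs list
----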
-.Create a new ZPool
+.Create a new zpool
To create a new pool, at least one disk is needed. The `ashift` value
should be chosen so that 2 to the power of `ashift` matches or exceeds
the sector size of the underlying disk.
zpool create -f -o ashift=12 <pool> <device>
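Before picking an `ashift` value it can help to look up the physical sector
size the disk reports; a minimal sketch (assuming the disk is `/dev/sdb`, the
device name is only an example):

----
# 512 suggests ashift=9, 4096 suggests ashift=12
cat /sys/block/sdb/queue/physical_block_size
----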
-To activate the compression
+To activate compression
zfs set compression=lz4 <pool>
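Whether compression is active, and how well the stored data compresses, can be
checked later via the `compression` and `compressratio` properties; a small
sketch:

----
zfs get compression,compressratio <pool>
----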
zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
-.Create a new pool with Cache (L2ARC)
+.Create a new pool with cache (L2ARC)
It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).
zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
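Whether the cache device is attached and actually being used can be checked
per vdev; a quick sketch:

----
zpool iostat -v <pool>
----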
-.Create a new pool with Log (ZIL)
+.Create a new pool with log (ZIL)
It is possible to use a dedicated drive partition as log device to
increase the performance (use SSD).
zpool create -f -o ashift=12 <pool> <device> log <log_device>
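Because losing an unmirrored log device together with a crash can cost the
most recent synchronous writes, a mirrored log is a common variation; a
sketch (device names are placeholders):

----
zpool create -f -o ashift=12 <pool> <device> log mirror <log_device1> <log_device2>
----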
-.Add Cache and Log to an existing pool
+.Add cache and log to an existing pool
If you have a pool without cache and log, first partition the SSD into
2 partitions with `parted` or `gdisk`
The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
-can be used to the cache.
+can be used as cache.
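A sketch of that partitioning step (assuming roughly 16 GiB of RAM and that
the SSD shows up as `/dev/sdf`; size and device name are only examples), done
before the `zpool add` command below:

----
# partition 1 (~8 GiB) for the log, partition 2 with the rest for the cache
sgdisk --new=1:0:+8G --new=2:0:0 /dev/sdf
----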
zpool add -f <pool> log <device-part1> cache <device-part2>
-.Changing a failed Device
+.Changing a failed device
zpool replace -f <pool> <old device> <new-device>
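After the replace command, ZFS resilvers the data onto the new device; the
progress can be followed with a plain status query, for example:

----
zpool status <pool>
----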
Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
ZFS comes with an event daemon, which monitors events generated by the
-ZFS kernel module. The daemon can also send E-Mails on ZFS event like
+ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors.
To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor and set the email address that should receive the
notifications; all other settings are optional.
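For reference, the relevant entry is the notification address; a minimal
example (variable name as commonly found in `zed.rc`, older ZFS releases may
name it slightly differently, and the value is just a placeholder):

----
ZED_EMAIL_ADDR="root"
----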
-Limit ZFS memory usage
+Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~
-It is good to use maximal 50 percent (which is the default) of the
+It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:
- options zfs zfs_arc_max=8589934592
+--------
+options zfs zfs_arc_max=8589934592
+--------
This example setting limits the usage to 8 GiB.
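The value is given in bytes (8 * 1024^3 = 8589934592). Whether the limit is in
effect can be checked at runtime against the ARC statistics, for example:

----
grep -w -e c_max -e size /proc/spl/kstat/zfs/arcstats
----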
[IMPORTANT]
====
-If your root fs is ZFS you must update your initramfs every
-time this value changes.
+If your root file system is ZFS, you must update your initramfs every
+time this value changes:
update-initramfs -u
====