zpool create -f -o ashift=12 <pool> <device>
-To activate compression
+To activate compression (see section <<zfs_compression,Compression in ZFS>>):
zfs set compression=lz4 <pool>
details and advanced usage.
+[[zfs_compression]]
+Compression in ZFS
+~~~~~~~~~~~~~~~~~~
+
+When compression is enabled on a dataset, ZFS tries to compress all *new*
+blocks before writing them and decompresses them on reading. Already
+existing data will not be compressed retroactively.
+
+You can enable compression with:
+
+----
+# zfs set compression=<algorithm> <dataset>
+----
+
+We recommend using the `lz4` algorithm, because it adds very little CPU
+overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
+integer from `1` (fastest) to `9` (best compression ratio), are also
+available. Depending on the algorithm and how compressible the data is,
+having compression enabled can even increase I/O performance.
+
+You can disable compression at any time with:
+
+----
+# zfs set compression=off <dataset>
+----
+
+Again, only new blocks will be affected by this change.
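+You can check how well the data of a dataset compresses with the read-only
+`compressratio` property, for example:
+
+----
+# zfs get compressratio <dataset>
+----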
+
+
ZFS Special Device
~~~~~~~~~~~~~~~~~~
file blocks.
A `special` device can improve the speed of a pool consisting of slow spinning
-hard disks with a lot of changing metadata. For example workloads that involve
-creating or deleting a large number of files will benefit from the presence of
-a `special` device. ZFS datasets can be configured to store whole small files
-on the `special` device which can further improve the performance. Use SSDs for
-the `special` device.
+hard disks with a lot of metadata changes. For example, workloads that involve
+creating, updating or deleting a large number of files will benefit from the
+presence of a `special` device. ZFS datasets can also be configured to store
+whole small files on the `special` device, which can further improve
+performance. Use fast SSDs for the `special` device.
IMPORTANT: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.
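+As a sketch, a pool with matching RAID-1 redundancy on both the data and the
+`special` device could be created like this (`<device1>` through `<device4>`
+are placeholders):
+
+----
+# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
+----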
blocks smaller than `size` will be allocated on the `special` device.
IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
-the `recordsize` of the dataset, *all* data will be written to the `special`
-device, so be careful!
+the `recordsize` (default `128K`) of the dataset, *all* data will be written to
+the `special` device, so be careful!
Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example all containers
in the pool will opt in for small file blocks).
-.Opt in for small file blocks pool-wide:
+.Opt in for all files smaller than 4K pool-wide:
zfs set special_small_blocks=4K <pool>
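+
+Individual datasets can override the pool-wide default again; for example, a
+child dataset can opt out of small file blocks by setting the property back
+to `0` (`<pool>/<filesystem>` is a placeholder):
+
+----
+# zfs set special_small_blocks=0 <pool>/<filesystem>
+----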