ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. There is no need to manually compile ZFS modules - all
packages are included.

By using ZFS, it's possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace costly hardware
RAID cards with moderate CPU and memory load, combined with easy
management.

General ZFS advantages
^^^^^^^^^^^^^^^^^^^^^^

* Easy configuration and management with GUI and CLI
* Protection against data corruption
* Data compression on file system level
* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3
* Can use SSD for cache
* Continuous integrity checking
* Designed for high storage capacities
* Asynchronous replication over network

Hardware
^^^^^^^^

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in `IT` mode.

ZFS Administration
^^^^^^^^^^^^^^^^^^

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

.. code-block:: console

  # man zpool
  # man zfs

Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The pool's sector size
(2 to the power of `ashift`) should be equal to or larger than the sector
size of the underlying disk.

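A quick way to check a disk's sector sizes is, for example, via `lsblk`
(the `PHY-SEC` and `LOG-SEC` output columns are provided by util-linux):

.. code-block:: console

  # lsblk -o NAME,PHY-SEC,LOG-SEC <device>

A disk reporting 4096 byte physical sectors corresponds to `ashift=12`,
since 2^12 = 4096.
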
.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device>

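After creating a pool, you can check its layout and health at any time with:

.. code-block:: console

  # zpool status <pool>
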
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device1> <device2>

Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

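To see how the cache device is utilized, you can inspect the pool's
per-device statistics:

.. code-block:: console

  # zpool iostat -v <pool>
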
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated log drive partition to increase
the performance (use SSD).

For `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> <device> log <log_device>

Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

.. important:: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

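As a sketch, assuming 16GB of physical memory and the SSD available as
`<ssd_device>` (a placeholder, not a fixed device name), the two
partitions could be created like this:

.. code-block:: console

  # sgdisk -n 1:0:+8G <ssd_device>   # partition 1: log, about half the RAM size
  # sgdisk -n 2:0:0 <ssd_device>     # partition 2: remaining space for cache

Then add both partitions to the pool:
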
.. code-block:: console

  # zpool add -f <pool> log <device-part1> cache <device-part2>

Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

.. code-block:: console

  # zpool replace -f <pool> <old device> <new device>

Changing a failed bootable device
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Depending on how Proxmox Backup was installed, it is either using `grub` or
`systemd-boot` as bootloader.

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

.. code-block:: console

  # sgdisk <healthy bootable device> -R <new device>
  # sgdisk -G <new device>
  # zpool replace -f <pool> <old zfs partition> <new zfs partition>

.. NOTE:: Use the `zpool status -v` command to monitor how far the resilvering
   process of the new disk has progressed.

With `systemd-boot`:

.. code-block:: console

  # pve-efiboot-tool format <new disk's ESP>
  # pve-efiboot-tool init <new disk's ESP>

.. NOTE:: `ESP` stands for EFI System Partition, which is set up as partition
   #2 on bootable disks set up by the Proxmox installer since version 5.4. For
   details, see
   xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].

With `grub`:

Usually `grub.cfg` is located in `/boot/grub/grub.cfg`.

.. code-block:: console

  # grub-install <new disk>
  # grub-mkconfig -o /path/to/grub.cfg

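On Debian-based systems, assuming the default `/boot/grub/grub.cfg` location
is used, the `update-grub` wrapper achieves the same as the `grub-mkconfig`
call above:

.. code-block:: console

  # update-grub
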
Activate E-Mail Notification
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

.. code-block:: console

  # apt-get install zfs-zed

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
your favorite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

.. code-block:: console

  ZED_EMAIL_ADDR="root"

Please note that Proxmox Backup forwards mails to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.

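For the changed setting to take effect, restart the event daemon afterwards
(unit name as shipped by the Debian `zfs-zed` package):

.. code-block:: console

  # systemctl restart zfs-zed.service
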
Limit ZFS Memory Usage
^^^^^^^^^^^^^^^^^^^^^^

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

.. code-block:: console

  options zfs zfs_arc_max=8589934592

This example setting limits the usage to 8GB.

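The value is given in bytes; 8GB here means 8GiB, which can be computed
directly in the shell, for example:

.. code-block:: console

  # echo $((8 * 1024 * 1024 * 1024))
  8589934592
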
.. IMPORTANT:: If your root file system is ZFS, you must update your initramfs
   every time this value changes:

.. code-block:: console

  # update-initramfs -u

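On a running system, the limit can also be changed on the fly via the module
parameter in sysfs; note that a value set this way does not persist across
reboots:

.. code-block:: console

  # echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
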
SWAP on ZFS
^^^^^^^^^^^

Swap-space created on a zvol may cause problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the `swappiness` value.
A good value for servers is 10:

.. code-block:: console

  # sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

.. code-block:: console

  vm.swappiness = 10

.. table:: Linux kernel `swappiness` parameter values

  ==================== ===============================================================
  Value                Strategy
  ==================== ===============================================================
  vm.swappiness = 0    The kernel will swap only to avoid an 'out of memory' condition
  vm.swappiness = 1    Minimum amount of swapping without disabling it entirely
  vm.swappiness = 10   Sometimes recommended to improve performance when sufficient memory exists in a system
  vm.swappiness = 60   The default value
  vm.swappiness = 100  The kernel will swap aggressively
  ==================== ===============================================================

ZFS Compression
^^^^^^^^^^^^^^^

To activate compression:

.. code-block:: console

  # zfs set compression=lz4 <pool>

We recommend using the `lz4` algorithm, since it adds very little CPU overhead.
Other algorithms such as `lzjb` and `gzip-N` (where `N` is an integer from `1`
to `9`, where 1 is fastest and 9 compresses best) are also available. Depending
on the algorithm and how compressible the data is, having compression enabled
can even increase I/O performance.

You can disable compression at any time with:

.. code-block:: console

  # zfs set compression=off <dataset>

Only new blocks will be affected by this change.

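To check how effective compression is on existing data, you can query the
read-only `compressratio` property:

.. code-block:: console

  # zfs get compressratio <pool>
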
.. _local_zfs_special_device:

ZFS Special Device
^^^^^^^^^^^^^^^^^^

Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

.. IMPORTANT:: The redundancy of the `special` device should match the one of the
   pool, since the `special` device is a point of failure for the whole pool.

.. WARNING:: Adding a `special` device to a pool cannot be undone!

Create a pool with `special` device and RAID-1:

.. code-block:: console

  # zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>

Adding a `special` device to an existing pool with RAID-1:

.. code-block:: console

  # zpool add <pool> special mirror <device1> <device2>

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range from `512B` to `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

.. IMPORTANT:: If the value for `special_small_blocks` is greater than or equal to
   the `recordsize` (default `128K`) of the dataset, *all* data will be written to
   the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

Opt in for all files smaller than 4K blocks pool-wide:

.. code-block:: console

  # zfs set special_small_blocks=4K <pool>

Opt in for small file blocks for a single dataset:

.. code-block:: console

  # zfs set special_small_blocks=4K <pool>/<filesystem>

Opt out from small file blocks for a single dataset:

.. code-block:: console

  # zfs set special_small_blocks=0 <pool>/<filesystem>

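You can verify the effective value, and where it is inherited from, with:

.. code-block:: console

  # zfs get special_small_blocks <pool>/<filesystem>
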
Troubleshooting
^^^^^^^^^^^^^^^

In case of a corrupted ZFS cachefile, some volumes may not be mounted during
boot until mounted manually later.

For each pool, run:

.. code-block:: console

  # zpool set cachefile=/etc/zfs/zpool.cache POOLNAME

and afterwards update the `initramfs` by running:

.. code-block:: console

  # update-initramfs -u -k all

and finally reboot your node.

Sometimes the ZFS cachefile can get corrupted, and `zfs-import-cache.service`
doesn't import the pools that aren't present in the cachefile.

Another workaround to this problem is enabling the `zfs-import-scan.service`,
which searches and imports pools via device scanning (usually slower).

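For example, assuming the standard unit name shipped with the ZFS packages:

.. code-block:: console

  # systemctl enable zfs-import-scan.service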