[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel port of
the ZFS file system is introduced as an optional file system and also as an
additional selection for the root file system. There is no need to manually
compile ZFS modules - all packages are included.

By using ZFS, it is possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace costly hardware RAID
cards with moderate CPU and memory load, combined with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter or something like an LSI controller flashed in ``IT'' mode is more
appropriate.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for disks of that VM,
as they are not supported by ZFS. Use IDE or SCSI instead (also works
with the `virtio` SCSI controller type).


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


[[sysadmin_zfs_raid_considerations]]
ZFS RAID Level Considerations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are a few factors to take into consideration when choosing the layout of
a ZFS pool. The basic building block of a ZFS pool is the virtual device, or
`vdev`. All vdevs in a pool are used equally and the data is striped among them
(RAID0). Check the `zpool(8)` manpage for more details on vdevs.

[[sysadmin_zfs_raid_performance]]
Performance
^^^^^^^^^^^

Each `vdev` type has different performance behaviors. The two
parameters of interest are the IOPS (Input/Output Operations per Second) and
the bandwidth with which data can be written or read.

A 'mirror' vdev (RAID1) will approximately behave like a single disk in regard
to both parameters when writing data. When reading data, it will behave like
the number of disks in the mirror.

A common situation is to have 4 disks. When setting it up as 2 mirror vdevs
(RAID10), the pool will have the write characteristics of two single disks in
regard to IOPS and bandwidth. For read operations, it will resemble 4 single
disks.

A 'RAIDZ' vdev of any redundancy level will approximately behave like a single
disk in regard to IOPS, with a lot of bandwidth. How much bandwidth depends on
the size of the RAIDZ vdev and the redundancy level.

For running VMs, IOPS is the more important metric in most situations.
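
If you want a rough measurement of the IOPS and bandwidth a given layout
actually delivers, one option is to benchmark a dedicated test zvol with
`fio`. The zvol path, block size and runtime below are only examples, and the
data on the target zvol will be overwritten:

----
# fio --name=randwrite --filename=/dev/zvol/<pool>/<test-zvol> --rw=randwrite \
  --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=30 --time_based
----
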
[[sysadmin_zfs_raid_size_space_usage_redundancy]]
Size, Space usage and Redundancy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

While a pool made of 'mirror' vdevs will have the best performance
characteristics, the usable space will be 50% of the disks available. Less if a
mirror vdev consists of more than 2 disks, for example in a 3-way mirror. At
least one healthy disk per mirror is needed for the pool to stay functional.

The usable space of a 'RAIDZ' type vdev of N disks is roughly N-P, with P being
the RAIDZ-level. The RAIDZ-level indicates how many arbitrary disks can fail
without losing data. A special case is a 4 disk pool with RAIDZ2. In this
situation it is usually better to use 2 mirror vdevs for the better performance
as the usable space will be the same.

Another important factor when using any RAIDZ level is how ZVOL datasets, which
are used for VM disks, behave. For each data block the pool needs parity data
which is at least the size of the minimum block size defined by the `ashift`
value of the pool. With an ashift of 12 the block size of the pool is 4k. The
default block size for a ZVOL is 8k. Therefore, in a RAIDZ2 each 8k block
written will cause two additional 4k parity blocks to be written,
8k + 4k + 4k = 16k. This is of course a simplified approach and the real
situation will be slightly different with metadata, compression and such not
being accounted for in this example.

This behavior can be observed when checking the following properties of the
ZVOL:

* `volsize`
* `refreservation` (if the pool is not thin provisioned)
* `used` (if the pool is thin provisioned and without snapshots present)

----
# zfs get volsize,refreservation,used <pool>/vm-<vmid>-disk-X
----

`volsize` is the size of the disk as it is presented to the VM, while
`refreservation` shows the reserved space on the pool which includes the
expected space needed for the parity data. If the pool is thin provisioned, the
`refreservation` will be set to 0. Another way to observe the behavior is to
compare the used disk space within the VM and the `used` property. Be aware
that snapshots will skew the value.

There are a few options to counter the increased use of space:

* Increase the `volblocksize` to improve the data to parity ratio
* Use 'mirror' vdevs instead of 'RAIDZ'
* Use `ashift=9` (block size of 512 bytes)

The `volblocksize` property can only be set when creating a ZVOL. The default
value can be changed in the storage configuration, as sketched below. When doing
this, the guest needs to be tuned accordingly and depending on the use case, the
problem of write amplification is just moved from the ZFS layer up to the guest.
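
A sketch of how the pool-wide default could be changed through the storage
configuration; the storage name `local-zfs` and the value `16k` are only
examples and need to be adapted to your setup:

----
# pvesm set local-zfs --blocksize 16k
----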

Using `ashift=9` when creating the pool can lead to bad
performance, depending on the disks underneath, and cannot be changed later on.

Mirror vdevs (RAID1, RAID10) have favorable behavior for VM workloads. Use
them, unless your environment has specific needs and characteristics where
RAIDZ performance characteristics are acceptable.


Bootloader
~~~~~~~~~~

{pve} uses xref:sysboot_proxmox_boot_tool[`proxmox-boot-tool`] to manage the
bootloader configuration.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

[[sysadmin_zfs_create_new_zpool]]
Create a new zpool
^^^^^^^^^^^^^^^^^^

To create a new pool, at least one disk is needed. The `ashift` value should
correspond to a sector size (2 to the power of `ashift`) that is the same as,
or larger than, that of the underlying disk.

----
# zpool create -f -o ashift=12 <pool> <device>
----
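
If you are unsure about the sector size of the underlying disk, you can, for
example, check the physical and logical sector sizes reported by the kernel
with `lsblk`:

----
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/<device>
----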

To activate compression (see section <<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----

[[sysadmin_zfs_create_new_zpool_raid0]]
Create a new pool with RAID-0
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 1 disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid1]]
Create a new pool with RAID-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 2 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

[[sysadmin_zfs_create_new_zpool_raid10]]
Create a new pool with RAID-10
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_raidz1]]
Create a new pool with RAIDZ-1
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 3 disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

Create a new pool with RAIDZ-2
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----

[[sysadmin_zfs_create_new_zpool_with_cache]]
Create a new pool with cache (L2ARC)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----
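
To check whether the cache device is actually being used, the per-device I/O
statistics of the pool can be inspected, for example with:

----
# zpool iostat -v <pool>
----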

[[sysadmin_zfs_create_new_zpool_with_log]]
Create a new pool with log (ZIL)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It is possible to use a dedicated drive partition as log device to increase
the performance (use an SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

[[sysadmin_zfs_add_cache_and_log_dev]]
Add cache and log to an existing pool
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
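
A possible way to create the two partitions with `sgdisk` is sketched below;
the device name, partition type code and log partition size are only examples
and need to be adapted to your setup:

----
# sgdisk -n1:0:+16G -t1:BF01 /dev/<ssd>
# sgdisk -n2:0:0 -t2:BF01 /dev/<ssd>
----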

----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

[[sysadmin_zfs_change_failed_dev]]
Changing a failed device
^^^^^^^^^^^^^^^^^^^^^^^^

----
# zpool replace -f <pool> <old device> <new device>
----

.Changing a failed bootable device

Depending on how {pve} was installed, it is either using `proxmox-boot-tool`
footnote:[Systems installed with {pve} 6.4 or later, EFI systems installed with
{pve} 5.4 or later] or plain `grub` as bootloader (see
xref:sysboot[Host Bootloader]). You can check by running:

----
# proxmox-boot-tool status
----

The first steps of copying the partition table, reissuing GUIDs and replacing
the ZFS partition are the same. To make the system bootable from the new disk,
different steps are needed which depend on the bootloader in use.

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
----

NOTE: Use the `zpool status -v` command to monitor how far the resilvering
process of the new disk has progressed.

.With `proxmox-boot-tool`:

----
# proxmox-boot-tool format <new disk's ESP>
# proxmox-boot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_proxmox_boot_setup[Setting up a new partition for use as synced ESP].

.With `grub`:

----
# grub-install <new disk>
----

Configure E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon `ZED`, which monitors events generated by the ZFS
kernel module. The daemon can also send emails on ZFS events like pool errors.
Newer ZFS packages ship the daemon in a separate `zfs-zed` package, which should
already be installed by default in {pve}.

You can configure the daemon via the file `/etc/zfs/zed.d/zed.rc` with your
favorite editor. The required setting for email notification is
`ZED_EMAIL_ADDR`, which is set to `root` by default.

--------
ZED_EMAIL_ADDR="root"
--------
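
For the change to take effect, the daemon may need to be restarted. On a
default installation this should work through its systemd unit, `zfs-zed`:

----
# systemctl restart zfs-zed
----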

Please note that {pve} forwards mails addressed to `root` to the email address
configured for the root user.


[[sysadmin_zfs_limit_memory_usage]]
Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

ZFS uses '50 %' of the host memory for the **A**daptive **R**eplacement
**C**ache (ARC) by default. Allocating enough memory for the ARC is crucial for
IO performance, so reduce it with caution. As a general rule of thumb, allocate
at least +2 GiB Base + 1 GiB/TiB-Storage+. For example, if you have a pool with
+8 TiB+ of available storage space then you should use +10 GiB+ of memory for
the ARC.
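
Applied to this example, the rule of thumb (2 GiB base plus 1 GiB per TiB of
storage) can be expressed in bytes like this (values for illustration only):

----
# echo "$[(2 + 8) * 1024*1024*1024]"
10737418240
----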

You can change the ARC usage limit for the current boot (a reboot resets this
change again) by writing to the +zfs_arc_max+ module parameter directly:

----
echo "$[10 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
----

To *permanently change* the ARC limits, add the following line to
`/etc/modprobe.d/zfs.conf`:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8 GiB ('8 * 2^30^').

IMPORTANT: In case your desired +zfs_arc_max+ value is lower than or equal to
+zfs_arc_min+ (which defaults to 1/32 of the system memory), +zfs_arc_max+ will
be ignored unless you also set +zfs_arc_min+ to at most +zfs_arc_max - 1+.

----
echo "$[8 * 1024*1024*1024 - 1]" >/sys/module/zfs/parameters/zfs_arc_min
echo "$[8 * 1024*1024*1024]" >/sys/module/zfs/parameters/zfs_arc_max
----

This example setting (temporarily) limits the usage to 8 GiB ('8 * 2^30^') on
systems with more than 256 GiB of total memory, where simply setting
+zfs_arc_max+ alone would not work.
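
In that case, a persistent variant for `/etc/modprobe.d/zfs.conf`, assuming the
same 8 GiB target as above, could look like this:

--------
options zfs zfs_arc_min=8589934591
options zfs zfs_arc_max=8589934592
--------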

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----

You *must reboot* to activate these changes.
====


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap space created on a zvol may cause problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited by
default to child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
and the dataset needs to be mounted. This can be done in one step with:

----
# zfs mount -l tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
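
To see which encryption root a volume belongs to and whether its key is
currently loaded, you can query the corresponding properties, for example:

----
# zfs get encryptionroot,keystatus tank/encrypted_data
----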

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands and the `Encryption` section from `man zfs` for more
details and advanced usage.


[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
integer from `1` (fastest) to `9` (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.
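
To check how well the data on a dataset compresses, you can read the
`compressratio` property, for example:

----
# zfs get compressratio <dataset>
----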

You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.


[[sysadmin_zfs_special_device]]
ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match the one of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!

.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range from `512B` to `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K blocks pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----

[[sysadmin_zfs_features]]
ZFS Pool Features
~~~~~~~~~~~~~~~~~

Changes to the on-disk format in ZFS are only made between major version changes
and are specified through *features*. All features, as well as the general
mechanism, are well documented in the `zpool-features(5)` manpage.

Since enabling new features can render a pool not importable by an older version
of ZFS, this needs to be done actively by the administrator, by running
`zpool upgrade` on the pool (see the `zpool-upgrade(8)` manpage).
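
To get an overview of which pools still have supported features that are not
enabled, `zpool upgrade` can be run without any arguments:

----
# zpool upgrade
----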

Unless you need to use one of the new features, there is no upside to enabling
them.

In fact, there are some downsides to enabling new features:

* A system with root on ZFS, that still boots using `grub` will become
  unbootable if a new feature is active on the rpool, due to the incompatible
  implementation of ZFS in grub.
* The system will not be able to import any upgraded pool when booted with an
  older kernel, which still ships with the old ZFS modules.
* Booting an older {pve} ISO to repair a non-booting system will likewise not
  work.

IMPORTANT: Do *not* upgrade your rpool if your system is still booted with
`grub`, as this will render your system unbootable. This includes systems
installed before {pve} 5.4, and systems booting with legacy BIOS boot (see
xref:sysboot_determine_bootloader_used[how to determine the bootloader]).

.Enable new features for a ZFS pool:
----
# zpool upgrade <pool>
----