[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux
kernel port of the ZFS file system is introduced as an optional
file system and also as an additional selection for the root
file system. There is no need to manually compile ZFS modules - all
packages are included.

By using ZFS, it is possible to achieve maximum enterprise features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace costly hardware
RAID cards with moderate CPU and memory load, combined with easy
management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).
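
If the outer host is itself a {pve} node, such a test VM could, for example,
get its disk attached via SCSI on a VirtIO SCSI controller. The VM ID `999`,
the storage `local-lvm` and the 32GB disk size below are placeholders for this
sketch; adapt them to your setup:

----
# qm set 999 --scsihw virtio-scsi-pci --scsi0 local-lvm:32
----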


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
        pool rpool/data
        sparse
        content images,rootdir
----

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode, the
{pve} installer sets up either `grub` or `systemd-boot` as the main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value should
match the sector size of the underlying disk (2 to the power of `ashift`
bytes), or be larger.
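
If you are unsure which value to pick, the logical and physical sector sizes
of the disks can be checked first, for example with `lsblk` (column names as
used by util-linux):

----
# lsblk -o NAME,PHY-SEC,LOG-SEC
----

A disk reporting 4096 byte physical sectors corresponds to `ashift=12`
(2^12^ = 4096).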

----
# zpool create -f -o ashift=12 <pool> <device>
----

To activate compression (see section <<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----

.Create a new pool with RAID-0

Minimum 1 disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

.Create a new pool with RAID-1

Minimum 2 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

.Create a new pool with RAID-10

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

.Create a new pool with RAIDZ-1

Minimum 3 disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

.Create a new pool with RAIDZ-2

Minimum 4 disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

As `<device>`, it is also possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----

.Create a new pool with log (ZIL)

It is possible to use a dedicated log drive partition to increase
the performance (use an SSD).

As `<device>`, it is also possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.
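
As a sketch, assuming the SSD is `/dev/sdX` (a placeholder device name) and
8G is enough for the log, both partitions could be created in one go with
`sgdisk`, the scriptable counterpart of `gdisk`:

----
# sgdisk -n1:0:+8G -n2:0:0 /dev/sdX
----

The first partition can then be added as log and the second one as cache, as
shown below.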

----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

.Changing a failed device

----
# zpool replace -f <pool> <old device> <new device>
----
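
The progress of the following resilver can be monitored afterwards with
`zpool status`:

----
# zpool status -v <pool>
----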

.Changing a failed bootable device when using systemd-boot

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].


Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails sent to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
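
For a changed `zed.rc` to take effect, the event daemon has to be restarted,
for example via its systemd service as shipped by the `zfs-zed` package:

----
# systemctl restart zfs-zed.service
----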


Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent a performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
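
The value is given in bytes. As a quick sketch, the number for a given limit
can be computed with plain shell arithmetic, here again for 8GB:

----
# echo $((8 * 1024*1024*1024))
8589934592
----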

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----
====
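
Whether a limit is currently active can be checked through the module
parameter; a value of `0` means the default of half the system memory is used:

----
# cat /sys/module/zfs/parameters/zfs_arc_max
----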


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap space created on a zvol may cause some problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.
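
A minimal sketch of such a unit is shown below. It assumes the dataset
`tank/encrypted_data` from the examples further down, with its `keylocation`
pointing to a keyfile so that `zfs load-key` does not prompt interactively;
the unit name and ordering are examples, not a service shipped with {pve}:

----
# /etc/systemd/system/zfs-load-key-tank.service
[Unit]
Description=Load ZFS encryption key for tank/encrypted_data
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/zfs load-key tank/encrypted_data

[Install]
WantedBy=zfs-mount.service
----

Such a unit still needs to be enabled with `systemctl enable` to run on boot.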

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited by
default to child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
with `zfs load-key`:

----
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
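
To verify which datasets share an encryption root and whether their keys are
currently loaded, the corresponding read-only properties can be queried
(dataset name taken from the examples above):

----
# zfs get -r encryptionroot,keystatus tank/encrypted_data
----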

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands and the `Encryption` section from `man zfs` for more
details and advanced usage.


[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
integer from `1` (fastest) to `9` (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.

You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.
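
To check how effective compression is for a dataset, the read-only
`compressratio` property can be inspected together with the configured
algorithm:

----
# zfs get compression,compressratio <dataset>
----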


ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0 ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match that of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!

.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range between `512B` and `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K blocks pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----