[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux
kernel port of the ZFS file system is available as an optional
file system and also as an additional selection for the root
file system. There is no need to manually compile ZFS modules - all
packages are included.

By using ZFS, it is possible to achieve enterprise-class features with
low budget hardware, but also high performance systems by leveraging
SSD caching or even SSD only setups. ZFS can replace costly
hardware RAID cards with moderate CPU and memory load combined with easy
management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter is the way to go, or something like an LSI controller flashed
into ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since it is not supported by ZFS. Use IDE or SCSI instead (this also works
with the `virtio` SCSI controller type).


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----

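The same kind of storage could also be added manually at a later point with
`pvesm`; the following is a sketch of a roughly equivalent command (the storage
name and options here are just examples):

----
# pvesm add zfspool local-zfs -pool rpool/data -sparse 1 -content images,rootdir
----
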
After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode, the
{pve} installer sets up either `grub` or `systemd-boot` as the main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value should
be chosen so that the resulting sector size (2 to the power of `ashift`) is
the same as or larger than the sector size of the underlying disk.

----
# zpool create -f -o ashift=12 <pool> <device>
----
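
If you are unsure about the sector size of a disk, you can query it first, for
example with `lsblk` or `blockdev` (shown here with a hypothetical `/dev/sdX`):

----
# lsblk -o NAME,PHY-SEC,LOG-SEC
# blockdev --getpbsz /dev/sdX
----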

To activate compression (see section <<zfs_compression,Compression in ZFS>>):

----
# zfs set compression=lz4 <pool>
----
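
You can verify the active compression setting afterwards, e.g.:

----
# zfs get compression <pool>
----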

.Create a new pool with RAID-0

Minimum 1 Disk

----
# zpool create -f -o ashift=12 <pool> <device1> <device2>
----

.Create a new pool with RAID-1

Minimum 2 Disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2>
----

.Create a new pool with RAID-10

Minimum 4 Disks

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>
----

.Create a new pool with RAIDZ-1

Minimum 3 Disks

----
# zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>
----

.Create a new pool with RAIDZ-2

Minimum 4 Disks

----
# zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>
----
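
.Create a new pool with RAIDZ-3

Minimum 5 Disks (following the same pattern as the examples above):

----
# zpool create -f -o ashift=12 <pool> raidz3 <device1> <device2> <device3> <device4> <device5>
----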

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

For `<device>` it is also possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
----

.Create a new pool with log (ZIL)

It is possible to use a dedicated log drive partition to increase
the performance (use SSD).

For `<device>` it is also possible to use more devices, as shown in
"Create a new pool with RAID*".

----
# zpool create -f -o ashift=12 <pool> <device> log <log_device>
----

.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

----
# zpool add -f <pool> log <device-part1> cache <device-part2>
----

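You can check the resulting pool layout, including the `logs` and `cache`
sections, for example with:

----
# zpool status -v <pool>
----
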
.Changing a failed device

----
# zpool replace -f <pool> <old device> <new device>
----

.Changing a failed bootable device when using systemd-boot

----
# sgdisk <healthy bootable device> -R <new device>
# sgdisk -G <new device>
# zpool replace -f <pool> <old zfs partition> <new zfs partition>
# pve-efiboot-tool format <new disk's ESP>
# pve-efiboot-tool init <new disk's ESP>
----

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].


Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----

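The daemon is usually enabled and started automatically after installation; you
can check this with (assuming the systemd unit is named `zfs-zed`):

----
# systemctl status zfs-zed
----
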
To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails sent to `root` to the email address
configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.


Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8 GiB.

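The value is given in bytes (8 GiB = 8589934592 bytes). If you want to apply a
new limit to the running system without a reboot, you can also write it to the
module parameter directly, for example:

----
# echo "$((8 * 1024*1024*1024))" > /sys/module/zfs/parameters/zfs_arc_max
----
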
[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

----
# update-initramfs -u
----
====


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap-space created on a zvol may cause some problems, such as blocking the
server or generating a high IO load, often seen when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap device.
You can leave some space free for this purpose in the advanced options of the
installer. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

----
# sysctl -w vm.swappiness=10
----

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
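
Changes in `/etc/sysctl.conf` are applied on the next boot; to load them
immediately and verify the active value, you can run, for example:

----
# sysctl -p
# sysctl vm.swappiness
----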

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  disabled  local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE     SOURCE
tank  feature@encryption  enabled   local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited by
default by child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
with `zfs load-key`:

----
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----
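
Whether the key for a dataset is currently loaded can be checked via the
`keystatus` property, e.g.:

----
# zfs get keystatus tank/encrypted_data
----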

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile, it
is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands and the `Encryption` section from `man zfs` for more
details and advanced usage.


[[zfs_compression]]
Compression in ZFS
~~~~~~~~~~~~~~~~~~

When compression is enabled on a dataset, ZFS tries to compress all *new*
blocks before writing them and decompresses them on reading. Already
existing data will not be compressed retroactively.

You can enable compression with:

----
# zfs set compression=<algorithm> <dataset>
----

We recommend using the `lz4` algorithm, because it adds very little CPU
overhead. Other algorithms like `lzjb` and `gzip-N`, where `N` is an
integer from `1` (fastest) to `9` (best compression ratio), are also
available. Depending on the algorithm and how compressible the data is,
having compression enabled can even increase I/O performance.

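The achieved compression ratio can be inspected afterwards, e.g. via the
read-only `compressratio` property:

----
# zfs get compressratio <dataset>
----
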
You can disable compression at any time with:

----
# zfs set compression=off <dataset>
----

Again, only new blocks will be affected by this change.


ZFS Special Device
~~~~~~~~~~~~~~~~~~

Since version 0.8.0, ZFS supports `special` devices. A `special` device in a
pool is used to store metadata, deduplication tables, and optionally small
file blocks.

A `special` device can improve the speed of a pool consisting of slow spinning
hard disks with a lot of metadata changes. For example, workloads that involve
creating, updating or deleting a large number of files will benefit from the
presence of a `special` device. ZFS datasets can also be configured to store
whole small files on the `special` device, which can further improve the
performance. Use fast SSDs for the `special` device.

IMPORTANT: The redundancy of the `special` device should match that of the
pool, since the `special` device is a point of failure for the whole pool.

WARNING: Adding a `special` device to a pool cannot be undone!

.Create a pool with `special` device and RAID-1:

----
# zpool create -f -o ashift=12 <pool> mirror <device1> <device2> special mirror <device3> <device4>
----

.Add a `special` device to an existing pool with RAID-1:

----
# zpool add <pool> special mirror <device1> <device2>
----

ZFS datasets expose the `special_small_blocks=<size>` property. `size` can be
`0` to disable storing small file blocks on the `special` device, or a power of
two in the range from `512B` to `128K`. After setting the property, new file
blocks smaller than `size` will be allocated on the `special` device.

IMPORTANT: If the value for `special_small_blocks` is greater than or equal to
the `recordsize` (default `128K`) of the dataset, *all* data will be written to
the `special` device, so be careful!

Setting the `special_small_blocks` property on a pool will change the default
value of that property for all child ZFS datasets (for example, all containers
in the pool will opt in for small file blocks).

.Opt in for all files smaller than 4K blocks pool-wide:

----
# zfs set special_small_blocks=4K <pool>
----

.Opt in for small file blocks for a single dataset:

----
# zfs set special_small_blocks=4K <pool>/<filesystem>
----

.Opt out from small file blocks for a single dataset:

----
# zfs set special_small_blocks=0 <pool>/<filesystem>
----
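
The currently configured (or inherited) value can be queried for a whole pool
recursively, e.g.:

----
# zfs get -r special_small_blocks <pool>
----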