[[chapter_zfs]]
ZFS on Linux
------------
ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel
port of the ZFS file system is available as an optional file system,
and also as an additional selection for the root file system. There
is no need to compile ZFS modules manually - all packages are
included.

With ZFS, it is possible to achieve enterprise-grade features even on
low-budget hardware, as well as high-performance systems by leveraging
SSD caching or even SSD-only setups. ZFS can replace cost-intensive
hardware RAID cards at the price of moderate CPU and memory load,
combined with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various raid levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities

* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To prevent
data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware controller which has its
own cache management. ZFS needs to communicate directly with the disks. An
HBA adapter, or something like an LSI controller flashed in ``IT'' mode, is
the way to go.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` for the disks of that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this also
works with the `virtio` SCSI controller type).


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the sum
of the capacities of all disks. But RAID0 does not add any redundancy,
so the failure of a single drive makes the volume unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks of the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----
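
You can check that this storage is known and active with the `pvesm status`
command, which lists all configured storages together with their status and
usage:

----
# pvesm status
----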

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        rpool       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sda2    ONLINE       0     0     0
            sdb2    ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

Depending on whether the system is booted in EFI or legacy BIOS mode, the
{pve} installer sets up either `grub` or `systemd-boot` as the main bootloader.
See the chapter on xref:sysboot[{pve} host bootloaders] for details.

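To find out in which mode a given system was booted, you can for example
check whether the kernel exposes the EFI variables directory:

----
# test -d /sys/firmware/efi && echo "EFI mode" || echo "legacy BIOS mode"
----
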

ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The pool's `ashift`
value should correspond to the sector size of the underlying disk (the
sector size being 2 to the power of `ashift` bytes), or be larger.

 zpool create -f -o ashift=12 <pool> <device>
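
To pick a suitable `ashift`, you can first look up the sector sizes the disk
reports (`/dev/sdX` is a placeholder for your actual device):

----
# lsblk -o NAME,PHY-SEC,LOG-SEC /dev/sdX
----

A physical sector size of 512 bytes corresponds to `ashift=9`, one of 4096
bytes to `ashift=12` (2^12^ = 4096).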

To activate compression:

 zfs set compression=lz4 <pool>

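You can verify the setting, and later the achieved compression ratio, with
`zfs get` (`compressratio` is a read-only ZFS property):

----
# zfs get compression,compressratio <pool>
----
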
.Create a new pool with RAID-0

Minimum 1 Disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1

Minimum 2 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1

Minimum 3 Disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2

Minimum 4 Disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use an SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>

.Create a new pool with log (ZIL)

It is possible to use a dedicated log drive partition to increase
the performance (use an SSD).

As `<device>`, it is possible to use multiple devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

.Add cache and log to an existing pool

If you have a pool without cache and log, first partition the SSD into
two partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

 zpool add -f <pool> log <device-part1> cache <device-part2>

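Afterwards, the new `logs` and `cache` sections show up in `zpool status -v
<pool>`, and their utilization can be watched with `zpool iostat`:

----
# zpool iostat -v <pool>
----
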
.Changing a failed device

 zpool replace -f <pool> <old device> <new device>

.Changing a failed bootable device when using systemd-boot

 sgdisk <healthy bootable device> -R <new device>
 sgdisk -G <new device>
 zpool replace -f <pool> <old zfs partition> <new zfs partition>
 pve-efiboot-tool format <new disk's ESP>
 pve-efiboot-tool init <new disk's ESP>

Here, `sgdisk -R` replicates the partition table of the healthy device onto
the new device, and `sgdisk -G` then randomizes the disk and partition GUIDs
to avoid duplicates.

NOTE: `ESP` stands for EFI System Partition, which is set up as partition #2 on
bootable disks set up by the {pve} installer since version 5.4. For details, see
xref:sysboot_systemd_boot_setup[Setting up a new partition for use as synced ESP].


Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors. Newer ZFS packages ship the daemon in a separate package,
and you can install it using `apt-get`:

----
# apt-get install zfs-zed
----
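
The daemon runs as the `zfs-zed` service, which should be active after the
installation:

----
# systemctl status zfs-zed
----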

To activate the daemon, it is necessary to edit `/etc/zfs/zed.d/zed.rc` with
your favourite editor and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails addressed to `root` to the email
address configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.


Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for the ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB (8 * 2^30^ = 8589934592 bytes).
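
You can verify the configured limit and the current ARC size at runtime via
the module parameter and the ARC statistics:

----
# cat /sys/module/zfs/parameters/zfs_arc_max
# grep -E '^(size|c_max)' /proc/spl/kstat/zfs/arcstats
----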

[IMPORTANT]
====
If your root file system is ZFS, you must update your initramfs every
time this value changes:

 update-initramfs -u
====


[[zfs_swap]]
SWAP on ZFS
~~~~~~~~~~~

Swap space created on a zvol can cause problems, such as blocking the
server or generating a high IO load, often observed when starting a backup
to an external storage.

We strongly recommend using enough memory, so that you normally do not
run into low-memory situations. Should you need or want to add swap, it is
preferred to create a partition on a physical disk and use it as a swap
device. You can leave some space free for this purpose in the advanced
options of the installer (see the sketch after the swappiness table below).
Additionally, you can lower the ``swappiness'' value. A good value for
servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------

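You can confirm the currently active value at any time with:

----
# sysctl vm.swappiness
----
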
.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid
an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without
disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to
improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================
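
If you left space for a swap partition in the installer as suggested above, a
minimal sketch for activating it could look like this (`/dev/sdd2` is a
hypothetical device name, substitute your own partition):

----
# mkswap /dev/sdd2
# swapon /dev/sdd2
# echo '/dev/sdd2 none swap sw 0 0' >> /etc/fstab
----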

[[zfs_encryption]]
Encrypted ZFS Datasets
~~~~~~~~~~~~~~~~~~~~~~

ZFS on Linux version 0.8.0 introduced support for native encryption of
datasets. After an upgrade from previous ZFS on Linux versions, the encryption
feature can be enabled per pool:

----
# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  disabled         local

# zpool set feature@encryption=enabled tank

# zpool get feature@encryption tank
NAME  PROPERTY            VALUE            SOURCE
tank  feature@encryption  enabled          local
----

WARNING: There is currently no support for booting from pools with encrypted
datasets using Grub, and only limited support for automatically unlocking
encrypted datasets on boot. Older versions of ZFS without encryption support
will not be able to decrypt stored data.

NOTE: It is recommended to either unlock storage datasets manually after
booting, or to write a custom unit to pass the key material needed for
unlocking on boot to `zfs load-key`.

WARNING: Establish and test a backup procedure before enabling encryption of
production data. If the associated key material/passphrase/keyfile has been
lost, accessing the encrypted data is no longer possible.

Encryption needs to be set up when creating datasets/zvols, and is inherited
by default by child datasets. For example, to create an encrypted dataset
`tank/encrypted_data` and configure it as storage in {pve}, run the following
commands:

----
# zfs create -o encryption=on -o keyformat=passphrase tank/encrypted_data
Enter passphrase:
Re-enter passphrase:

# pvesm add zfspool encrypted_zfs -pool tank/encrypted_data
----

All guest volumes/disks created on this storage will be encrypted with the
shared key material of the parent dataset.

To actually use the storage, the associated key material needs to be loaded
with `zfs load-key`:

----
# zfs load-key tank/encrypted_data
Enter passphrase for 'tank/encrypted_data':
----

It is also possible to use a (random) keyfile instead of prompting for a
passphrase by setting the `keylocation` and `keyformat` properties, either at
creation time or with `zfs change-key` on existing datasets:

----
# dd if=/dev/urandom of=/path/to/keyfile bs=32 count=1

# zfs change-key -o keyformat=raw -o keylocation=file:///path/to/keyfile tank/encrypted_data
----

WARNING: When using a keyfile, special care needs to be taken to secure the
keyfile against unauthorized access or accidental loss. Without the keyfile,
it is not possible to access the plaintext data!

A guest volume created underneath an encrypted dataset will have its
`encryptionroot` property set accordingly. The key material only needs to be
loaded once per encryptionroot to be available to all encrypted datasets
underneath it.
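
To check which encryptionroot a dataset belongs to, and whether its key is
currently loaded, you can query the corresponding properties:

----
# zfs get encryptionroot,keystatus tank/encrypted_data
----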

See the `encryptionroot`, `encryption`, `keylocation`, `keyformat` and
`keystatus` properties, the `zfs load-key`, `zfs unload-key` and `zfs
change-key` commands, and the `Encryption` section of `man zfs` for more
details and advanced usage.