ZFS on Linux
------------
include::attributes.txt[]

ifdef::wiki[]
:pve-toplevel:
endif::wiki[]

ZFS is a combined file system and logical volume manager designed by
Sun Microsystems. Starting with {pve} 3.4, the native Linux kernel
port of the ZFS file system is available as an optional file system
and as an additional choice for the root file system. There is no
need to compile ZFS modules manually - all packages are included.

By using ZFS, it is possible to achieve maximum enterprise features
with low budget hardware, and also to build high performance systems
by leveraging SSD caching or even SSD-only setups. ZFS can replace
expensive hardware RAID cards with moderate CPU and memory load,
combined with easy management.

.General ZFS advantages

* Easy configuration and management with {pve} GUI and CLI.

* Reliable

* Protection against data corruption

* Data compression on file system level

* Snapshots

* Copy-on-write clone

* Various RAID levels: RAID0, RAID1, RAID10, RAIDZ-1, RAIDZ-2 and RAIDZ-3

* Can use SSD for cache

* Self healing

* Continuous integrity checking

* Designed for high storage capacities
* Asynchronous replication over network

* Open Source

* Encryption

* ...


Hardware
~~~~~~~~

ZFS depends heavily on memory, so you need at least 8GB to start. In
practice, use as much as you can get for your hardware/budget. To
prevent data corruption, we recommend the use of high quality ECC RAM.

If you use a dedicated cache and/or log disk, you should use an
enterprise class SSD (e.g. Intel SSD DC S3700 Series). This can
increase the overall performance significantly.

IMPORTANT: Do not use ZFS on top of a hardware RAID controller which
has its own cache management. ZFS needs to communicate directly with
the disks. An HBA adapter is the way to go, or something like an LSI
controller flashed in ``IT'' mode.

If you are experimenting with an installation of {pve} inside a VM
(Nested Virtualization), don't use `virtio` disks for that VM,
since they are not supported by ZFS. Use IDE or SCSI instead (this
also works with the `virtio` SCSI controller type).
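
For example, assuming the outer test VM has ID `100` and a storage
named `local-lvm` exists (both placeholders), a 32GB SCSI disk using
the `virtio` SCSI controller type could be added like this:

 qm set 100 --scsihw virtio-scsi-pci --scsi0 local-lvm:32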


Installation as Root File System
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When you install using the {pve} installer, you can choose ZFS for the
root file system. You need to select the RAID type at installation
time:

[horizontal]
RAID0:: Also called ``striping''. The capacity of such a volume is the
sum of the capacities of all disks. But RAID0 does not add any
redundancy, so the failure of a single drive makes the volume
unusable.

RAID1:: Also called ``mirroring''. Data is written identically to all
disks. This mode requires at least 2 disks with the same size. The
resulting capacity is that of a single disk.

RAID10:: A combination of RAID0 and RAID1. Requires at least 4 disks.

RAIDZ-1:: A variation on RAID-5, single parity. Requires at least 3 disks.

RAIDZ-2:: A variation on RAID-5, double parity. Requires at least 4 disks.

RAIDZ-3:: A variation on RAID-5, triple parity. Requires at least 5 disks.

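As a rough rule of thumb (ignoring padding and metadata overhead), a
RAIDZ vdev with 'n' disks of size 's' and 'p' parity disks provides
about (n - p) * s of usable capacity. For example, a RAIDZ-2 vdev
built from six 4TB disks yields roughly (6 - 2) * 4TB = 16TB.
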
The installer automatically partitions the disks, creates a ZFS pool
called `rpool`, and installs the root file system on the ZFS subvolume
`rpool/ROOT/pve-1`.

Another subvolume called `rpool/data` is created to store VM
images. In order to use that with the {pve} tools, the installer
creates the following configuration entry in `/etc/pve/storage.cfg`:

----
zfspool: local-zfs
	pool rpool/data
	sparse
	content images,rootdir
----
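
You can verify that this storage entry is known and active with the
{pve} storage manager CLI (`local-zfs` should show up in the list):

 pvesm status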

After installation, you can view your ZFS pool status using the
`zpool` command:

----
# zpool status
  pool: rpool
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	rpool       ONLINE       0     0     0
	  mirror-0  ONLINE       0     0     0
	    sda2    ONLINE       0     0     0
	    sdb2    ONLINE       0     0     0
	  mirror-1  ONLINE       0     0     0
	    sdc     ONLINE       0     0     0
	    sdd     ONLINE       0     0     0

errors: No known data errors
----

The `zfs` command is used to configure and manage your ZFS file
systems. The following command lists all file systems after
installation:

----
# zfs list
NAME               USED  AVAIL  REFER  MOUNTPOINT
rpool             4.94G  7.68T    96K  /rpool
rpool/ROOT         702M  7.68T    96K  /rpool/ROOT
rpool/ROOT/pve-1   702M  7.68T   702M  /
rpool/data          96K  7.68T    96K  /rpool/data
rpool/swap        4.25G  7.69T    64K  -
----


Bootloader
~~~~~~~~~~

The default ZFS disk partitioning scheme does not use the first 2048
sectors. This gives enough room to install a GRUB boot partition. The
{pve} installer automatically allocates that space, and installs the
GRUB boot loader there. If you use a redundant RAID setup, it installs
the boot loader on all disks required for booting. So you can boot
even if some disks fail.

NOTE: It is not possible to use ZFS as root file system with UEFI
boot.


ZFS Administration
~~~~~~~~~~~~~~~~~~

This section gives you some usage examples for common tasks. ZFS
itself is really powerful and provides many options. The main commands
to manage ZFS are `zfs` and `zpool`. Both commands come with great
manual pages, which can be read with:

----
# man zpool
# man zfs
----

.Create a new zpool

To create a new pool, at least one disk is needed. The `ashift` value
should match the sector size of the underlying disk, i.e. 2 to the
power of `ashift` should be equal to or larger than the disk's sector
size.

 zpool create -f -o ashift=12 <pool> <device>
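
To check the sector size of a disk beforehand (`ashift=12`
corresponds to 4096 byte sectors), you can for example query the
block layer:

 lsblk -o NAME,PHY-SEC,LOG-SEC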

To activate compression:

 zfs set compression=lz4 <pool>
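
The achieved compression ratio can be checked later per pool or
dataset:

 zfs get compressratio <pool>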

.Create a new pool with RAID-0

Minimum 1 disk

 zpool create -f -o ashift=12 <pool> <device1> <device2>

.Create a new pool with RAID-1

Minimum 2 disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2>

.Create a new pool with RAID-10

Minimum 4 disks

 zpool create -f -o ashift=12 <pool> mirror <device1> <device2> mirror <device3> <device4>

.Create a new pool with RAIDZ-1

Minimum 3 disks

 zpool create -f -o ashift=12 <pool> raidz1 <device1> <device2> <device3>

.Create a new pool with RAIDZ-2

Minimum 4 disks

 zpool create -f -o ashift=12 <pool> raidz2 <device1> <device2> <device3> <device4>

.Create a new pool with cache (L2ARC)

It is possible to use a dedicated cache drive partition to increase
the performance (use SSD).

As `<device>` it is possible to use more devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> cache <cache_device>
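
Whether the cache device is actually being used can be checked with
`zpool iostat`; the cache device shows up in its own section of the
output:

 zpool iostat -v <pool>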

.Create a new pool with log (ZIL)

It is possible to use a dedicated drive partition as log device to
increase the performance (use SSD).

As `<device>` it is possible to use more devices, as shown in
"Create a new pool with RAID*".

 zpool create -f -o ashift=12 <pool> <device> log <log_device>

.Add cache and log to an existing pool

If you have a pool without cache and log, you can add them later.
First partition the SSD into 2 partitions with `parted` or `gdisk`.

IMPORTANT: Always use GPT partition tables.

The maximum size of a log device should be about half the size of
physical memory, so this is usually quite small. The rest of the SSD
can be used as cache.

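For example, on a host with 16GB of RAM and a (hypothetical) spare
SSD `/dev/sdf`, you could create an 8GB log partition and use the
rest as cache with `sgdisk` (part of the `gdisk` package):

 sgdisk -n 1:0:+8G -n 2:0:0 /dev/sdf

Then add both partitions to the pool:
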
 zpool add -f <pool> log <device-part1> cache <device-part2>

.Changing a failed device

 zpool replace -f <pool> <old-device> <new-device>
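
The pool will then resilver the data onto the new device. The
progress can be watched with:

 zpool status -v <pool>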


Activate E-Mail Notification
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

ZFS comes with an event daemon, which monitors events generated by the
ZFS kernel module. The daemon can also send emails on ZFS events like
pool errors.

To activate the daemon it is necessary to edit `/etc/zfs/zed.d/zed.rc` with your
favourite editor, and uncomment the `ZED_EMAIL_ADDR` setting:

--------
ZED_EMAIL_ADDR="root"
--------

Please note that {pve} forwards mails addressed to `root` to the email
address configured for the root user.

IMPORTANT: The only setting that is required is `ZED_EMAIL_ADDR`. All
other settings are optional.
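
To verify that notifications arrive, one option is to temporarily
enable the `ZED_NOTIFY_VERBOSE` setting in the same file (assuming
your `zed` version provides it, so that non-error events are also
mailed), start a manual scrub, and wait for the scrub-finished mail:

 zpool scrub rpool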


Limit ZFS Memory Usage
~~~~~~~~~~~~~~~~~~~~~~

It is good to use at most 50 percent (which is the default) of the
system memory for ZFS ARC, to prevent performance degradation of the
host. Use your preferred editor to change the configuration in
`/etc/modprobe.d/zfs.conf` and insert:

--------
options zfs zfs_arc_max=8589934592
--------

This example setting limits the usage to 8GB.
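
The value is given in bytes (8 * 1024 * 1024 * 1024 = 8589934592). On
a running system, the limit can also be changed on the fly by writing
to the module parameter (assuming the `zfs` module is loaded):

 echo "$((8 * 1024 * 1024 * 1024))" > /sys/module/zfs/parameters/zfs_arc_max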

[IMPORTANT]
====
If your root file system is ZFS you must update your initramfs every
time this value changes:

 update-initramfs -u
====


.SWAP on ZFS

SWAP on ZFS on Linux may cause problems, like blocking the server or
generating a high IO load, often seen when starting a backup to
external storage.

We strongly recommend using enough memory, so that you normally do not
run into low memory situations. Additionally, you can lower the
``swappiness'' value. A good value for servers is 10:

 sysctl -w vm.swappiness=10

To make the swappiness persistent, open `/etc/sysctl.conf` with
an editor of your choice and add the following line:

--------
vm.swappiness = 10
--------
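
The currently active value can be checked at any time with:

 sysctl vm.swappiness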

.Linux kernel `swappiness` parameter values
[width="100%",cols="<m,2d",options="header"]
|===========================================================
| Value               | Strategy
| vm.swappiness = 0   | The kernel will swap only to avoid an 'out of memory' condition
| vm.swappiness = 1   | Minimum amount of swapping without disabling it entirely.
| vm.swappiness = 10  | This value is sometimes recommended to improve performance when sufficient memory exists in a system.
| vm.swappiness = 60  | The default value.
| vm.swappiness = 100 | The kernel will swap aggressively.
|===========================================================