'\" te
.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the ZFS module.

.SS "Module parameters"
.sp
.LP

.sp
.ne 2
.na
\fBignore_hole_birth\fR (int)
.ad
.RS 12n
When set, the hole_birth optimization will not be used, and all holes will
always be sent on zfs send. Useful if you suspect your datasets are affected
by a bug in hole_birth.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_again\fR (int)
.ad
.RS 12n
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
short as possible.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_min_ms\fR (ulong)
.ad
.RS 12n
Min feed interval in milliseconds. Requires \fBl2arc_feed_again=1\fR and only
applies in that case.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_secs\fR (ulong)
.ad
.RS 12n
Seconds between L2ARC writing.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom\fR (ulong)
.ad
.RS 12n
How far through the ARC lists to search for L2ARC cacheable content, expressed
as a multiplier of \fBl2arc_write_max\fR.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom_boost\fR (ulong)
.ad
.RS 12n
Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
successfully compressed before writing. A value of 100 disables this feature.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_nocompress\fR (int)
.ad
.RS 12n
Skip compressing L2ARC buffers.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBl2arc_noprefetch\fR (int)
.ad
.RS 12n
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_norw\fR (int)
.ad
.RS 12n
No reads during writes.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBl2arc_write_boost\fR (ulong)
.ad
.RS 12n
Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
while they remain cold.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_write_max\fR (ulong)
.ad
.RS 12n
Max write bytes per interval.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_aliquot\fR (ulong)
.ad
.RS 12n
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
.sp
Default value: \fB524,288\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_bias_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_segment_weight_enabled\fR (int)
.ad
.RS 12n
Enable/disable segment-based metaslab selection.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_switch_threshold\fR (int)
.ad
.RS 12n
When using segment-based metaslab selection, continue allocating
from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
worth of buckets have been exhausted.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_load\fR (int)
.ad
.RS 12n
Load all metaslabs during pool import.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_unload\fR (int)
.ad
.RS 12n
Prevent metaslabs from being unloaded.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_fragmentation_factor_enabled\fR (int)
.ad
.RS 12n
Enable use of the fragmentation metric in computing metaslab weights.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslabs_per_vdev\fR (int)
.ad
.RS 12n
When a vdev is added, it will be divided into approximately (but no more than)
this number of metaslabs.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_preload_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group preloading.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_lba_weighting_enabled\fR (int)
.ad
.RS 12n
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth, as is typically the case on a modern constant
angular velocity disk drive.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBspa_config_path\fR (charp)
.ad
.RS 12n
SPA config file.
.sp
Default value: \fB/etc/zfs/zpool.cache\fR.
.RE

.sp
.ne 2
.na
\fBspa_asize_inflation\fR (int)
.ad
.RS 12n
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
.sp
Default value: \fB24\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_data\fR (int)
.ad
.RS 12n
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_metadata\fR (int)
.ad
.RS 12n
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_maxinflight\fR (int)
.ad
.RS 12n
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
.sp
Default value: \fB10000\fR.
.RE

.sp
.ne 2
.na
\fBspa_slop_shift\fR (int)
.ad
.RS 12n
Normally, we don't allow the last 3.1% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
.sp
Default value: \fB5\fR.
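.sp
As a worked example: with the default \fBspa_slop_shift\fR of 5 the reserved
fraction is 1/2^5 = 1/32, about 3.1% of the pool; raising it to 6 would halve
the reserve to 1/64, about 1.6%.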
.RE

.sp
.ne 2
.na
\fBzfetch_array_rd_sz\fR (ulong)
.ad
.RS 12n
If prefetching is enabled, disable prefetching for reads larger than this size.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_distance\fR (uint)
.ad
.RS 12n
Max bytes to prefetch per stream.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_streams\fR (uint)
.ad
.RS 12n
Max number of streams per zfetch (prefetch streams per file).
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_min_sec_reap\fR (uint)
.ad
.RS 12n
Min time before an active prefetch stream can be reclaimed.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit\fR (ulong)
.ad
.RS 12n
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a floor on the amount of dnode metadata, and defaults to 0,
which indicates that a percentage of the ARC meta buffers, determined by
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.

See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
than in response to overall demand for non-metadata.

.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC meta buffers that can be consumed by dnodes.
.sp
See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_reduce_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.

.sp
Default value: \fB10% of the number of dnodes in the ARC\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_average_blocksize\fR (int)
.ad
.RS 12n
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.

.sp
Default value: \fB8192\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_evict_batch_limit\fR (int)
.ad
.RS 12n
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_grow_retry\fR (int)
.ad
.RS 12n
After a memory pressure event the ARC will wait this many seconds before trying
to resume growth.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_lotsfree_percent\fR (int)
.ad
.RS 12n
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_max\fR (ulong)
.ad
.RS 12n
Max size of ARC in bytes. If set to 0 then it will consume 1/2 of system
RAM. This value must be at least 67108864 (64 megabytes).
.sp
This value can be changed dynamically with some caveats. It cannot be set back
to 0 while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.sp
Default value: \fB0\fR.
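.sp
For example, to cap the ARC at 4 GiB (the value is illustrative only):
.sp
.nf
# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296
.fi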
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit\fR (ulong)
.ad
.RS 12n
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage of the ARC,
determined by \fBzfs_arc_meta_limit_percent\fR, may be used for meta data.
.sp
This value may be changed dynamically, except that it cannot be set back to 0
to request a percentage of the ARC; it must be set to an explicit value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC buffers that can be used for meta data.

See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.

.sp
Default value: \fB75\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_min\fR (ulong)
.ad
.RS 12n
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_prune\fR (int)
.ad
.RS 12n
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
.sp
Default value: \fB10,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
.ad
.RS 12n
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
.sp
Default value: \fB4096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min\fR (ulong)
.ad
.RS 12n
Min arc size.
.sp
Default value: \fB100\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prefetch_lifespan\fR (int)
.ad
.RS 12n
Minimum time prefetched blocks are locked in the ARC, specified in jiffies.
A value of 0 will default to 1 second.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_num_sublists_per_state\fR (int)
.ad
.RS 12n
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state.
.sp
Default value: \fB1\fR or the number of online CPUs, whichever is greater.
.RE

.sp
.ne 2
.na
\fBzfs_arc_overflow_shift\fR (int)
.ad
.RS 12n
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".

The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (about 0.4%) of the target size.

When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_min_shift\fR (int)
.ad
.RS 12n
arc_c shift to calculate min/max arc_p.
.sp
Default value: \fB4\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_aggressive_disable\fR (int)
.ad
.RS 12n
Disable aggressive arc_p growth.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_dampener_disable\fR (int)
.ad
.RS 12n
Disable arc_p adapt dampener.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_arc_shrink_shift\fR (int)
.ad
.RS 12n
log2(fraction of ARC to reclaim).
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_sys_free\fR (ulong)
.ad
.RS 12n
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_autoimport_disable\fR (int)
.ad
.RS 12n
Disable pool import at module load by ignoring the cache file (typically
\fB/etc/zfs/zpool.cache\fR).
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_enable\fR (int)
.ad
.RS 12n
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled; to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
.sp
Default value: \fB0\fR.
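.sp
For example, to enable the log, read its contents, and then clear it:
.sp
.nf
# echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg
.fi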
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_maxsize\fR (int)
.ad
.RS 12n
The maximum size in bytes of the internal ZFS debug log.
.sp
Default value: \fB4M\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbuf_state_index\fR (int)
.ad
.RS 12n
This feature is currently unused. It is normally used for controlling what
reporting is available under /proc/spl/kstat/zfs.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_enabled\fR (int)
.ad
.RS 12n
Enable deadman timer. See description below.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_synctime_ms\fR (ulong)
.ad
.RS 12n
Expiration time in milliseconds. This value has two meanings. First it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Secondly, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
in a zevent being logged.
.sp
Default value: \fB1,000,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dedup_prefetch\fR (int)
.ad
.RS 12n
Enable prefetching dedup-ed blocks.
.sp
Use \fB1\fR for yes and \fB0\fR to disable (default).
.RE

.sp
.ne 2
.na
\fBzfs_delay_min_dirty_percent\fR (int)
.ad
.RS 12n
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB60\fR.
.RE

.sp
.ne 2
.na
\fBzfs_delay_scale\fR (int)
.ad
.RS 12n
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
.sp
Default value: \fB500,000\fR.
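.sp
As a worked example: for a pool that can sustain roughly 2,000 write
operations per second, 1,000,000,000 / 2,000 = 500,000, which is the
default value.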
.RE

.sp
.ne 2
.na
\fBzfs_delete_blocks\fR (ulong)
.ad
.RS 12n
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously, while smaller files are deleted synchronously. Decreasing
this value will reduce the time spent in an unlink(2) system call at the
expense of a longer delay before the freed space is available.
.sp
Default value: \fB20,480\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max\fR (int)
.ad
.RS 12n
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
.sp
Default value: 25% of physical RAM.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max_percent\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_percent\fR (int)
.ad
.RS 12n
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_sync\fR (int)
.ad
.RS 12n
Start syncing out a transaction group if there is at least this much dirty data.
.sp
Default value: \fB67,108,864\fR.
.RE

.sp
.ne 2
.na
\fBzfs_fletcher_4_impl\fR (string)
.ad
.RS 12n
Select a fletcher 4 implementation.
.sp
Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
\fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
set extensions to be available and will only appear if ZFS detects that they are
present at runtime. If multiple implementations of fletcher 4 are available,
the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
results in the original, CPU based calculation being used. Selecting any option
other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
the respective CPU instruction set being used.
.sp
Default value: \fBfastest\fR.
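.sp
A minimal sketch of inspecting and selecting an implementation at runtime;
the implementations listed will vary with the CPU, and the output shown is
illustrative:
.sp
.nf
# cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
[fastest] scalar sse2 ssse3 avx2
# echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl
.fi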
.RE

.sp
.ne 2
.na
\fBzfs_free_bpobj_enabled\fR (int)
.ad
.RS 12n
Enable/disable the processing of the free_bpobj object.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_free_max_blocks\fR (ulong)
.ad
.RS 12n
Maximum number of blocks freed in a single txg.
.sp
Default value: \fB100,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB3\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB60\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB30\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_max_active\fR (int)
.ad
.RS 12n
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_max_active\fR (int)
.ad
.RS 12n
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_min_active\fR (int)
.ad
.RS 12n
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_queue_depth_pct\fR (int)
.ad
.RS 12n
The queue depth percentage for each top-level virtual device.
Used in conjunction with \fBzfs_vdev_async_write_max_active\fR.
.sp
Default value: \fB1000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_disable_dup_eviction\fR (int)
.ad
.RS 12n
Disable duplicate buffer eviction.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_expire_snapshot\fR (int)
.ad
.RS 12n
Seconds to expire .zfs/snapshot.
.sp
Default value: \fB300\fR.
.RE

.sp
.ne 2
.na
\fBzfs_admin_snapshot\fR (int)
.ad
.RS 12n
Allow the creation, removal, or renaming of entries in the .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled
by default.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_flags\fR (int)
.ad
.RS 12n
Set additional debugging flags. The following flags may be bitwise-or'd
together.
.sp
.TS
box;
rB lB
lB lB
r l.
Value	Symbolic Name
	Description
_
1	ZFS_DEBUG_DPRINTF
	Enable dprintf entries in the debug log.
_
2	ZFS_DEBUG_DBUF_VERIFY *
	Enable extra dbuf verifications.
_
4	ZFS_DEBUG_DNODE_VERIFY *
	Enable extra dnode verifications.
_
8	ZFS_DEBUG_SNAPNAMES
	Enable snapshot name verification.
_
16	ZFS_DEBUG_MODIFY
	Check for illegally modified ARC buffers.
_
32	ZFS_DEBUG_SPA
	Enable spa_dbgmsg entries in the debug log.
_
64	ZFS_DEBUG_ZIO_FREE
	Enable verification of block frees.
_
128	ZFS_DEBUG_HISTOGRAM_VERIFY
	Enable extra spacemap histogram verifications.
.TE
.sp
* Requires debug build.
.sp
Default value: \fB0\fR.
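.sp
For example, to enable snapshot name verification (8) together with checking
for illegally modified ARC buffers (16), bitwise-or the two values:
8 | 16 = 24.
.sp
.nf
# echo 24 > /sys/module/zfs/parameters/zfs_flags
.fi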
.RE

.sp
.ne 2
.na
\fBzfs_free_leak_on_eio\fR (int)
.ad
.RS 12n
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.

The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all i/os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually
fairly rare.

Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_free_min_time_ms\fR (int)
.ad
.RS 12n
During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
of this much time will be spent working on freeing blocks per txg.
.sp
Default value: \fB1,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_immediate_write_sz\fR (long)
.ad
.RS 12n
Largest data block to write to the ZIL. Larger blocks will be treated as if the
dataset being written to had the property setting \fBlogbias=throughput\fR.
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_recordsize\fR (int)
.ad
.RS 12n
We currently support block sizes from 512 bytes to 16MB. The benefits of
larger blocks, and thus larger IO, need to be weighed against the cost of
COWing a giant block to modify one byte. Additionally, very large blocks
can have an impact on i/o latency, and also potentially on the memory
allocator. Therefore, we do not allow the recordsize to be set larger than
zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
this tunable, and pools with larger blocks can always be imported and used,
regardless of this setting.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_mdcomp_disable\fR (int)
.ad
.RS 12n
Disable meta data compression.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_fragmentation_threshold\fR (int)
.ad
.RS 12n
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
.sp
Default value: \fB70\fR.
.RE

.sp
.ne 2
.na
\fBzfs_mg_fragmentation_threshold\fR (int)
.ad
.RS 12n
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
.sp
Default value: \fB85\fR.
.RE

.sp
.ne 2
.na
\fBzfs_mg_noalloc_threshold\fR (int)
.ad
.RS 12n
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.

This parameter allows one to deal with pools having heavily imbalanced
vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_no_scrub_io\fR (int)
.ad
.RS 12n
Set for no scrub I/O. This results in scrubs not actually scrubbing data and
simply doing a metadata crawl of the pool instead.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_no_scrub_prefetch\fR (int)
.ad
.RS 12n
Set to disable block prefetching for scrubs.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_nocacheflush\fR (int)
.ad
.RS 12n
Disable cache flush operations on disks when writing. Beware, this may cause
corruption if disks re-order writes.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_nopwrite_enabled\fR (int)
.ad
.RS 12n
Enable NOP writes.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_pd_bytes_max\fR (int)
.ad
.RS 12n
The number of bytes which should be prefetched during a pool traversal
(eg: \fBzfs send\fR or other data crawling operations).
.sp
Default value: \fB52,428,800\fR.
.RE

.sp
.ne 2
.na
\fBzfs_prefetch_disable\fR (int)
.ad
.RS 12n
This tunable disables predictive prefetch. Note that it leaves "prescient"
prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
prescient prefetch never issues i/os that end up not being needed, so it
can't hurt performance.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_read_chunk_size\fR (long)
.ad
.RS 12n
Bytes to read per chunk.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_read_history\fR (int)
.ad
.RS 12n
Historic statistics for the last N reads will be available in
\fB/proc/spl/kstat/zfs/POOLNAME/reads\fR.
.sp
Default value: \fB0\fR (no data is kept).
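.sp
For example, to record the last 100 reads of a pool named \fBtank\fR (the
pool name is illustrative):
.sp
.nf
# echo 100 > /sys/module/zfs/parameters/zfs_read_history
# cat /proc/spl/kstat/zfs/tank/reads
.fi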
29714574 TF |
1458 | .RE |
1459 | ||
1460 | .sp | |
1461 | .ne 2 | |
1462 | .na | |
1463 | \fBzfs_read_history_hits\fR (int) | |
1464 | .ad | |
1465 | .RS 12n | |
1466 | Include cache hits in read history | |
1467 | .sp | |
1468 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
1469 | .RE | |
1470 | ||
1471 | .sp | |
1472 | .ne 2 | |
1473 | .na | |
1474 | \fBzfs_recover\fR (int) | |
1475 | .ad | |
1476 | .RS 12n | |
1477 | Set to attempt to recover from fatal errors. This should only be used as a | |
1478 | last resort, as it typically results in leaked space, or worse. | |
1479 | .sp | |
1480 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
1481 | .RE | |
1482 | ||
1483 | .sp | |
1484 | .ne 2 | |
1485 | .na | |
1486 | \fBzfs_resilver_delay\fR (int) | |
1487 | .ad | |
1488 | .RS 12n | |
27b293be TC |
1489 | Number of ticks to delay prior to issuing a resilver I/O operation when |
1490 | a non-resilver or non-scrub I/O operation has occurred within the past | |
1491 | \fBzfs_scan_idle\fR ticks. | |
29714574 TF |
1492 | .sp |
1493 | Default value: \fB2\fR. | |
1494 | .RE | |
1495 | ||
1496 | .sp | |
1497 | .ne 2 | |
1498 | .na | |
1499 | \fBzfs_resilver_min_time_ms\fR (int) | |
1500 | .ad | |
1501 | .RS 12n | |
83426735 D |
1502 | Resilvers are processed by the sync thread. While resilvering it will spend |
1503 | at least this much time working on a resilver between txg flushes. | |
29714574 TF |
1504 | .sp |
1505 | Default value: \fB3,000\fR. | |
1506 | .RE | |
1507 | ||
1508 | .sp | |
1509 | .ne 2 | |
1510 | .na | |
1511 | \fBzfs_scan_idle\fR (int) | |
1512 | .ad | |
1513 | .RS 12n | |
27b293be TC |
1514 | Idle window in clock ticks. During a scrub or a resilver, if |
1515 | a non-scrub or non-resilver I/O operation has occurred during this | |
1516 | window, the next scrub or resilver operation is delayed by, respectively | |
1517 | \fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks. | |
29714574 TF |
1518 | .sp |
1519 | Default value: \fB50\fR. | |
1520 | .RE | |
1521 | ||
1522 | .sp | |
1523 | .ne 2 | |
1524 | .na | |
1525 | \fBzfs_scan_min_time_ms\fR (int) | |
1526 | .ad | |
1527 | .RS 12n | |
83426735 D |
1528 | Scrubs are processed by the sync thread. While scrubbing it will spend |
1529 | at least this much time working on a scrub between txg flushes. | |
29714574 TF |
1530 | .sp |
1531 | Default value: \fB1,000\fR. | |
1532 | .RE | |
1533 | ||
1534 | .sp | |
1535 | .ne 2 | |
1536 | .na | |
1537 | \fBzfs_scrub_delay\fR (int) | |
1538 | .ad | |
1539 | .RS 12n | |
27b293be TC |
1540 | Number of ticks to delay prior to issuing a scrub I/O operation when |
1541 | a non-scrub or non-resilver I/O operation has occurred within the past | |
1542 | \fBzfs_scan_idle\fR ticks. | |
29714574 TF |
1543 | .sp |
1544 | Default value: \fB4\fR. | |
1545 | .RE | |
1546 | ||
fd8febbd TF |
1547 | .sp |
1548 | .ne 2 | |
1549 | .na | |
1550 | \fBzfs_send_corrupt_data\fR (int) | |
1551 | .ad | |
1552 | .RS 12n | |
83426735 | 1553 | Allow sending of corrupt data (ignore read/checksum errors when sending data) |
fd8febbd TF |
1554 | .sp |
1555 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
1556 | .RE | |
1557 | ||
29714574 TF |
1558 | .sp |
1559 | .ne 2 | |
1560 | .na | |
1561 | \fBzfs_sync_pass_deferred_free\fR (int) | |
1562 | .ad | |
1563 | .RS 12n | |
83426735 | 1564 | Flushing of data to disk is done in passes. Defer frees starting in this pass |
29714574 TF |
1565 | .sp |
1566 | Default value: \fB2\fR. | |
1567 | .RE | |
1568 | ||
1569 | .sp | |
1570 | .ne 2 | |
1571 | .na | |
1572 | \fBzfs_sync_pass_dont_compress\fR (int) | |
1573 | .ad | |
1574 | .RS 12n | |
1575 | Don't compress starting in this pass | |
1576 | .sp | |
1577 | Default value: \fB5\fR. | |
1578 | .RE | |
1579 | ||
1580 | .sp | |
1581 | .ne 2 | |
1582 | .na | |
1583 | \fBzfs_sync_pass_rewrite\fR (int) | |
1584 | .ad | |
1585 | .RS 12n | |
83426735 | 1586 | Rewrite new block pointers starting in this pass |
29714574 TF |
1587 | .sp |
1588 | Default value: \fB2\fR. | |
1589 | .RE | |
1590 | ||
1591 | .sp | |
1592 | .ne 2 | |
1593 | .na | |
1594 | \fBzfs_top_maxinflight\fR (int) | |
1595 | .ad | |
1596 | .RS 12n | |
83426735 D |
1597 | Max concurrent I/Os per top-level vdev (mirrors or raidz arrays) allowed during |
1598 | scrub or resilver operations. | |
29714574 TF |
1599 | .sp |
1600 | Default value: \fB32\fR. | |
1601 | .RE | |
1602 | ||
1603 | .sp | |
1604 | .ne 2 | |
1605 | .na | |
1606 | \fBzfs_txg_history\fR (int) | |
1607 | .ad | |
1608 | .RS 12n | |
83426735 D |
1609 | Historic statistics for the last N txgs will be available in |
1610 | \fR/proc/spl/kstat/zfs/POOLNAME/txgs\fB | |
29714574 TF |
1611 | .sp |
1612 | Default value: \fB0\fR. | |
1613 | .RE | |
1614 | ||
29714574 TF |
1615 | .sp |
1616 | .ne 2 | |
1617 | .na | |
1618 | \fBzfs_txg_timeout\fR (int) | |
1619 | .ad | |
1620 | .RS 12n | |
83426735 | 1621 | Flush dirty data to disk at least every N seconds (maximum txg duration) |
29714574 TF |
1622 | .sp |
1623 | Default value: \fB5\fR. | |
1624 | .RE | |
1625 | ||
1626 | .sp | |
1627 | .ne 2 | |
1628 | .na | |
1629 | \fBzfs_vdev_aggregation_limit\fR (int) | |
1630 | .ad | |
1631 | .RS 12n | |
1632 | Max vdev I/O aggregation size | |
1633 | .sp | |
1634 | Default value: \fB131,072\fR. | |
1635 | .RE | |
1636 | ||
1637 | .sp | |
1638 | .ne 2 | |
1639 | .na | |
1640 | \fBzfs_vdev_cache_bshift\fR (int) | |
1641 | .ad | |
1642 | .RS 12n | |
1643 | Shift size to inflate reads too | |
1644 | .sp | |
83426735 | 1645 | Default value: \fB16\fR (effectively 65536). |
29714574 TF |
1646 | .RE |
1647 | ||
1648 | .sp | |
1649 | .ne 2 | |
1650 | .na | |
1651 | \fBzfs_vdev_cache_max\fR (int) | |
1652 | .ad | |
1653 | .RS 12n | |
83426735 D |
1654 | Inflate reads small than this value to meet the \fBzfs_vdev_cache_bshift\fR |
1655 | size. | |
1656 | .sp | |
1657 | Default value: \fB16384\fR. | |
29714574 TF |
1658 | .RE |
1659 | ||
1660 | .sp | |
1661 | .ne 2 | |
1662 | .na | |
1663 | \fBzfs_vdev_cache_size\fR (int) | |
1664 | .ad | |
1665 | .RS 12n | |
83426735 D |
1666 | Total size of the per-disk cache in bytes. |
1667 | .sp | |
1668 | Currently this feature is disabled as it has been found to not be helpful | |
1669 | for performance and in some cases harmful. | |
29714574 TF |
1670 | .sp |
1671 | Default value: \fB0\fR. | |
1672 | .RE | |
1673 | ||
29714574 TF |
1674 | .sp |
1675 | .ne 2 | |
1676 | .na | |
9f500936 | 1677 | \fBzfs_vdev_mirror_rotating_inc\fR (int) |
29714574 TF |
1678 | .ad |
1679 | .RS 12n | |
9f500936 | 1680 | A number by which the balancing algorithm increments the load calculation for |
1681 | the purpose of selecting the least busy mirror member when an I/O immediately | |
1682 | follows its predecessor on rotational vdevs for the purpose of making decisions | |
1683 | based on load. | |
29714574 | 1684 | .sp |
9f500936 | 1685 | Default value: \fB0\fR. |
1686 | .RE | |
1687 | ||
1688 | .sp | |
1689 | .ne 2 | |
1690 | .na | |
1691 | \fBzfs_vdev_mirror_rotating_seek_inc\fR (int) | |
1692 | .ad | |
1693 | .RS 12n | |
1694 | A number by which the balancing algorithm increments the load calculation for | |
1695 | the purpose of selecting the least busy mirror member when an I/O lacks | |
1696 | locality as defined by the zfs_vdev_mirror_rotating_seek_offset. I/Os within | |
1697 | this that are not immediately following the previous I/O are incremented by | |
1698 | half. | |
1699 | .sp | |
1700 | Default value: \fB5\fR. | |
1701 | .RE | |
1702 | ||
1703 | .sp | |
1704 | .ne 2 | |
1705 | .na | |
1706 | \fBzfs_vdev_mirror_rotating_seek_offset\fR (int) | |
1707 | .ad | |
1708 | .RS 12n | |
The maximum distance from the last queued I/O within which the balancing
algorithm considers an I/O to have locality.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1048576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_non_rotating_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/Os do not immediately follow one another.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation when
an I/O lacks locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR,
for the purpose of selecting the least busy mirror member. I/Os within this
range that do not immediately follow the previous I/O incur only half of this
increment.
.sp
Default value: \fB1\fR.
.RE
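.sp
Taken together, the five \fBzfs_vdev_mirror_*\fR tunables above weight each
mirror member's load estimate. A minimal sketch of that weighting, written in
Python as an illustration of the descriptions above (not the kernel
implementation; the function and dictionary names are hypothetical, and the
example increments mirror the documented defaults):
.sp
.nf
# Example increments taken from the defaults documented above.
INC = {"rotating": 0, "rotating_seek": 5, "rotating_seek_offset": 1048576,
       "non_rotating": 0, "non_rotating_seek": 1}

def load_increment(rotational, last_offset, offset):
    distance = abs(offset - last_offset)
    if rotational:
        if distance == 0:                    # immediately follows predecessor
            return INC["rotating"]
        if distance < INC["rotating_seek_offset"]:
            return INC["rotating_seek"] // 2 # local, but not adjacent
        return INC["rotating_seek"]          # no locality: full seek penalty
    return INC["non_rotating"] if distance == 0 else INC["non_rotating_seek"]
.fi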

.sp
.ne 2
.na
\fBzfs_vdev_read_gap_limit\fR (int)
.ad
.RS 12n
Aggregate read I/O operations if the gap on-disk between them is within this
threshold.
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scheduler\fR (charp)
.ad
.RS 12n
Set the Linux I/O scheduler on whole disk vdevs to this scheduler
.sp
Default value: \fBnoop\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_write_gap_limit\fR (int)
.ad
.RS 12n
Aggregate write I/O operations if the gap on-disk between them is within this
threshold.
.sp
Default value: \fB4,096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_raidz_impl\fR (string)
.ad
.RS 12n
Selects the raidz parity implementation to use.

Options marked (always) below may be selected on module load as they are
supported on all systems.
The remaining options may only be set after the module is loaded, as they
are available only if the implementations are compiled in and supported
on the running system.

Once the module is loaded, the content of
/sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
with the currently selected one enclosed in [].
Possible options are:
.sp
.nf
fastest        - (always) implementation selected using built-in benchmark
original       - (always) original raidz implementation
scalar         - (always) scalar raidz implementation
sse2           - implementation using SSE2 instruction set (64bit x86 only)
ssse3          - implementation using SSSE3 instruction set (64bit x86 only)
avx2           - implementation using AVX2 instruction set (64bit x86 only)
avx512f        - implementation using AVX512F instruction set (64bit x86 only)
avx512bw       - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only)
aarch64_neon   - implementation using NEON (Aarch64/64 bit ARMv8 only)
aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only)
.fi
.sp
Default value: \fBfastest\fR.
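.sp
A minimal sketch (assumes the zfs module is loaded) of reading the parameter
from Python; the bracketed token is the active implementation:
.sp
.nf
from pathlib import Path

tokens = Path("/sys/module/zfs/parameters/zfs_vdev_raidz_impl").read_text().split()
active = next(t.strip("[]") for t in tokens if t.startswith("["))
print("available:", [t.strip("[]") for t in tokens])
print("active:", active)
.fi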
.RE

.sp
.ne 2
.na
\fBzfs_zevent_cols\fR (int)
.ad
.RS 12n
When zevents are logged to the console, use this as the word wrap width.
.sp
Default value: \fB80\fR.
.RE

.sp
.ne 2
.na
\fBzfs_zevent_console\fR (int)
.ad
.RS 12n
Log events to the console
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_zevent_len_max\fR (int)
.ad
.RS 12n
Max event queue length. A value of 0 will result in a calculated value which
increases with the number of CPUs in the system (minimum 64 events). Events
in the queue can be viewed with the \fBzpool events\fR command.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzil_replay_disable\fR (int)
.ad
.RS 12n
Disable intent logging replay. Replay can be disabled to recover from a
corrupted ZIL.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzil_slog_limit\fR (ulong)
.ad
.RS 12n
Max commit bytes to separate log device
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzio_delay_max\fR (int)
.ad
.RS 12n
A zevent will be logged if a ZIO operation takes more than N milliseconds to
complete. Note that this is only a logging facility, not a timeout on
operations.
.sp
Default value: \fB30,000\fR.
.RE

.sp
.ne 2
.na
\fBzio_dva_throttle_enabled\fR (int)
.ad
.RS 12n
Throttle block allocations in the ZIO pipeline. This allows for
dynamic allocation distribution when devices are imbalanced.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzio_requeue_io_start_cut_in_line\fR (int)
.ad
.RS 12n
Prioritize requeued I/O
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzio_taskq_batch_pct\fR (uint)
.ad
.RS 12n
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
for I/O. These workers are responsible for I/O work such as compression and
checksum calculations. Fractional numbers of CPUs are rounded down (e.g. 75%
of 6 CPUs yields 4 workers).
.sp
The default value of 75 was chosen to avoid using all CPUs which can result in
latency issues and inconsistent application performance, especially when high
compression is enabled.
.sp
Default value: \fB75\fR.
.RE

.sp
.ne 2
.na
\fBzvol_inhibit_dev\fR (uint)
.ad
.RS 12n
Do not create zvol device nodes. This may slightly improve startup time on
systems with a very large number of zvols.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzvol_major\fR (uint)
.ad
.RS 12n
Major number for zvol block devices
.sp
Default value: \fB230\fR.
.RE

.sp
.ne 2
.na
\fBzvol_max_discard_blocks\fR (ulong)
.ad
.RS 12n
Discard (aka TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the \fBvolblocksize\fR property
of a zvol. With the default 8K \fBvolblocksize\fR, the default batch size
corresponds to 128MB per discard request.
.sp
Default value: \fB16,384\fR.
.RE

.sp
.ne 2
.na
\fBzvol_prefetch_bytes\fR (uint)
.ad
.RS 12n
When adding a zvol to the system, prefetch \fBzvol_prefetch_bytes\fR
from the start and end of the volume. Prefetching these regions
of the volume is desirable because they are likely to be accessed
immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
table.
.sp
Default value: \fB131,072\fR.
.RE

.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
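.sp
A minimal sketch of this two-pass selection, written in Python as an
illustration of the description above (not the kernel code; all names are
hypothetical):
.sp
.nf
CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

def pick_next_class(active, queued, min_active, max_active, aggregate_max):
    if sum(active.values()) >= aggregate_max:   # zfs_vdev_max_active reached
        return None
    for c in CLASSES:                           # pass 1: satisfy minimums
        if queued[c] and active[c] < min_active[c]:
            return c
    for c in CLASSES:                           # pass 2: fill up to maximums
        if queued[c] and active[c] < max_active[c]:
            return c
    return None                                 # nothing eligible to issue
.fi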
.sp
In general, smaller values of max_active will lead to lower latency of
synchronous operations. Larger values of max_active may lead to higher
overall throughput, depending on underlying storage.
.sp
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but cause reads and writes to have higher latency and lower
throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
Async Writes
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.nf

        |              o---------| <-- zfs_vdev_async_write_max_active
   ^    |             /^         |
   |    |            / |         |
 active |           /  |         |
  I/O   |          /   |         |
 count  |         /    |         |
        |        /     |         |
        |-------o      |         | <-- zfs_vdev_async_write_min_active
       0|_______^______|_________|
        0%      |      |       100% of zfs_dirty_data_max
                |      |
                |      `-- zfs_vdev_async_write_active_max_dirty_percent
                `--------- zfs_vdev_async_write_active_min_dirty_percent

.fi
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
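.sp
A minimal Python sketch of that piece-wise linear function (the function name
is hypothetical, and the parameter values below are illustrative, standing in
for the zfs_vdev_async_write_* tunables):
.sp
.nf
def async_write_max_active(dirty, dirty_max,
                           min_active=1, max_active=10,
                           min_pct=30, max_pct=60):
    pct = 100.0 * dirty / dirty_max
    if pct <= min_pct:                  # below the first knee: stay at minimum
        return min_active
    if pct >= max_pct:                  # past the second knee: stay at maximum
        return max_active
    slope = (max_active - min_active) / (max_pct - min_pct)
    return min_active + slope * (pct - min_pct)
.fi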
.sp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.

.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.nf
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds.
.fi
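.sp
A worked Python sketch of this formula (illustrative values; here "min" is
the dirty-data level at which delaying begins, i.e.
\fBzfs_delay_min_dirty_percent\fR of \fBzfs_dirty_data_max\fR, and "max" is
\fBzfs_dirty_data_max\fR itself):
.sp
.nf
def tx_delay_ns(dirty, dirty_max, min_dirty_pct=60, delay_scale=500000):
    # Assumes dirty < dirty_max.
    lo = dirty_max * min_dirty_pct // 100     # delaying starts here
    if dirty <= lo:
        return 0
    delay = delay_scale * (dirty - lo) / (dirty_max - dirty)
    return min(delay, 100_000_000)            # capped at 100 ms (in ns)

# Midway between "min" and "max" the two factors cancel, so the delay
# equals zfs_delay_scale: 500us per I/O with the scale used above.
assert tx_delay_ns(80, 100) == 500000
.fi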
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
.sp
.nf
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                            * +
      |                                                            * |
  4ms +                                                            * +
      |                                                            * |
  3ms +                                                            * +
      |                                                            * |
  2ms +                                              (midpoint)   *  +
      |                                              |           **  |
  1ms +                                              v          ***  +
      |             zfs_delay_scale ---------->     ********         |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                            ** +
      |                                            (midpoint)    **   |
      +                                            |           **     +
  1ms +                                            v      ****        +
      +             zfs_delay_scale ---------->    *****              +
      |                                           ****                |
      +                                         ****                  +
100us +                                       **                      +
      +                                      *                        +
      |                                     *                         |
      +                                    *                          +
 10us +                                   *                           +
      +                                                               +
      |                                                               |
      +                                                               +
      +---------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.