1 '\" te
2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" Copyright (c) 2018 by Delphix. All rights reserved.
4 .\" Copyright (c) 2019 Datto Inc.
5 .\" The contents of this file are subject to the terms of the Common Development
6 .\" and Distribution License (the "License"). You may not use this file except
7 .\" in compliance with the License. You can obtain a copy of the license at
8 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
9 .\"
10 .\" See the License for the specific language governing permissions and
11 .\" limitations under the License. When distributing Covered Code, include this
12 .\" CDDL HEADER in each file and include the License file at
13 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
14 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15 .\" own identifying information:
16 .\" Portions Copyright [yyyy] [name of copyright owner]
17 .TH ZFS-MODULE-PARAMETERS 5 "Feb 8, 2019"
18 .SH NAME
19 zfs\-module\-parameters \- ZFS module parameters
20 .SH DESCRIPTION
21 .sp
22 .LP
23 Description of the different parameters to the ZFS module.
24
25 .SS "Module parameters"
26 .sp
27 .LP
28
29 .sp
30 .ne 2
31 .na
32 \fBdbuf_cache_max_bytes\fR (ulong)
33 .ad
34 .RS 12n
35 Maximum size in bytes of the dbuf cache. When \fB0\fR this value will default
36 to \fB1/2^dbuf_cache_shift\fR (1/32) of the target ARC size, otherwise the
37 provided value in bytes will be used. The behavior of the dbuf cache and its
38 associated settings can be observed via the \fB/proc/spl/kstat/zfs/dbufstats\fR
39 kstat.
40 .sp
41 Default value: \fB0\fR.
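.sp
For example, the dbuf cache counters can be inspected at runtime (a sketch;
the exact counter names exported by the kstat depend on the ZFS version):
.sp
.nf
	# list the dbuf cache counters from the dbufstats kstat
	grep cache /proc/spl/kstat/zfs/dbufstats
.fi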
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_max_bytes\fR (ulong)
.ad
.RS 12n
Maximum size in bytes of the metadata dbuf cache. When \fB0\fR this value will
default to \fB1/2^dbuf_metadata_cache_shift\fR (1/64) of the target ARC size,
otherwise the provided value in bytes will be used. The behavior of the
metadata dbuf cache and its associated settings can be observed via the
\fB/proc/spl/kstat/zfs/dbufstats\fR kstat.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_hiwater_pct\fR (uint)
.ad
.RS 12n
The percentage over \fBdbuf_cache_max_bytes\fR when dbufs must be evicted
directly.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_lowater_pct\fR (uint)
.ad
.RS 12n
The percentage below \fBdbuf_cache_max_bytes\fR when the evict thread stops
evicting dbufs.
.sp
Default value: \fB10\fR%.
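.sp
As a worked example, assuming \fBdbuf_cache_max_bytes\fR resolves to 100MB
with the default 10% high and low water marks: dbufs are evicted directly
once the cache exceeds 110MB, and the evict thread stops once the cache has
shrunk below 90MB.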
.RE

.sp
.ne 2
.na
\fBdbuf_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf cache, \fBdbuf_cache_max_bytes\fR, to a log2 fraction
of the target arc size.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf metadata cache, \fBdbuf_metadata_cache_max_bytes\fR,
to a log2 fraction of the target arc size.
.sp
Default value: \fB6\fR.
.RE

.sp
.ne 2
.na
\fBignore_hole_birth\fR (int)
.ad
.RS 12n
When set, the hole_birth optimization will not be used, and all holes will
always be sent on zfs send. Useful if you suspect your datasets are affected
by a bug in hole_birth.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_again\fR (int)
.ad
.RS 12n
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be
shortened so that it warms up as fast as possible.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_min_ms\fR (ulong)
.ad
.RS 12n
Min feed interval in milliseconds. Only applicable when
\fBl2arc_feed_again=1\fR is set.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_secs\fR (ulong)
.ad
.RS 12n
Seconds between L2ARC writing.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom\fR (ulong)
.ad
.RS 12n
How far through the ARC lists to search for L2ARC cacheable content, expressed
as a multiplier of \fBl2arc_write_max\fR.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom_boost\fR (ulong)
.ad
.RS 12n
Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
successfully compressed before writing. A value of 100 disables this feature.
.sp
Default value: \fB200\fR%.
.RE

.sp
.ne 2
.na
\fBl2arc_noprefetch\fR (int)
.ad
.RS 12n
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_norw\fR (int)
.ad
.RS 12n
No reads during writes.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBl2arc_write_boost\fR (ulong)
.ad
.RS 12n
Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
while they remain cold.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_write_max\fR (ulong)
.ad
.RS 12n
Max write bytes per interval.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_aliquot\fR (ulong)
.ad
.RS 12n
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
.sp
Default value: \fB524,288\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_bias_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_force_ganging\fR (ulong)
.ad
.RS 12n
Make blocks above this size, in bytes, be allocated as gang blocks. This
option is used by the test suite to facilitate testing.
.sp
Default value: \fB16,777,217\fR.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_segment_weight_enabled\fR (int)
.ad
.RS 12n
Enable/disable segment-based metaslab selection.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_switch_threshold\fR (int)
.ad
.RS 12n
When using segment-based metaslab selection, continue allocating
from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
worth of buckets have been exhausted.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_load\fR (int)
.ad
.RS 12n
Load all metaslabs during pool import.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_unload\fR (int)
.ad
.RS 12n
Prevent metaslabs from being unloaded.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_fragmentation_factor_enabled\fR (int)
.ad
.RS 12n
Enable use of the fragmentation metric in computing metaslab weights.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_default_ms_count\fR (int)
.ad
.RS 12n
When a vdev is added, target this number of metaslabs per top-level vdev.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_min_ms_count\fR (int)
.ad
.RS 12n
Minimum number of metaslabs to create in a top-level vdev.
.sp
Default value: \fB16\fR.
.RE

.sp
.ne 2
.na
\fBvdev_ms_count_limit\fR (int)
.ad
.RS 12n
Practical upper limit of total metaslabs per top-level vdev.
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_preload_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group preloading.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_lba_weighting_enabled\fR (int)
.ad
.RS 12n
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth, as is typically the case on a modern constant
angular velocity disk drive.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBspa_config_path\fR (charp)
.ad
.RS 12n
SPA config file.
.sp
Default value: \fB/etc/zfs/zpool.cache\fR.
.RE

.sp
.ne 2
.na
\fBspa_asize_inflation\fR (int)
.ad
.RS 12n
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
.sp
Default value: \fB24\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_print_vdev_tree\fR (int)
.ad
.RS 12n
Whether to print the vdev tree in the debugging message buffer during pool import.
Use 0 to disable and 1 to enable.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_data\fR (int)
.ad
.RS 12n
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_metadata\fR (int)
.ad
.RS 12n
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_maxinflight\fR (int)
.ad
.RS 12n
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
.sp
Default value: \fB10000\fR.
.RE

.sp
.ne 2
.na
\fBspa_slop_shift\fR (int)
.ad
.RS 12n
Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
.sp
Default value: \fB5\fR.
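.sp
As a worked example: with the default shift of 5, the reserved slop space is
1/(2^5) = 1/32 (3.2%) of the pool, so a 10TB pool keeps about 320GB in
reserve, and most ZPL operations will start returning ENOSPC once free space
falls below that amount.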
.RE

.sp
.ne 2
.na
\fBvdev_removal_max_span\fR (int)
.ad
.RS 12n
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space (in bytes)
which will be included as "unnecessary" data in a chunk of copied data.

The default value here was chosen to align with
\fBzfs_vdev_read_gap_limit\fR, which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_array_rd_sz\fR (ulong)
.ad
.RS 12n
If prefetching is enabled, disable prefetching for reads larger than this size.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_distance\fR (uint)
.ad
.RS 12n
Max bytes to prefetch per stream (default 8MB).
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_streams\fR (uint)
.ad
.RS 12n
Max number of streams per zfetch (prefetch streams per file).
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_min_sec_reap\fR (uint)
.ad
.RS 12n
Min time before an active prefetch stream can be reclaimed.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit\fR (ulong)
.ad
.RS 12n
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling to the amount of dnode metadata, and defaults to 0,
which indicates that a percentage based on \fBzfs_arc_dnode_limit_percent\fR
of the ARC meta buffers may be used for dnodes.

See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
than in response to overall demand for non-metadata.

.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC meta buffers that can be consumed by dnodes.
.sp
See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_reduce_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.

.sp
Default value: \fB10\fR% of the number of dnodes in the ARC.
.RE

.sp
.ne 2
.na
\fBzfs_arc_average_blocksize\fR (int)
.ad
.RS 12n
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.

.sp
Default value: \fB8192\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_evict_batch_limit\fR (int)
.ad
.RS 12n
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_grow_retry\fR (int)
.ad
.RS 12n
If set to a non-zero value, it will replace the arc_grow_retry value with this value.
The arc_grow_retry value (default 5) is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_lotsfree_percent\fR (int)
.ad
.RS 12n
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_max\fR (ulong)
.ad
.RS 12n
Maximum size of the ARC in bytes. If set to 0 then it will consume 1/2 of
system RAM. This value must be at least 67108864 (64 megabytes).
.sp
This value can be changed dynamically with some caveats. It cannot be set back
to 0 while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.sp
Default value: \fB0\fR.
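.sp
For example, to cap the ARC at 4GB (4 * 2^30 = 4294967296 bytes) on a running
system (a sketch; module parameters are exposed under
/sys/module/zfs/parameters on Linux):
.sp
.nf
	echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
.fi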
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
.ad
.RS 12n
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
.sp
Default value: \fB4096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit\fR (ulong)
.ad
.RS 12n
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage based on
\fBzfs_arc_meta_limit_percent\fR of the ARC may be used for meta data.
.sp
This value may be changed dynamically except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC buffers that can be used for meta data.

See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.

.sp
Default value: \fB75\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_min\fR (ulong)
.ad
.RS 12n
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_prune\fR (int)
.ad
.RS 12n
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
.sp
Default value: \fB10,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_strategy\fR (int)
.ad
.RS 12n
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
A value of 0 (META_ONLY) will evict only the ARC meta data buffers.
A value of 1 (BALANCED) indicates that additional data buffers may be evicted
if required in order to evict the required number of meta data buffers.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min\fR (ulong)
.ad
.RS 12n
Minimum size of the ARC in bytes. If set to 0 then arc_c_min will default to
consuming the larger of 32M or 1/32 of total system memory.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time prefetched blocks are locked in the ARC, specified in ms.
A value of \fB0\fR will default to 1000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prescient_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time "prescient prefetched" blocks are locked in the ARC, specified
in ms. These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them. A value of \fB0\fR will default to 6000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_missing_tvds\fR (int)
.ad
.RS 12n
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multilist_num_sublists\fR (int)
.ad
.RS 12n
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state, and also applies to other uses of the
multilist data structure.
.sp
Default value: \fB4\fR or the number of online CPUs, whichever is greater.
.RE

.sp
.ne 2
.na
\fBzfs_arc_overflow_shift\fR (int)
.ad
.RS 12n
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".

The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (about 0.4%) of the target size.

When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_min_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_p_min_shift (default 4)
with the new value.
arc_p_min_shift is used as a shift of arc_c when calculating both the
minimum and maximum values of arc_p.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_dampener_disable\fR (int)
.ad
.RS 12n
Disable arc_p adapt dampener.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_arc_shrink_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_shrink_shift (default 7)
with the new value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_pc_percent\fR (uint)
.ad
.RS 12n
Percent of pagecache to reclaim ARC to.

This tunable allows ZFS arc to play more nicely with the kernel's LRU
pagecache. It can guarantee that the arc size won't collapse under scanning
pressure on the pagecache, yet still allows arc to be reclaimed down to
zfs_arc_min if necessary. This value is specified as percent of pagecache
size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
only operates during memory pressure/reclaim.
.sp
Default value: \fB0\fR% (disabled).
.RE

.sp
.ne 2
.na
\fBzfs_arc_sys_free\fR (ulong)
.ad
.RS 12n
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_autoimport_disable\fR (int)
.ad
.RS 12n
Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_checksums_per_second\fR (int)
.ad
.RS 12n
Rate limit checksum events to this many per second. Note that this should
not be set below the zed thresholds (currently 10 checksums over 10 sec)
or else zed may not trigger any action.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_commit_timeout_pct\fR (int)
.ad
.RS 12n
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage. The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.sp
Default value: \fB5\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_condense_indirect_vdevs_enable\fR (int)
.ad
.RS 12n
Enable condensing indirect vdev mappings. When set to a non-zero value,
attempt to condense indirect vdev mappings if the mapping uses more than
\fBzfs_condense_min_mapping_bytes\fR bytes of memory and if the obsolete
space map object uses more than \fBzfs_condense_max_obsolete_bytes\fR
bytes on-disk. The condensing process is an attempt to save memory by
removing obsolete mappings.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_max_obsolete_bytes\fR (ulong)
.ad
.RS 12n
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
(see \fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB1,073,741,824\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_min_mapping_bytes\fR (ulong)
.ad
.RS 12n
Minimum size vdev mapping to attempt to condense (see
\fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_enable\fR (int)
.ad
.RS 12n
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled, to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
.sp
Default value: \fB0\fR.
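.sp
For example, to enable the log, inspect its contents, and then clear it:
.sp
.nf
	echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
	cat /proc/spl/kstat/zfs/dbgmsg
	echo 0 > /proc/spl/kstat/zfs/dbgmsg
.fi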
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_maxsize\fR (int)
.ad
.RS 12n
The maximum size in bytes of the internal ZFS debug log.
.sp
Default value: \fB4M\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbuf_state_index\fR (int)
.ad
.RS 12n
This feature is currently unused. It is normally used for controlling what
reporting is available under /proc/spl/kstat/zfs.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_enabled\fR (int)
.ad
.RS 12n
When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
milliseconds, or when an individual I/O takes longer than
\fBzfs_deadman_ziotime_ms\fR milliseconds, then the operation is considered to
be "hung". If \fBzfs_deadman_enabled\fR is set then the deadman behavior is
invoked as described by the \fBzfs_deadman_failmode\fR module option.
By default the deadman is enabled and configured to \fBwait\fR which results
in "hung" I/Os only being logged. The deadman is automatically disabled
when a pool gets suspended.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_failmode\fR (charp)
.ad
.RS 12n
Controls the failure behavior when the deadman detects a "hung" I/O. Valid
values are \fBwait\fR, \fBcontinue\fR, and \fBpanic\fR.
.sp
\fBwait\fR - Wait for a "hung" I/O to complete. For each "hung" I/O a
"deadman" event will be posted describing that I/O.
.sp
\fBcontinue\fR - Attempt to recover from a "hung" I/O by re-dispatching it
to the I/O pipeline if possible.
.sp
\fBpanic\fR - Panic the system. This can be used to facilitate an automatic
fail-over to a properly configured fail-over partner.
.sp
Default value: \fBwait\fR.
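.sp
For example, on a cluster with a properly configured fail-over partner the
failmode can be switched at runtime (a sketch; module parameters are exposed
under /sys/module/zfs/parameters on Linux):
.sp
.nf
	echo panic > /sys/module/zfs/parameters/zfs_deadman_failmode
.fi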
.RE

.sp
.ne 2
.na
\fBzfs_deadman_checktime_ms\fR (int)
.ad
.RS 12n
Check time in milliseconds. This defines the frequency at which we check
for hung I/O and potentially invoke the \fBzfs_deadman_failmode\fR behavior.
.sp
Default value: \fB60,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_synctime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
\fBzfs_deadman_checktime_ms\fR milliseconds until the pool sync completes.
.sp
Default value: \fB600,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_ziotime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung". As long as the I/O
remains "hung" the deadman will be invoked every \fBzfs_deadman_checktime_ms\fR
milliseconds until the I/O completes.
.sp
Default value: \fB300,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dedup_prefetch\fR (int)
.ad
.RS 12n
Enable prefetching of dedup'd blocks.
.sp
Use \fB1\fR for yes and \fB0\fR to disable (default).
.RE

.sp
.ne 2
.na
\fBzfs_delay_min_dirty_percent\fR (int)
.ad
.RS 12n
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB60\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_delay_scale\fR (int)
.ad
.RS 12n
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
.sp
Default value: \fB500,000\fR.
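.sp
As a worked example: for a pool that can sustain roughly 2,000 write
operations per second, the smoothest delay is obtained with
1,000,000,000 / 2,000 = 500,000 (the default), which then handles loads
between roughly 200 and 20,000 operations per second smoothly.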
.RE

.sp
.ne 2
.na
\fBzfs_slow_io_events_per_second\fR (int)
.ad
.RS 12n
Rate limit delay zevents (which report slow I/Os) to this many per second.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_unlink_suspend_progress\fR (uint)
.ad
.RS 12n
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked. Once this option has been
disabled and the dataset is remounted, the pending unlinks will be processed
and the freed space returned to the pool.
This option is used by the test suite to facilitate testing.
.sp
Uses \fB0\fR (default) to allow progress and \fB1\fR to pause progress.
.RE

.sp
.ne 2
.na
\fBzfs_delete_blocks\fR (ulong)
.ad
.RS 12n
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously while smaller files are deleted synchronously. Decreasing this
value will reduce the time spent in an unlink(2) system call at the expense
of a longer delay before the freed space is available.
.sp
Default value: \fB20,480\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max\fR (int)
.ad
.RS 12n
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR% of physical RAM, capped at \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR% of physical RAM.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max_percent\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_percent\fR (int)
.ad
.RS 12n
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR%, subject to \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_sync_percent\fR (int)
.ad
.RS 12n
Start syncing out a transaction group if there's at least this much dirty data
as a percentage of \fBzfs_dirty_data_max\fR. This should be less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR.
.sp
Default value: \fB20\fR% of \fBzfs_dirty_data_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_fletcher_4_impl\fR (string)
.ad
.RS 12n
Select a fletcher 4 implementation.
.sp
Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
\fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
set extensions to be available and will only appear if ZFS detects that they are
present at runtime. If multiple implementations of fletcher 4 are available,
the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
results in the original, CPU-based calculation being used. Selecting any option
other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
the respective CPU instruction set being used.
.sp
Default value: \fBfastest\fR.
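.sp
For example, the implementations detected at runtime can be listed by reading
the module parameter, and a specific one selected by writing to it (a sketch;
module parameters are exposed under /sys/module/zfs/parameters on Linux):
.sp
.nf
	cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
	echo scalar > /sys/module/zfs/parameters/zfs_fletcher_4_impl
.fi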
.RE

.sp
.ne 2
.na
\fBzfs_free_bpobj_enabled\fR (int)
.ad
.RS 12n
Enable/disable the processing of the free_bpobj object.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_async_block_max_blocks\fR (ulong)
.ad
.RS 12n
Maximum number of blocks freed in a single txg.
.sp
Default value: \fB100,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_override_estimate_recordsize\fR (ulong)
.ad
.RS 12n
Record size calculation override for zfs send estimates.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB3\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB60\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB30\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Lower values are associated with better latency on rotational media but poorer
resilver performance. The default value of 2 was chosen as a compromise. A
value of 3 has been shown to improve resilver performance further at a cost of
further increasing latency.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_initializing_max_active\fR (int)
.ad
.RS 12n
Maximum initializing I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_initializing_min_active\fR (int)
.ad
.RS 12n
Minimum initializing I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_max_active\fR (int)
.ad
.RS 12n
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_removal_max_active\fR (int)
.ad
.RS 12n
Maximum removal I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_removal_min_active\fR (int)
.ad
.RS 12n
Minimum removal I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_max_active\fR (int)
.ad
.RS 12n
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_min_active\fR (int)
.ad
.RS 12n
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_queue_depth_pct\fR (int)
.ad
.RS 12n
Maximum number of queued allocations per top-level vdev expressed as
a percentage of \fBzfs_vdev_async_write_max_active\fR. This allows the
system to detect devices that are more capable of handling allocations
and to allocate more blocks to those devices. It allows for dynamic
allocation distribution when devices are imbalanced, as fuller devices
will tend to be slower than empty devices.

See also \fBzio_dva_throttle_enabled\fR.
.sp
Default value: \fB1000\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_expire_snapshot\fR (int)
.ad
.RS 12n
Seconds to expire .zfs/snapshot.
.sp
Default value: \fB300\fR.
.RE

.sp
.ne 2
.na
\fBzfs_admin_snapshot\fR (int)
.ad
.RS 12n
Allow the creation, removal, or renaming of entries in the .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled
by default.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_flags\fR (int)
.ad
.RS 12n
Set additional debugging flags. The following flags may be bitwise-or'd
together.
.sp
.TS
box;
rB lB
lB lB
r l.
Value	Symbolic Name
	Description
_
1	ZFS_DEBUG_DPRINTF
	Enable dprintf entries in the debug log.
_
2	ZFS_DEBUG_DBUF_VERIFY *
	Enable extra dbuf verifications.
_
4	ZFS_DEBUG_DNODE_VERIFY *
	Enable extra dnode verifications.
_
8	ZFS_DEBUG_SNAPNAMES
	Enable snapshot name verification.
_
16	ZFS_DEBUG_MODIFY
	Check for illegally modified ARC buffers.
_
64	ZFS_DEBUG_ZIO_FREE
	Enable verification of block frees.
_
128	ZFS_DEBUG_HISTOGRAM_VERIFY
	Enable extra spacemap histogram verifications.
_
256	ZFS_DEBUG_METASLAB_VERIFY
	Verify space accounting on disk matches in-core range_trees.
_
512	ZFS_DEBUG_SET_ERROR
	Enable SET_ERROR and dprintf entries in the debug log.
.TE
.sp
* Requires debug build.
.sp
Default value: \fB0\fR.
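.sp
For example, to check for illegally modified ARC buffers (ZFS_DEBUG_MODIFY,
16) and verify block frees (ZFS_DEBUG_ZIO_FREE, 64) at the same time, set the
bitwise OR of the two values, 16 | 64 = 80:
.sp
.nf
	echo 80 > /sys/module/zfs/parameters/zfs_flags
.fi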
.RE

.sp
.ne 2
.na
\fBzfs_free_leak_on_eio\fR (int)
.ad
.RS 12n
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.

The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all i/os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually
fairly rare.

Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_free_min_time_ms\fR (int)
.ad
.RS 12n
During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
of this much time will be spent working on freeing blocks per txg.
.sp
Default value: \fB1,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_immediate_write_sz\fR (long)
.ad
.RS 12n
Largest data block to write to the ZIL. Larger blocks will be treated as if
the dataset being written to had the property setting \fBlogbias=throughput\fR.
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzfs_initialize_value\fR (ulong)
.ad
.RS 12n
Pattern written to vdev free space by \fBzpool initialize\fR.
.sp
Default value: \fB16,045,690,984,833,335,022\fR (0xdeadbeefdeadbeee).
.RE

.sp
.ne 2
.na
\fBzfs_lua_max_instrlimit\fR (ulong)
.ad
.RS 12n
The maximum execution time limit that can be set for a ZFS channel program,
specified as a number of Lua instructions.
.sp
Default value: \fB100,000,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_lua_max_memlimit\fR (ulong)
.ad
.RS 12n
The maximum memory limit that can be set for a ZFS channel program, specified
in bytes.
.sp
Default value: \fB104,857,600\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_dataset_nesting\fR (int)
.ad
.RS 12n
The maximum depth of nested datasets. This value can be tuned temporarily to
fix existing datasets that exceed the predefined limit.
.sp
Default value: \fB50\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_recordsize\fR (int)
.ad
.RS 12n
We currently support block sizes from 512 bytes to 16MB. The benefits of
larger blocks, and thus larger I/O, need to be weighed against the cost of
COWing a giant block to modify one byte. Additionally, very large blocks
can have an impact on i/o latency, and also potentially on the memory
allocator. Therefore, we do not allow the recordsize to be set larger than
zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
this tunable, and pools with larger blocks can always be imported and used,
regardless of this setting.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_fragmentation_threshold\fR (int)
.ad
.RS 12n
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
.sp
Default value: \fB70\fR.
.RE

.sp
.ne 2
.na
\fBzfs_mg_fragmentation_threshold\fR (int)
.ad
.RS 12n
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
.sp
Default value: \fB85\fR.
.RE

.sp
.ne 2
.na
\fBzfs_mg_noalloc_threshold\fR (int)
.ad
.RS 12n
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.

This parameter allows one to deal with pools having heavily imbalanced
vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_ddt_data_is_special\fR (int)
.ad
.RS 12n
If enabled, ZFS will place DDT data into the special allocation class.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_user_indirect_is_special\fR (int)
.ad
.RS 12n
If enabled, ZFS will place user data (both file and zvol) indirect blocks
into the special allocation class.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multihost_history\fR (int)
.ad
.RS 12n
Historical statistics for the last N multihost updates will be available in
\fB/proc/spl/kstat/zfs/<pool>/multihost\fR
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multihost_interval\fR (ulong)
.ad
.RS 12n
Used to control the frequency of multihost writes which are performed when the
\fBmultihost\fR pool property is on. This is one factor used to determine
the length of the activity check during import.
.sp
The multihost write period is \fBzfs_multihost_interval / leaf-vdevs\fR milliseconds.
This means that on average a multihost write will be issued for each leaf vdev every
\fBzfs_multihost_interval\fR milliseconds. In practice, the observed period can
vary with the I/O load and this observed value is the delay which is stored in
the uberblock.
.sp
On import the activity check waits a minimum amount of time determined by
\fBzfs_multihost_interval * zfs_multihost_import_intervals\fR. The activity
check time may be further extended if the value of mmp delay found in the best
uberblock indicates actual multihost updates happened at longer intervals than
\fBzfs_multihost_interval\fR. A minimum value of \fB100ms\fR is enforced.
.sp
Default value: \fB1000\fR.
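.sp
As a worked example with the defaults: a pool with 4 leaf vdevs issues a
multihost write roughly every 1000 / 4 = 250 milliseconds, and an import of
that pool waits at least \fBzfs_multihost_interval\fR *
\fBzfs_multihost_import_intervals\fR = 1000 * 10 ms = 10 seconds for the
activity check.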
.RE

.sp
.ne 2
.na
\fBzfs_multihost_import_intervals\fR (uint)
.ad
.RS 12n
Used to control the duration of the activity test on import. Smaller values of
\fBzfs_multihost_import_intervals\fR will reduce the import time but increase
the risk of failing to detect an active pool. The total activity check time is
never allowed to drop below one second. A value of 0 is ignored and treated as
if it were set to 1.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multihost_fail_intervals\fR (uint)
.ad
.RS 12n
Controls the behavior of the pool when multihost write failures are detected.
.sp
When \fBzfs_multihost_fail_intervals = 0\fR then multihost write failures are ignored.
The failures will still be reported to the ZED which depending on its
configuration may take action such as suspending the pool or offlining a device.
.sp
When \fBzfs_multihost_fail_intervals > 0\fR then sequential multihost write failures
will cause the pool to be suspended. This occurs when
\fBzfs_multihost_fail_intervals * zfs_multihost_interval\fR milliseconds have
passed since the last successful multihost write. This guarantees the activity test
will see multihost writes if the pool is imported.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBzfs_no_scrub_io\fR (int)
.ad
.RS 12n
Set for no scrub I/O. This results in scrubs not actually scrubbing data and
simply doing a metadata crawl of the pool instead.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_no_scrub_prefetch\fR (int)
.ad
.RS 12n
Set to disable block prefetching for scrubs.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_nocacheflush\fR (int)
.ad
.RS 12n
Disable cache flush operations on disks when writing. Setting this will
cause pool corruption on power loss if a volatile out-of-order write cache
is enabled.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_nopwrite_enabled\fR (int)
.ad
.RS 12n
Enable NOP writes.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_dmu_offset_next_sync\fR (int)
.ad
.RS 12n
Enable forcing txg sync to find holes. When enabled, ZFS acts as prior
versions did when the SEEK_HOLE or SEEK_DATA flags are used: if a dnode is
dirty, txgs are synced so that the hole information can be found.
.sp
Use \fB1\fR for yes and \fB0\fR to disable (default).
.RE

.sp
.ne 2
.na
\fBzfs_pd_bytes_max\fR (int)
.ad
.RS 12n
The number of bytes which should be prefetched during a pool traversal
(e.g. \fBzfs send\fR or other data crawling operations).
.sp
Default value: \fB52,428,800\fR.
.RE

.sp
.ne 2
.na
\fBzfs_per_txg_dirty_frees_percent\fR (ulong)
.ad
.RS 12n
Tunable to control percentage of dirtied indirect blocks from frees allowed
into one TXG. After this threshold is crossed, additional frees will wait until
the next TXG.
A value of zero will disable this throttle.
.sp
Default value: \fB5\fR, set to \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_prefetch_disable\fR (int)
.ad
.RS 12n
This tunable disables predictive prefetch. Note that it leaves "prescient"
prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
prescient prefetch never issues i/os that end up not being needed, so it
can't hurt performance.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_read_chunk_size\fR (long)
.ad
.RS 12n
Bytes to read per chunk.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_read_history\fR (int)
.ad
.RS 12n
Historical statistics for the last N reads will be available in
\fB/proc/spl/kstat/zfs/<pool>/reads\fR
.sp
Default value: \fB0\fR (no data is kept).
.RE

.sp
.ne 2
.na
\fBzfs_read_history_hits\fR (int)
.ad
.RS 12n
Include cache hits in read history.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_reconstruct_indirect_combinations_max\fR (int)
.ad
.RS 12n
If an indirect split block contains more than this many possible unique
combinations when being reconstructed, consider it too computationally
expensive to check them all. Instead, try at most
\fBzfs_reconstruct_indirect_combinations_max\fR randomly-selected
combinations each time the block is accessed. This allows all segment
copies to participate fairly in the reconstruction when all combinations
cannot be checked and prevents repeated use of one bad copy.
.sp
Default value: \fB4096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_recover\fR (int)
.ad
.RS 12n
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE
2057
2058 .sp
2059 .ne 2
2060 .na
2061 \fBzfs_removal_ignore_errors\fR (int)
2062 .ad
2063 .RS 12n
2065 Ignore hard IO errors during device removal. When set, if a device encounters
2066 a hard IO error during the removal process the removal will not be cancelled.
2067 This can result in a normally recoverable block becoming permanently damaged
2068 and is not recommended. This should only be used as a last resort when the
2069 pool cannot be returned to a healthy state prior to removing the device.
2070 .sp
2071 Default value: \fB0\fR.
2072 .RE
2073
2074 .sp
2075 .ne 2
2076 .na
2077 \fBzfs_resilver_min_time_ms\fR (int)
2078 .ad
2079 .RS 12n
2080 Resilvers are processed by the sync thread. While resilvering it will spend
2081 at least this much time working on a resilver between txg flushes.
2082 .sp
2083 Default value: \fB3,000\fR.
2084 .RE
2085
2086 .sp
2087 .ne 2
2088 .na
2089 \fBzfs_scan_ignore_errors\fR (int)
2090 .ad
2091 .RS 12n
2092 If set to a nonzero value, remove the DTL (dirty time list) upon
2093 completion of a pool scan (scrub) even if there were unrepairable
2094 errors. It is intended to be used during pool repair or recovery to
2095 stop resilvering when the pool is next imported.
2096 .sp
2097 Default value: \fB0\fR.
2098 .RE
2099
2100 .sp
2101 .ne 2
2102 .na
2103 \fBzfs_scrub_min_time_ms\fR (int)
2104 .ad
2105 .RS 12n
2106 Scrubs are processed by the sync thread. While scrubbing it will spend
2107 at least this much time working on a scrub between txg flushes.
2108 .sp
2109 Default value: \fB1,000\fR.
2110 .RE
2111
2112 .sp
2113 .ne 2
2114 .na
2115 \fBzfs_scan_checkpoint_intval\fR (int)
2116 .ad
2117 .RS 12n
To preserve progress across reboots the sequential scan algorithm periodically
needs to stop metadata scanning and issue all the verification I/Os to disk.
2120 The frequency of this flushing is determined by the
2121 \fBzfs_scan_checkpoint_intval\fR tunable.
2122 .sp
2123 Default value: \fB7200\fR seconds (every 2 hours).
2124 .RE
2125
2126 .sp
2127 .ne 2
2128 .na
2129 \fBzfs_scan_fill_weight\fR (int)
2130 .ad
2131 .RS 12n
2132 This tunable affects how scrub and resilver I/O segments are ordered. A higher
2133 number indicates that we care more about how filled in a segment is, while a
2134 lower number indicates we care more about the size of the extent without
2135 considering the gaps within a segment. This value is only tunable upon module
insertion. Changing the value afterwards will have no effect on scrub or
2137 resilver performance.
2138 .sp
2139 Default value: \fB3\fR.
2140 .RE
2141
2142 .sp
2143 .ne 2
2144 .na
2145 \fBzfs_scan_issue_strategy\fR (int)
2146 .ad
2147 .RS 12n
2148 Determines the order that data will be verified while scrubbing or resilvering.
2149 If set to \fB1\fR, data will be verified as sequentially as possible, given the
2150 amount of memory reserved for scrubbing (see \fBzfs_scan_mem_lim_fact\fR). This
2151 may improve scrub performance if the pool's data is very fragmented. If set to
2152 \fB2\fR, the largest mostly-contiguous chunk of found data will be verified
2153 first. By deferring scrubbing of small segments, we may later find adjacent data
2154 to coalesce and increase the segment size. If set to \fB0\fR, zfs will use
2155 strategy \fB1\fR during normal verification and strategy \fB2\fR while taking a
2156 checkpoint.
2157 .sp
2158 Default value: \fB0\fR.
2159 .RE
2160
2161 .sp
2162 .ne 2
2163 .na
2164 \fBzfs_scan_legacy\fR (int)
2165 .ad
2166 .RS 12n
2167 A value of 0 indicates that scrubs and resilvers will gather metadata in
2168 memory before issuing sequential I/O. A value of 1 indicates that the legacy
2169 algorithm will be used where I/O is initiated as soon as it is discovered.
2170 Changing this value to 0 will not affect scrubs or resilvers that are already
2171 in progress.
2172 .sp
2173 Default value: \fB0\fR.
2174 .RE
2175
2176 .sp
2177 .ne 2
2178 .na
2179 \fBzfs_scan_max_ext_gap\fR (int)
2180 .ad
2181 .RS 12n
2182 Indicates the largest gap in bytes between scrub / resilver I/Os that will still
2183 be considered sequential for sorting purposes. Changing this value will not
2184 affect scrubs or resilvers that are already in progress.
2185 .sp
2186 Default value: \fB2097152 (2 MB)\fR.
2187 .RE
2188
2189 .sp
2190 .ne 2
2191 .na
2192 \fBzfs_scan_mem_lim_fact\fR (int)
2193 .ad
2194 .RS 12n
2195 Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
2196 This tunable determines the hard limit for I/O sorting memory usage.
2197 When the hard limit is reached we stop scanning metadata and start issuing
2198 data verification I/O. This is done until we get below the soft limit.
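.sp
For example, on a system with 32 GiB of RAM the default of 20 yields a hard
limit of 32 GiB / 20, or about 1.6 GiB of sorting memory.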
2199 .sp
2200 Default value: \fB20\fR which is 5% of RAM (1/20).
2201 .RE
2202
2203 .sp
2204 .ne 2
2205 .na
2206 \fBzfs_scan_mem_lim_soft_fact\fR (int)
2207 .ad
2208 .RS 12n
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm. When we cross this limit from below no action
2211 is taken. When we cross this limit from above it is because we are issuing
2212 verification I/O. In this case (unless the metadata scan is done) we stop
2213 issuing verification I/O and start scanning metadata again until we get to the
2214 hard limit.
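.sp
Continuing the example above, a 1.6 GiB hard limit combined with the default
of 20 yields a soft limit of 1.6 GiB / 20, or about 80 MiB.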
2215 .sp
2216 Default value: \fB20\fR which is 5% of the hard limit (1/20).
2217 .RE
2218
2219 .sp
2220 .ne 2
2221 .na
2222 \fBzfs_scan_vdev_limit\fR (int)
2223 .ad
2224 .RS 12n
Maximum amount of data that can be concurrently issued for scrubs and
resilvers per leaf device, given in bytes.
2227 .sp
2228 Default value: \fB41943040\fR.
2229 .RE
2230
2231 .sp
2232 .ne 2
2233 .na
2234 \fBzfs_send_corrupt_data\fR (int)
2235 .ad
2236 .RS 12n
2237 Allow sending of corrupt data (ignore read/checksum errors when sending data)
2238 .sp
2239 Use \fB1\fR for yes and \fB0\fR for no (default).
2240 .RE
2241
2242 .sp
2243 .ne 2
2244 .na
2245 \fBzfs_send_queue_length\fR (int)
2246 .ad
2247 .RS 12n
2248 The maximum number of bytes allowed in the \fBzfs send\fR queue. This value
2249 must be at least twice the maximum block size in use.
2250 .sp
2251 Default value: \fB16,777,216\fR.
2252 .RE
2253
2254 .sp
2255 .ne 2
2256 .na
2257 \fBzfs_recv_queue_length\fR (int)
2258 .ad
2259 .RS 12n
2261 The maximum number of bytes allowed in the \fBzfs receive\fR queue. This value
2262 must be at least twice the maximum block size in use.
2263 .sp
2264 Default value: \fB16,777,216\fR.
2265 .RE
2266
2267 .sp
2268 .ne 2
2269 .na
2270 \fBzfs_sync_pass_deferred_free\fR (int)
2271 .ad
2272 .RS 12n
2273 Flushing of data to disk is done in passes. Defer frees starting in this pass
2274 .sp
2275 Default value: \fB2\fR.
2276 .RE
2277
2278 .sp
2279 .ne 2
2280 .na
2281 \fBzfs_spa_discard_memory_limit\fR (int)
2282 .ad
2283 .RS 12n
2284 Maximum memory used for prefetching a checkpoint's space map on each
2285 vdev while discarding the checkpoint.
2286 .sp
2287 Default value: \fB16,777,216\fR.
2288 .RE
2289
2290 .sp
2291 .ne 2
2292 .na
2293 \fBzfs_sync_pass_dont_compress\fR (int)
2294 .ad
2295 .RS 12n
2296 Don't compress starting in this pass
2297 .sp
2298 Default value: \fB5\fR.
2299 .RE
2300
2301 .sp
2302 .ne 2
2303 .na
2304 \fBzfs_sync_pass_rewrite\fR (int)
2305 .ad
2306 .RS 12n
2307 Rewrite new block pointers starting in this pass
2308 .sp
2309 Default value: \fB2\fR.
2310 .RE
2311
2312 .sp
2313 .ne 2
2314 .na
2315 \fBzfs_sync_taskq_batch_pct\fR (int)
2316 .ad
2317 .RS 12n
This controls the number of threads used by the dp_sync_taskq. The default
value of 75% will create a maximum of one thread per CPU.
2320 .sp
2321 Default value: \fB75\fR%.
2322 .RE
2323
2324 .sp
2325 .ne 2
2326 .na
2327 \fBzfs_txg_history\fR (int)
2328 .ad
2329 .RS 12n
2330 Historical statistics for the last N txgs will be available in
2331 \fB/proc/spl/kstat/zfs/<pool>/txgs\fR
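.sp
A hedged example, again using a hypothetical pool named \fBtank\fR:
.sp
.nf
# Keep statistics for the last 50 txgs.
echo 50 > /sys/module/zfs/parameters/zfs_txg_history

# Inspect the recorded txgs for the pool.
cat /proc/spl/kstat/zfs/tank/txgs
.fi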
2332 .sp
2333 Default value: \fB0\fR.
2334 .RE
2335
2336 .sp
2337 .ne 2
2338 .na
2339 \fBzfs_txg_timeout\fR (int)
2340 .ad
2341 .RS 12n
2342 Flush dirty data to disk at least every N seconds (maximum txg duration)
2343 .sp
2344 Default value: \fB5\fR.
2345 .RE
2346
2347 .sp
2348 .ne 2
2349 .na
2350 \fBzfs_vdev_aggregation_limit\fR (int)
2351 .ad
2352 .RS 12n
2353 Max vdev I/O aggregation size
2354 .sp
2355 Default value: \fB131,072\fR.
2356 .RE
2357
2358 .sp
2359 .ne 2
2360 .na
2361 \fBzfs_vdev_cache_bshift\fR (int)
2362 .ad
2363 .RS 12n
Shift size to inflate reads to (reads are inflated to
2^\fBzfs_vdev_cache_bshift\fR bytes).
2365 .sp
2366 Default value: \fB16\fR (effectively 65536).
2367 .RE
2368
2369 .sp
2370 .ne 2
2371 .na
2372 \fBzfs_vdev_cache_max\fR (int)
2373 .ad
2374 .RS 12n
2375 Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
2376 size (default 64k).
2377 .sp
2378 Default value: \fB16384\fR.
2379 .RE
2380
2381 .sp
2382 .ne 2
2383 .na
2384 \fBzfs_vdev_cache_size\fR (int)
2385 .ad
2386 .RS 12n
2387 Total size of the per-disk cache in bytes.
2388 .sp
2389 Currently this feature is disabled as it has been found to not be helpful
2390 for performance and in some cases harmful.
2391 .sp
2392 Default value: \fB0\fR.
2393 .RE
2394
2395 .sp
2396 .ne 2
2397 .na
2398 \fBzfs_vdev_mirror_rotating_inc\fR (int)
2399 .ad
2400 .RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O immediately
follows its predecessor on rotational vdevs.
2405 .sp
2406 Default value: \fB0\fR.
2407 .RE
2408
2409 .sp
2410 .ne 2
2411 .na
2412 \fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
2413 .ad
2414 .RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this distance that do not immediately follow the previous I/O have their load
incremented by half of this value.
2420 .sp
2421 Default value: \fB5\fR.
2422 .RE
2423
2424 .sp
2425 .ne 2
2426 .na
2427 \fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
2428 .ad
2429 .RS 12n
The maximum distance from the last queued I/O within which the balancing
algorithm considers an I/O to have locality.
2432 See the section "ZFS I/O SCHEDULER".
2433 .sp
2434 Default value: \fB1048576\fR.
2435 .RE
2436
2437 .sp
2438 .ne 2
2439 .na
2440 \fBzfs_vdev_mirror_non_rotating_inc\fR (int)
2441 .ad
2442 .RS 12n
2443 A number by which the balancing algorithm increments the load calculation for
2444 the purpose of selecting the least busy mirror member on non-rotational vdevs
2445 when I/Os do not immediately follow one another.
2446 .sp
2447 Default value: \fB0\fR.
2448 .RE
2449
2450 .sp
2451 .ne 2
2452 .na
2453 \fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
2454 .ad
2455 .RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this distance that do not immediately follow the previous I/O have their load
incremented by half of this value.
2461 .sp
2462 Default value: \fB1\fR.
2463 .RE
2464
2465 .sp
2466 .ne 2
2467 .na
2468 \fBzfs_vdev_read_gap_limit\fR (int)
2469 .ad
2470 .RS 12n
2471 Aggregate read I/O operations if the gap on-disk between them is within this
2472 threshold.
2473 .sp
2474 Default value: \fB32,768\fR.
2475 .RE
2476
2477 .sp
2478 .ne 2
2479 .na
2480 \fBzfs_vdev_scheduler\fR (charp)
2481 .ad
2482 .RS 12n
Set the Linux I/O scheduler on whole disk vdevs to this scheduler. Valid
options are noop, cfq, bfq and deadline.
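.sp
A hedged example of selecting a different scheduler at module load time
(the exact invocation may vary by distribution):
.sp
.nf
# Load the zfs module with the deadline scheduler for whole disk vdevs.
modprobe zfs zfs_vdev_scheduler=deadline
.fi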
2485 .sp
2486 Default value: \fBnoop\fR.
2487 .RE
2488
2489 .sp
2490 .ne 2
2491 .na
2492 \fBzfs_vdev_write_gap_limit\fR (int)
2493 .ad
2494 .RS 12n
Aggregate write I/O operations if the gap on-disk between them is within this
threshold.
2496 .sp
2497 Default value: \fB4,096\fR.
2498 .RE
2499
2500 .sp
2501 .ne 2
2502 .na
2503 \fBzfs_vdev_raidz_impl\fR (string)
2504 .ad
2505 .RS 12n
2506 Parameter for selecting raidz parity implementation to use.
2507
2508 Options marked (always) below may be selected on module load as they are
2509 supported on all systems.
2510 The remaining options may only be set after the module is loaded, as they
2511 are available only if the implementations are compiled in and supported
2512 on the running system.
2513
2514 Once the module is loaded, the content of
2515 /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
2516 with the currently selected one enclosed in [].
Possible options are:
.nf
  fastest        - (always) implementation selected using built-in benchmark
  original       - (always) original raidz implementation
  scalar         - (always) scalar raidz implementation
  sse2           - implementation using SSE2 instruction set (64bit x86 only)
  ssse3          - implementation using SSSE3 instruction set (64bit x86 only)
  avx2           - implementation using AVX2 instruction set (64bit x86 only)
  avx512f        - implementation using AVX512F instruction set
                   (64bit x86 only)
  avx512bw       - implementation using AVX512F & AVX512BW instruction sets
                   (64bit x86 only)
  aarch64_neon   - implementation using NEON (Aarch64/64 bit ARMv8 only)
  aarch64_neonx2 - implementation using NEON with more unrolling
                   (Aarch64/64 bit ARMv8 only)
.fi
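.sp
A hedged example of inspecting and changing the implementation at runtime,
using the parameter file named above:
.sp
.nf
# Show the available options; the active one is enclosed in [].
cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl

# Select the AVX2 implementation, if listed as available.
echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.fi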
2528 .sp
2529 Default value: \fBfastest\fR.
2530 .RE
2531
2532 .sp
2533 .ne 2
2534 .na
2535 \fBzfs_zevent_cols\fR (int)
2536 .ad
2537 .RS 12n
2538 When zevents are logged to the console use this as the word wrap width.
2539 .sp
2540 Default value: \fB80\fR.
2541 .RE
2542
2543 .sp
2544 .ne 2
2545 .na
2546 \fBzfs_zevent_console\fR (int)
2547 .ad
2548 .RS 12n
2549 Log events to the console
2550 .sp
2551 Use \fB1\fR for yes and \fB0\fR for no (default).
2552 .RE
2553
2554 .sp
2555 .ne 2
2556 .na
2557 \fBzfs_zevent_len_max\fR (int)
2558 .ad
2559 .RS 12n
2560 Max event queue length. A value of 0 will result in a calculated value which
2561 increases with the number of CPUs in the system (minimum 64 events). Events
2562 in the queue can be viewed with the \fBzpool events\fR command.
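.sp
For example (a hedged sketch; the queue length chosen is arbitrary):
.sp
.nf
# Allow up to 512 queued events, then view the queue.
echo 512 > /sys/module/zfs/parameters/zfs_zevent_len_max
zpool events -v
.fi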
2563 .sp
2564 Default value: \fB0\fR.
2565 .RE
2566
2567 .sp
2568 .ne 2
2569 .na
2570 \fBzfs_zil_clean_taskq_maxalloc\fR (int)
2571 .ad
2572 .RS 12n
2573 The maximum number of taskq entries that are allowed to be cached. When this
2574 limit is exceeded transaction records (itxs) will be cleaned synchronously.
2575 .sp
2576 Default value: \fB1048576\fR.
2577 .RE
2578
2579 .sp
2580 .ne 2
2581 .na
2582 \fBzfs_zil_clean_taskq_minalloc\fR (int)
2583 .ad
2584 .RS 12n
2585 The number of taskq entries that are pre-populated when the taskq is first
2586 created and are immediately available for use.
2587 .sp
2588 Default value: \fB1024\fR.
2589 .RE
2590
2591 .sp
2592 .ne 2
2593 .na
2594 \fBzfs_zil_clean_taskq_nthr_pct\fR (int)
2595 .ad
2596 .RS 12n
This controls the number of threads used by the dp_zil_clean_taskq. The default
value of 100% will create a maximum of one thread per CPU.
2599 .sp
2600 Default value: \fB100\fR%.
2601 .RE
2602
2603 .sp
2604 .ne 2
2605 .na
2606 \fBzil_nocacheflush\fR (int)
2607 .ad
2608 .RS 12n
2609 Disable the cache flush commands that are normally sent to the disk(s) by
2610 the ZIL after an LWB write has completed. Setting this will cause ZIL
2611 corruption on power loss if a volatile out-of-order write cache is enabled.
2612 .sp
2613 Use \fB1\fR for yes and \fB0\fR for no (default).
2614 .RE
2615
2616 .sp
2617 .ne 2
2618 .na
2619 \fBzil_replay_disable\fR (int)
2620 .ad
2621 .RS 12n
Disable intent logging replay. Replay can be disabled to allow recovery from
a corrupted ZIL.
2624 .sp
2625 Use \fB1\fR for yes and \fB0\fR for no (default).
2626 .RE
2627
2628 .sp
2629 .ne 2
2630 .na
2631 \fBzil_slog_bulk\fR (ulong)
2632 .ad
2633 .RS 12n
2634 Limit SLOG write size per commit executed with synchronous priority.
2635 Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
2637 .sp
2638 Default value: \fB786,432\fR.
2639 .RE
2640
2641 .sp
2642 .ne 2
2643 .na
2644 \fBzio_decompress_fail_fraction\fR (int)
2645 .ad
2646 .RS 12n
2647 If non-zero, this value represents the denominator of the probability that zfs
2648 should induce a decompression failure. For instance, for a 5% decompression
2649 failure rate, this value should be set to 20.
2650 .sp
2651 Default value: \fB0\fR.
2652 .RE
2653
2654 .sp
2655 .ne 2
2656 .na
2657 \fBzio_slow_io_ms\fR (int)
2658 .ad
2659 .RS 12n
When an I/O operation takes more than \fBzio_slow_io_ms\fR milliseconds to
complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent.
Slow I/O counters can be seen with "zpool status -s".
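.sp
For example (hedged; the threshold value is chosen only for illustration):
.sp
.nf
# Treat I/Os slower than 10 seconds as slow I/Os.
echo 10000 > /sys/module/zfs/parameters/zio_slow_io_ms

# Display per-vdev slow I/O counters.
zpool status -s
.fi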
2664 .sp
2665 Default value: \fB30,000\fR.
2666 .RE
2667
2668 .sp
2669 .ne 2
2670 .na
2671 \fBzio_dva_throttle_enabled\fR (int)
2672 .ad
2673 .RS 12n
2674 Throttle block allocations in the I/O pipeline. This allows for
2675 dynamic allocation distribution when devices are imbalanced.
2676 When enabled, the maximum number of pending allocations per top-level vdev
2677 is limited by \fBzfs_vdev_queue_depth_pct\fR.
2678 .sp
2679 Default value: \fB1\fR.
2680 .RE
2681
2682 .sp
2683 .ne 2
2684 .na
2685 \fBzio_requeue_io_start_cut_in_line\fR (int)
2686 .ad
2687 .RS 12n
2688 Prioritize requeued I/O
2689 .sp
2690 Default value: \fB0\fR.
2691 .RE
2692
2693 .sp
2694 .ne 2
2695 .na
2696 \fBzio_taskq_batch_pct\fR (uint)
2697 .ad
2698 .RS 12n
2699 Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
2700 for I/O. These workers are responsible for I/O work such as compression and
checksum calculations. A fractional number of CPUs will be rounded down.
2702 .sp
2703 The default value of 75 was chosen to avoid using all CPUs which can result in
2704 latency issues and inconsistent application performance, especially when high
2705 compression is enabled.
2706 .sp
2707 Default value: \fB75\fR.
2708 .RE
2709
2710 .sp
2711 .ne 2
2712 .na
2713 \fBzvol_inhibit_dev\fR (uint)
2714 .ad
2715 .RS 12n
2716 Do not create zvol device nodes. This may slightly improve startup time on
2717 systems with a very large number of zvols.
2718 .sp
2719 Use \fB1\fR for yes and \fB0\fR for no (default).
2720 .RE
2721
2722 .sp
2723 .ne 2
2724 .na
2725 \fBzvol_major\fR (uint)
2726 .ad
2727 .RS 12n
2728 Major number for zvol block devices
2729 .sp
2730 Default value: \fB230\fR.
2731 .RE
2732
2733 .sp
2734 .ne 2
2735 .na
2736 \fBzvol_max_discard_blocks\fR (ulong)
2737 .ad
2738 .RS 12n
2739 Discard (aka TRIM) operations done on zvols will be done in batches of this
2740 many blocks, where block size is determined by the \fBvolblocksize\fR property
2741 of a zvol.
2742 .sp
2743 Default value: \fB16,384\fR.
2744 .RE
2745
2746 .sp
2747 .ne 2
2748 .na
2749 \fBzvol_prefetch_bytes\fR (uint)
2750 .ad
2751 .RS 12n
2752 When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
2753 from the start and end of the volume. Prefetching these regions
2754 of the volume is desirable because they are likely to be accessed
2755 immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
2756 table.
2757 .sp
2758 Default value: \fB131,072\fR.
2759 .RE
2760
2761 .sp
2762 .ne 2
2763 .na
2764 \fBzvol_request_sync\fR (uint)
2765 .ad
2766 .RS 12n
When processing I/O requests for a zvol, submit them synchronously. This
effectively limits the queue depth to 1 for each I/O submitter. When set
to 0, requests are handled asynchronously by a thread pool. The number of
requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
2771 .sp
2772 Default value: \fB0\fR.
2773 .RE
2774
2775 .sp
2776 .ne 2
2777 .na
2778 \fBzvol_threads\fR (uint)
2779 .ad
2780 .RS 12n
2781 Max number of threads which can handle zvol I/O requests concurrently.
2782 .sp
2783 Default value: \fB32\fR.
2784 .RE
2785
2786 .sp
2787 .ne 2
2788 .na
2789 \fBzvol_volmode\fR (uint)
2790 .ad
2791 .RS 12n
Defines the behaviour of zvol block devices when \fBvolmode\fR is set to
\fBdefault\fR.
2793 Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none).
2794 .sp
2795 Default value: \fB1\fR.
2796 .RE
2797
2798 .sp
2799 .ne 2
2800 .na
2801 \fBzfs_qat_disable\fR (int)
2802 .ad
2803 .RS 12n
This tunable disables qat hardware acceleration for gzip compression and
AES-GCM encryption. It is available only if qat acceleration is compiled in
2806 and the qat driver is present.
2807 .sp
2808 Use \fB1\fR for yes and \fB0\fR for no (default).
2809 .RE
2810
2811 .SH ZFS I/O SCHEDULER
2812 ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
2813 The I/O scheduler determines when and in what order those operations are
2814 issued. The I/O scheduler divides operations into five I/O classes
2815 prioritized in the following order: sync read, sync write, async read,
2816 async write, and scrub/resilver. Each queue defines the minimum and
2817 maximum number of concurrent operations that may be issued to the
2818 device. In addition, the device has an aggregate maximum,
2819 \fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
2820 must not exceed the aggregate maximum. If the sum of the per-queue
2821 maximums exceeds the aggregate maximum, then the number of active I/Os
2822 may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
2823 be issued regardless of whether all per-queue minimums have been met.
2824 .sp
2825 For many physical devices, throughput increases with the number of
2826 concurrent operations, but latency typically suffers. Further, physical
2827 devices typically have a limit at which more concurrent operations have no
2828 effect on throughput or can actually cause it to decrease.
2829 .sp
2830 The scheduler selects the next operation to issue by first looking for an
2831 I/O class whose minimum has not been satisfied. Once all are satisfied and
2832 the aggregate maximum has not been hit, the scheduler looks for classes
2833 whose maximum has not been satisfied. Iteration through the I/O classes is
2834 done in the order specified above. No further operations are issued if the
2835 aggregate maximum number of concurrent operations has been hit or if there
2836 are no operations queued for an I/O class that has not hit its maximum.
2837 Every time an I/O is queued or an operation completes, the I/O scheduler
2838 looks for new operations to issue.
2839 .sp
In general, smaller values of max_active will lead to lower latency of
synchronous operations. Larger values of max_active may lead to higher
overall throughput, depending on underlying storage.
2843 .sp
2844 The ratio of the queues' max_actives determines the balance of performance
2845 between reads, writes, and scrubs. E.g., increasing
2846 \fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
2847 more quickly, but reads and writes to have higher latency and lower throughput.
2848 .sp
2849 All I/O classes have a fixed maximum number of outstanding operations
2850 except for the async write class. Asynchronous writes represent the data
2851 that is committed to stable storage during the syncing stage for
2852 transaction groups. Transaction groups enter the syncing state
2853 periodically so the number of queued async writes will quickly burst up
2854 and then bleed down to zero. Rather than servicing them as quickly as
2855 possible, the I/O scheduler changes the maximum number of active async
2856 write I/Os according to the amount of dirty data in the pool. Since
2857 both throughput and latency typically increase with the number of
2858 concurrent operations issued to physical devices, reducing the
2859 burstiness in the number of concurrent operations also stabilizes the
2860 response time of operations from other -- and in particular synchronous
2861 -- queues. In broad strokes, the I/O scheduler will issue more
2862 concurrent operations from the async write queue as there's more dirty
2863 data in the pool.
2864 .sp
2865 Async Writes
2866 .sp
2867 The number of concurrent operations issued for the async write I/O class
2868 follows a piece-wise linear function defined by a few adjustable points.
2869 .nf
2870
2871 | o---------| <-- zfs_vdev_async_write_max_active
2872 ^ | /^ |
2873 | | / | |
2874 active | / | |
2875 I/O | / | |
2876 count | / | |
2877 | / | |
2878 |-------o | | <-- zfs_vdev_async_write_min_active
2879 0|_______^______|_________|
2880 0% | | 100% of zfs_dirty_data_max
2881 | |
2882 | `-- zfs_vdev_async_write_active_max_dirty_percent
2883 `--------- zfs_vdev_async_write_active_min_dirty_percent
2884
2885 .fi
2886 Until the amount of dirty data exceeds a minimum percentage of the dirty
2887 data allowed in the pool, the I/O scheduler will limit the number of
2888 concurrent operations to the minimum. As that threshold is crossed, the
2889 number of concurrent operations issued increases linearly to the maximum at
2890 the specified maximum percentage of the dirty data allowed in the pool.
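.sp
The ramp can be sketched in shell arithmetic as follows. This is a hedged
illustration only, with hypothetical example values; the real calculation is
performed inside the kernel using the tunables named above.
.sp
.nf
# Hypothetical tunable values, for illustration only.
min_active=1 max_active=10    # zfs_vdev_async_write_{min,max}_active
min_dirty=30 max_dirty=60     # ..._active_{min,max}_dirty_percent
dirty=45                      # dirty data, % of zfs_dirty_data_max

if [ "$dirty" -le "$min_dirty" ]; then
        active=$min_active
elif [ "$dirty" -ge "$max_dirty" ]; then
        active=$max_active
else
        # Linear interpolation between the two fixed points.
        active=$(( min_active + (dirty - min_dirty) *
                (max_active - min_active) / (max_dirty - min_dirty) ))
fi
echo "$active"                # prints 5 for these inputs
.fi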
2891 .sp
2892 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2893 part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
2894 and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
2895 maximum percentage, this indicates that the rate of incoming data is
2896 greater than the rate that the backend storage can handle. In this case, we
2897 must further throttle incoming writes, as described in the next section.
2898
2899 .SH ZFS TRANSACTION DELAY
2900 We delay transactions when we've determined that the backend storage
2901 isn't able to accommodate the rate of incoming writes.
2902 .sp
2903 If there is already a transaction waiting, we delay relative to when
2904 that transaction will finish waiting. This way the calculated delay time
2905 is independent of the number of threads concurrently executing
2906 transactions.
2907 .sp
2908 If we are the only waiter, wait relative to when the transaction
2909 started, rather than the current time. This credits the transaction for
2910 "time already served", e.g. reading indirect blocks.
2911 .sp
2912 The minimum time for a transaction to take is calculated as:
2913 .nf
2914 min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
2915 min_time is then capped at 100 milliseconds.
2916 .fi
2917 .sp
2918 The delay has two degrees of freedom that can be adjusted via tunables. The
2919 percentage of dirty data at which we start to delay is defined by
2920 \fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
2921 \fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
2922 delay after writing at full speed has failed to keep up with the incoming write
2923 rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
2924 this variable determines the amount of delay at the midpoint of the curve.
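.sp
As a worked illustration with hypothetical numbers: assume the dirty data
limit (max) is 4 GiB, delays start at 60% dirty (min = 2.4 GiB), and
\fBzfs_delay_scale\fR is 500,000. With 3.2 GiB of dirty data, exactly midway
between min and max, the formula above gives
.nf
    min_time = 500000 * (3.2 GiB - 2.4 GiB) / (4 GiB - 3.2 GiB) = 500000
.fi
i.e. a 500us delay, matching the midpoint shown in the curves below.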
2925 .sp
2926 .nf
2927 delay
2928 10ms +-------------------------------------------------------------*+
2929 | *|
2930 9ms + *+
2931 | *|
2932 8ms + *+
2933 | * |
2934 7ms + * +
2935 | * |
2936 6ms + * +
2937 | * |
2938 5ms + * +
2939 | * |
2940 4ms + * +
2941 | * |
2942 3ms + * +
2943 | * |
2944 2ms + (midpoint) * +
2945 | | ** |
2946 1ms + v *** +
2947 | zfs_delay_scale ----------> ******** |
2948 0 +-------------------------------------*********----------------+
2949 0% <- zfs_dirty_data_max -> 100%
2950 .fi
2951 .sp
2952 Note that since the delay is added to the outstanding time remaining on the
2953 most recent transaction, the delay is effectively the inverse of IOPS.
2954 Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
2955 was chosen such that small changes in the amount of accumulated dirty data
2956 in the first 3/4 of the curve yield relatively small differences in the
2957 amount of delay.
2958 .sp
2959 The effects can be easier to understand when the amount of delay is
2960 represented on a log scale:
2961 .sp
2962 .nf
2963 delay
2964 100ms +-------------------------------------------------------------++
2965 + +
2966 | |
2967 + *+
2968 10ms + *+
2969 + ** +
2970 | (midpoint) ** |
2971 + | ** +
2972 1ms + v **** +
2973 + zfs_delay_scale ----------> ***** +
2974 | **** |
2975 + **** +
2976 100us + ** +
2977 + * +
2978 | * |
2979 + * +
2980 10us + * +
2981 + +
2982 | |
2983 + +
2984 +--------------------------------------------------------------+
2985 0% <- zfs_dirty_data_max -> 100%
2986 .fi
2987 .sp
2988 Note here that only as the amount of dirty data approaches its limit does
2989 the delay start to increase rapidly. The goal of a properly tuned system
2990 should be to keep the amount of dirty data out of that range by first
2991 ensuring that the appropriate limits are set for the I/O scheduler to reach
2992 optimal throughput on the backend storage, and then by changing the value
2993 of \fBzfs_delay_scale\fR to increase the steepness of the curve.