2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
15 .TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
Description of the different parameters to the ZFS module.
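Most of the parameters listed below can be inspected, and the writable ones
changed at run time, through files under /sys/module/zfs/parameters, or set
persistently as options when the module is loaded. The following is a minimal
sketch of that interface in Python; the helper names are illustrative only and
the parameter shown is just an example.
.nf
#!/usr/bin/env python3
# Minimal sketch: read and (with sufficient privileges) write a ZFS
# module parameter through sysfs.  Assumes the zfs module is loaded.
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")

def read_param(name: str) -> str:
    """Return the current value of a module parameter as a string."""
    return (PARAMS / name).read_text().strip()

def write_param(name: str, value) -> None:
    """Set a writable module parameter (requires root)."""
    (PARAMS / name).write_text(str(value))

if __name__ == "__main__":
    print("zfs_txg_timeout =", read_param("zfs_txg_timeout"))
.fi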
23 .SS "Module parameters"
30 \fBignore_hole_birth\fR (int)
33 When set, the hole_birth optimization will not be used, and all holes will
34 always be sent on zfs send. Useful if you suspect your datasets are affected
35 by a bug in hole_birth.
37 Use \fB1\fR for on (default) and \fB0\fR for off.
43 \fBl2arc_feed_again\fR (int)
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
low as \fBl2arc_feed_min_ms\fR so that the device warms up more quickly.
49 Use \fB1\fR for yes (default) and \fB0\fR to disable.
55 \fBl2arc_feed_min_ms\fR (ulong)
Minimum feed interval in milliseconds. Only applicable when
\fBl2arc_feed_again=1\fR, in which case it bounds how quickly the L2ARC can be
fed during warm-up.
61 Default value: \fB200\fR.
67 \fBl2arc_feed_secs\fR (ulong)
70 Seconds between L2ARC writing
72 Default value: \fB1\fR.
78 \fBl2arc_headroom\fR (ulong)
81 How far through the ARC lists to search for L2ARC cacheable content, expressed
82 as a multiplier of \fBl2arc_write_max\fR
84 Default value: \fB2\fR.
90 \fBl2arc_headroom_boost\fR (ulong)
93 Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
94 successfully compressed before writing. A value of 100 disables this feature.
96 Default value: \fB200\fR.
102 \fBl2arc_nocompress\fR (int)
105 Skip compressing L2ARC buffers
107 Use \fB1\fR for yes and \fB0\fR for no (default).
113 \fBl2arc_noprefetch\fR (int)
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
119 Use \fB1\fR for yes (default) and \fB0\fR to disable.
125 \fBl2arc_norw\fR (int)
128 No reads during writes
130 Use \fB1\fR for yes and \fB0\fR for no (default).
136 \fBl2arc_write_boost\fR (ulong)
139 Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
140 while they remain cold.
142 Default value: \fB8,388,608\fR.
148 \fBl2arc_write_max\fR (ulong)
151 Max write bytes per interval
153 Default value: \fB8,388,608\fR.
159 \fBmetaslab_aliquot\fR (ulong)
162 Metaslab granularity, in bytes. This is roughly similar to what would be
163 referred to as the "stripe size" in traditional RAID arrays. In normal
164 operation, ZFS will try to write this amount of data to a top-level vdev
165 before moving on to the next one.
167 Default value: \fB524,288\fR.
173 \fBmetaslab_bias_enabled\fR (int)
176 Enable metaslab group biasing based on its vdev's over- or under-utilization
177 relative to the pool.
179 Use \fB1\fR for yes (default) and \fB0\fR for no.
185 \fBzfs_metaslab_segment_weight_enabled\fR (int)
188 Enable/disable segment-based metaslab selection.
190 Use \fB1\fR for yes (default) and \fB0\fR for no.
196 \fBzfs_metaslab_switch_threshold\fR (int)
199 When using segment-based metaslab selection, continue allocating
200 from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
201 worth of buckets have been exhausted.
203 Default value: \fB2\fR.
209 \fBmetaslab_debug_load\fR (int)
212 Load all metaslabs during pool import.
214 Use \fB1\fR for yes and \fB0\fR for no (default).
220 \fBmetaslab_debug_unload\fR (int)
223 Prevent metaslabs from being unloaded.
225 Use \fB1\fR for yes and \fB0\fR for no (default).
231 \fBmetaslab_fragmentation_factor_enabled\fR (int)
234 Enable use of the fragmentation metric in computing metaslab weights.
236 Use \fB1\fR for yes (default) and \fB0\fR for no.
242 \fBmetaslabs_per_vdev\fR (int)
245 When a vdev is added, it will be divided into approximately (but no more than) this number of metaslabs.
247 Default value: \fB200\fR.
253 \fBmetaslab_preload_enabled\fR (int)
256 Enable metaslab group preloading.
258 Use \fB1\fR for yes (default) and \fB0\fR for no.
264 \fBmetaslab_lba_weighting_enabled\fR (int)
267 Give more weight to metaslabs with lower LBAs, assuming they have
268 greater bandwidth as is typically the case on a modern constant
269 angular velocity disk drive.
271 Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBspa_config_path\fR (charp)
SPA config file (/etc/zfs/zpool.cache)
Default value: \fB/etc/zfs/zpool.cache\fR.
288 \fBspa_asize_inflation\fR (int)
291 Multiplication factor used to estimate actual disk consumption from the
292 size of data being written. The default value is a worst case estimate,
293 but lower values may be valid for a given pool depending on its
294 configuration. Pool administrators who understand the factors involved
295 may wish to specify a more realistic inflation factor, particularly if
296 they operate close to quota or capacity limits.
298 Default value: \fB24\fR.
304 \fBspa_load_verify_data\fR (int)
307 Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
308 import. Use 0 to disable and 1 to enable.
310 An extreme rewind import normally performs a full traversal of all
311 blocks in the pool for verification. If this parameter is set to 0,
312 the traversal skips non-metadata blocks. It can be toggled once the
313 import has started to stop or start the traversal of non-metadata blocks.
315 Default value: \fB1\fR.
321 \fBspa_load_verify_metadata\fR (int)
324 Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
325 pool import. Use 0 to disable and 1 to enable.
327 An extreme rewind import normally performs a full traversal of all
328 blocks in the pool for verification. If this parameter is set to 0,
329 the traversal is not performed. It can be toggled once the import has
330 started to stop or start the traversal.
332 Default value: \fB1\fR.
338 \fBspa_load_verify_maxinflight\fR (int)
341 Maximum concurrent I/Os during the traversal performed during an "extreme
342 rewind" (\fB-X\fR) pool import.
344 Default value: \fB10000\fR.
350 \fBspa_slop_shift\fR (int)
353 Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
354 in the pool to be consumed. This ensures that we don't run the pool
355 completely out of space, due to unaccounted changes (e.g. to the MOS).
356 It also limits the worst-case time to allocate space. If we have
357 less than this amount of free space, most ZPL operations (e.g. write,
358 create) will return ENOSPC.
360 Default value: \fB5\fR.
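As a worked illustration of the relationship described above (a sketch only,
not part of the module), the reserved fraction is 1/(2^\fBspa_slop_shift\fR):
.nf
# Sketch: fraction of pool space reserved as slop for a given
# spa_slop_shift value, i.e. 1 / 2^shift.
def slop_fraction(spa_slop_shift: int) -> float:
    return 1.0 / (1 << spa_slop_shift)

for shift in (4, 5, 6):
    print(f"spa_slop_shift={shift}: {slop_fraction(shift):.2%} reserved")
# The default of 5 reserves roughly 3.1% of the pool.
.fi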
366 \fBzfetch_array_rd_sz\fR (ulong)
369 If prefetching is enabled, disable prefetching for reads larger than this size.
371 Default value: \fB1,048,576\fR.
377 \fBzfetch_max_distance\fR (uint)
380 Max bytes to prefetch per stream (default 8MB).
382 Default value: \fB8,388,608\fR.
388 \fBzfetch_max_streams\fR (uint)
391 Max number of streams per zfetch (prefetch streams per file).
393 Default value: \fB8\fR.
399 \fBzfetch_min_sec_reap\fR (uint)
402 Min time before an active prefetch stream can be reclaimed
404 Default value: \fB2\fR.
410 \fBzfs_arc_dnode_limit\fR (ulong)
413 When the number of bytes consumed by dnodes in the ARC exceeds this number of
414 bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling to the amount of dnode metadata, and defaults to 0,
which means that a percentage of the ARC meta buffers, given by
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.
419 See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
420 when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
421 than in response to overall demand for non-metadata.
424 Default value: \fB0\fR.
430 \fBzfs_arc_dnode_limit_percent\fR (ulong)
433 Percentage that can be consumed by dnodes of ARC meta buffers.
435 See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
436 higher priority if set to nonzero value.
438 Default value: \fB10\fR.
444 \fBzfs_arc_dnode_reduce_percent\fR (ulong)
447 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
448 when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.
451 Default value: \fB10% of the number of dnodes in the ARC\fR.
457 \fBzfs_arc_average_blocksize\fR (int)
460 The ARC's buffer hash table is sized based on the assumption of an average
461 block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
462 to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
463 For configurations with a known larger average block size this value can be
464 increased to reduce the memory footprint.
467 Default value: \fB8192\fR.
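The following sketch applies the rule of thumb above to estimate the hash
table footprint for a given amount of physical memory; the helper name is
illustrative only.
.nf
# Sketch: approximate ARC buffer hash table footprint implied by
# zfs_arc_average_blocksize, assuming 8-byte pointers as stated above.
def hash_table_bytes(phys_mem_bytes: int, avg_blocksize: int = 8192) -> int:
    return (phys_mem_bytes // avg_blocksize) * 8

GiB = 1 << 30
print(hash_table_bytes(16 * GiB) // (1 << 20), "MiB for 16 GiB of RAM")
# Doubling zfs_arc_average_blocksize roughly halves this footprint.
.fi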
473 \fBzfs_arc_evict_batch_limit\fR (int)
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
477 This batch-style operation prevents entire sub-lists from being evicted at once
478 but comes at a cost of additional unlocking and locking.
480 Default value: \fB10\fR.
486 \fBzfs_arc_grow_retry\fR (int)
After a memory pressure event the ARC will wait this many seconds before trying
to resume growth.
492 Default value: \fB5\fR.
498 \fBzfs_arc_lotsfree_percent\fR (int)
501 Throttle I/O when free system memory drops below this percentage of total
502 system memory. Setting this value to 0 will disable the throttle.
504 Default value: \fB10\fR.
510 \fBzfs_arc_max\fR (ulong)
Maximum size of the ARC in bytes. If set to 0 then it will consume 1/2 of system
514 RAM. This value must be at least 67108864 (64 megabytes).
516 This value can be changed dynamically with some caveats. It cannot be set back
517 to 0 while running and reducing it below the current ARC size will not cause
518 the ARC to shrink without memory pressure to induce shrinking.
520 Default value: \fB0\fR.
526 \fBzfs_arc_meta_limit\fR (ulong)
529 The maximum allowed size in bytes that meta data buffers are allowed to
530 consume in the ARC. When this limit is reached meta data buffers will
531 be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage of the ARC, given by
\fBzfs_arc_meta_limit_percent\fR, may be used for meta data.
This value may be changed dynamically, except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
538 Default value: \fB0\fR.
544 \fBzfs_arc_meta_limit_percent\fR (ulong)
547 Percentage of ARC buffers that can be used for meta data.
549 See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
550 higher priority if set to nonzero value.
553 Default value: \fB75\fR.
559 \fBzfs_arc_meta_min\fR (ulong)
562 The minimum allowed size in bytes that meta data buffers may consume in
563 the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
566 Default value: \fB0\fR.
572 \fBzfs_arc_meta_prune\fR (int)
575 The number of dentries and inodes to be scanned looking for entries
576 which can be dropped. This may be required when the ARC reaches the
577 \fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
579 to be pruned more aggressively. Setting this value to 0 will disable
580 pruning the inode and dentry caches.
582 Default value: \fB10,000\fR.
588 \fBzfs_arc_meta_adjust_restarts\fR (ulong)
591 The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
593 This value should not need to be tuned but is available to facilitate
594 performance analysis.
596 Default value: \fB4096\fR.
\fBzfs_arc_min\fR (ulong)
Minimum ARC size in bytes.
Default value: \fB100\fR.
613 \fBzfs_arc_min_prefetch_lifespan\fR (int)
616 Minimum time prefetched blocks are locked in the ARC, specified in jiffies.
617 A value of 0 will default to 1 second.
619 Default value: \fB0\fR.
625 \fBzfs_multilist_num_sublists\fR (int)
628 To allow more fine-grained locking, each ARC state contains a series
629 of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
631 sub-lists per ARC state, and also applies to other uses of the
632 multilist data structure.
634 Default value: \fB4\fR or the number of online CPUs, whichever is greater
640 \fBzfs_arc_overflow_shift\fR (int)
643 The ARC size is considered to be overflowing if it exceeds the current
644 ARC target size (arc_c) by a threshold determined by this parameter.
645 The threshold is calculated as a fraction of arc_c using the formula
646 "arc_c >> \fBzfs_arc_overflow_shift\fR".
648 The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (approximately 0.4%) of the target size.
651 When the ARC is overflowing, new buffer allocations are stalled until
652 the reclaim thread catches up and the overflow condition no longer exists.
654 Default value: \fB8\fR.
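A small sketch of the threshold calculation above, for illustration only:
.nf
# Sketch: overflow threshold implied by zfs_arc_overflow_shift for a
# given ARC target size (arc_c), i.e. arc_c >> shift.
def overflow_threshold(arc_c: int, zfs_arc_overflow_shift: int = 8) -> int:
    return arc_c >> zfs_arc_overflow_shift

arc_c = 8 << 30  # example target size of 8 GiB
print(overflow_threshold(arc_c) // (1 << 20), "MiB over target counts as overflow")
.fi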
661 \fBzfs_arc_p_min_shift\fR (int)
Shift of arc_c used to calculate the minimum and maximum arc_p.
666 Default value: \fB4\fR.
672 \fBzfs_arc_p_aggressive_disable\fR (int)
675 Disable aggressive arc_p growth
677 Use \fB1\fR for yes (default) and \fB0\fR to disable.
683 \fBzfs_arc_p_dampener_disable\fR (int)
686 Disable arc_p adapt dampener
688 Use \fB1\fR for yes (default) and \fB0\fR to disable.
694 \fBzfs_arc_shrink_shift\fR (int)
697 log2(fraction of arc to reclaim)
699 Default value: \fB5\fR.
705 \fBzfs_arc_pc_percent\fR (uint)
708 Percent of pagecache to reclaim arc to
710 This tunable allows ZFS arc to play more nicely with the kernel's LRU
711 pagecache. It can guarantee that the arc size won't collapse under scanning
712 pressure on the pagecache, yet still allows arc to be reclaimed down to
713 zfs_arc_min if necessary. This value is specified as percent of pagecache
714 size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
715 only operates during memory pressure/reclaim.
717 Default value: \fB0\fR (disabled).
723 \fBzfs_arc_sys_free\fR (ulong)
726 The target number of bytes the ARC should leave as free memory on the system.
727 Defaults to the larger of 1/64 of physical memory or 512K. Setting this
728 option to a non-zero value will override the default.
730 Default value: \fB0\fR.
736 \fBzfs_autoimport_disable\fR (int)
739 Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
741 Use \fB1\fR for yes (default) and \fB0\fR for no.
747 \fBzfs_dbgmsg_enable\fR (int)
750 Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled; to enable it, set this option to 1. The contents of the log can
752 be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
753 this proc file clears the log.
755 Default value: \fB0\fR.
761 \fBzfs_dbgmsg_maxsize\fR (int)
764 The maximum size in bytes of the internal ZFS debug log.
766 Default value: \fB4M\fR.
772 \fBzfs_dbuf_state_index\fR (int)
775 This feature is currently unused. It is normally used for controlling what
776 reporting is available under /proc/spl/kstat/zfs.
778 Default value: \fB0\fR.
784 \fBzfs_deadman_enabled\fR (int)
787 When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
788 milliseconds, a "slow spa_sync" message is logged to the debug log
789 (see \fBzfs_dbgmsg_enable\fR). If \fBzfs_deadman_enabled\fR is set,
790 all pending IO operations are also checked and if any haven't completed
791 within \fBzfs_deadman_synctime_ms\fR milliseconds, a "SLOW IO" message
792 is logged to the debug log and a "delay" system event with the details of
793 the hung IO is posted.
795 Use \fB1\fR (default) to enable the slow IO check and \fB0\fR to disable.
801 \fBzfs_deadman_checktime_ms\fR (int)
804 Once a pool sync operation has taken longer than
805 \fBzfs_deadman_synctime_ms\fR milliseconds, continue to check for slow
806 operations every \fBzfs_deadman_checktime_ms\fR milliseconds.
808 Default value: \fB5,000\fR.
814 \fBzfs_deadman_synctime_ms\fR (ulong)
817 Interval in milliseconds after which the deadman is triggered and also
818 the interval after which an IO operation is considered to be "hung"
819 if \fBzfs_deadman_enabled\fR is set.
821 See \fBzfs_deadman_enabled\fR.
823 Default value: \fB1,000,000\fR.
829 \fBzfs_dedup_prefetch\fR (int)
Enable prefetching of deduplicated blocks.
834 Use \fB1\fR for yes and \fB0\fR to disable (default).
840 \fBzfs_delay_min_dirty_percent\fR (int)
843 Start to delay each transaction once there is this amount of dirty data,
844 expressed as a percentage of \fBzfs_dirty_data_max\fR.
845 This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
846 See the section "ZFS TRANSACTION DELAY".
848 Default value: \fB60\fR.
854 \fBzfs_delay_scale\fR (int)
857 This controls how quickly the transaction delay approaches infinity.
858 Larger values cause longer delays for a given amount of dirty data.
860 For the smoothest delay, this value should be about 1 billion divided
861 by the maximum number of operations per second. This will smoothly
862 handle between 10x and 1/10th this number.
864 See the section "ZFS TRANSACTION DELAY".
866 Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
868 Default value: \fB500,000\fR.
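As a sketch of the sizing guidance above (illustrative only), the suggested
value is roughly one billion divided by the backend's maximum sustainable
operations per second:
.nf
# Sketch: derive a zfs_delay_scale value from an estimate of the
# backend's maximum write operations per second, per the guidance above.
def suggested_delay_scale(max_ops_per_sec: float) -> int:
    return int(1_000_000_000 / max_ops_per_sec)

print(suggested_delay_scale(2000))   # 500000, the default, for ~2000 IOPS
print(suggested_delay_scale(20000))  # 50000 for a much faster backend
.fi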
874 \fBzfs_delete_blocks\fR (ulong)
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted asynchronously
879 while smaller files are deleted synchronously. Decreasing this value will
880 reduce the time spent in an unlink(2) system call at the expense of a longer
881 delay before the freed space is available.
883 Default value: \fB20,480\fR.
889 \fBzfs_dirty_data_max\fR (int)
892 Determines the dirty space limit in bytes. Once this limit is exceeded, new
893 writes are halted until space frees up. This parameter takes precedence
894 over \fBzfs_dirty_data_max_percent\fR.
895 See the section "ZFS TRANSACTION DELAY".
897 Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
903 \fBzfs_dirty_data_max_max\fR (int)
906 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
907 This limit is only enforced at module load time, and will be ignored if
908 \fBzfs_dirty_data_max\fR is later changed. This parameter takes
909 precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
910 "ZFS TRANSACTION DELAY".
912 Default value: 25% of physical RAM.
918 \fBzfs_dirty_data_max_max_percent\fR (int)
921 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
922 percentage of physical RAM. This limit is only enforced at module load
923 time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
924 The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
925 one. See the section "ZFS TRANSACTION DELAY".
927 Default value: \fB25\fR.
933 \fBzfs_dirty_data_max_percent\fR (int)
936 Determines the dirty space limit, expressed as a percentage of all
937 memory. Once this limit is exceeded, new writes are halted until space frees
938 up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
939 one. See the section "ZFS TRANSACTION DELAY".
941 Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
947 \fBzfs_dirty_data_sync\fR (int)
950 Start syncing out a transaction group if there is at least this much dirty data.
952 Default value: \fB67,108,864\fR.
958 \fBzfs_fletcher_4_impl\fR (string)
961 Select a fletcher 4 implementation.
963 Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
964 \fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
965 All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
966 set extensions to be available and will only appear if ZFS detects that they are
967 present at runtime. If multiple implementations of fletcher 4 are available,
968 the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
results in the original, CPU-based calculation being used. Selecting any option
970 other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
971 the respective CPU instruction set being used.
973 Default value: \fBfastest\fR.
979 \fBzfs_free_bpobj_enabled\fR (int)
982 Enable/disable the processing of the free_bpobj object.
984 Default value: \fB1\fR.
990 \fBzfs_free_max_blocks\fR (ulong)
993 Maximum number of blocks freed in a single txg.
995 Default value: \fB100,000\fR.
1001 \fBzfs_vdev_async_read_max_active\fR (int)
1004 Maximum asynchronous read I/Os active to each device.
1005 See the section "ZFS I/O SCHEDULER".
1007 Default value: \fB3\fR.
1013 \fBzfs_vdev_async_read_min_active\fR (int)
1016 Minimum asynchronous read I/Os active to each device.
1017 See the section "ZFS I/O SCHEDULER".
1019 Default value: \fB1\fR.
1025 \fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
1028 When the pool has more than
1029 \fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
1030 \fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
1031 the dirty data is between min and max, the active I/O limit is linearly
1032 interpolated. See the section "ZFS I/O SCHEDULER".
1034 Default value: \fB60\fR.
1040 \fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
1043 When the pool has less than
1044 \fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
1045 \fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
1046 the dirty data is between min and max, the active I/O limit is linearly
1047 interpolated. See the section "ZFS I/O SCHEDULER".
1049 Default value: \fB30\fR.
1055 \fBzfs_vdev_async_write_max_active\fR (int)
1058 Maximum asynchronous write I/Os active to each device.
1059 See the section "ZFS I/O SCHEDULER".
1061 Default value: \fB10\fR.
1067 \fBzfs_vdev_async_write_min_active\fR (int)
1070 Minimum asynchronous write I/Os active to each device.
1071 See the section "ZFS I/O SCHEDULER".
1073 Lower values are associated with better latency on rotational media but poorer
1074 resilver performance. The default value of 2 was chosen as a compromise. A
1075 value of 3 has been shown to improve resilver performance further at a cost of
1076 further increasing latency.
1078 Default value: \fB2\fR.
1084 \fBzfs_vdev_max_active\fR (int)
1087 The maximum number of I/Os active to each device. Ideally, this will be >=
1088 the sum of each queue's max_active. It must be at least the sum of each
1089 queue's min_active. See the section "ZFS I/O SCHEDULER".
1091 Default value: \fB1,000\fR.
1097 \fBzfs_vdev_scrub_max_active\fR (int)
1100 Maximum scrub I/Os active to each device.
1101 See the section "ZFS I/O SCHEDULER".
1103 Default value: \fB2\fR.
1109 \fBzfs_vdev_scrub_min_active\fR (int)
1112 Minimum scrub I/Os active to each device.
1113 See the section "ZFS I/O SCHEDULER".
1115 Default value: \fB1\fR.
1121 \fBzfs_vdev_sync_read_max_active\fR (int)
1124 Maximum synchronous read I/Os active to each device.
1125 See the section "ZFS I/O SCHEDULER".
1127 Default value: \fB10\fR.
1133 \fBzfs_vdev_sync_read_min_active\fR (int)
1136 Minimum synchronous read I/Os active to each device.
1137 See the section "ZFS I/O SCHEDULER".
1139 Default value: \fB10\fR.
1145 \fBzfs_vdev_sync_write_max_active\fR (int)
1148 Maximum synchronous write I/Os active to each device.
1149 See the section "ZFS I/O SCHEDULER".
1151 Default value: \fB10\fR.
1157 \fBzfs_vdev_sync_write_min_active\fR (int)
1160 Minimum synchronous write I/Os active to each device.
1161 See the section "ZFS I/O SCHEDULER".
1163 Default value: \fB10\fR.
1169 \fBzfs_vdev_queue_depth_pct\fR (int)
1172 Maximum number of queued allocations per top-level vdev expressed as
1173 a percentage of \fBzfs_vdev_async_write_max_active\fR which allows the
1174 system to detect devices that are more capable of handling allocations
1175 and to allocate more blocks to those devices. It allows for dynamic
1176 allocation distribution when devices are imbalanced as fuller devices
1177 will tend to be slower than empty devices.
1179 See also \fBzio_dva_throttle_enabled\fR.
1181 Default value: \fB1000\fR.
1187 \fBzfs_disable_dup_eviction\fR (int)
1190 Disable duplicate buffer eviction
1192 Use \fB1\fR for yes and \fB0\fR for no (default).
1198 \fBzfs_expire_snapshot\fR (int)
1201 Seconds to expire .zfs/snapshot
1203 Default value: \fB300\fR.
1209 \fBzfs_admin_snapshot\fR (int)
1212 Allow the creation, removal, or renaming of entries in the .zfs/snapshot
1213 directory to cause the creation, destruction, or renaming of snapshots.
1214 When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled by default.
1218 Use \fB1\fR for yes and \fB0\fR for no (default).
1224 \fBzfs_flags\fR (int)
Set additional debugging flags. The following flags may be bitwise-or'd
together.
1 ZFS_DEBUG_DPRINTF
Enable dprintf entries in the debug log.
2 ZFS_DEBUG_DBUF_VERIFY *
Enable extra dbuf verifications.
4 ZFS_DEBUG_DNODE_VERIFY *
Enable extra dnode verifications.
8 ZFS_DEBUG_SNAPNAMES
Enable snapshot name verification.
16 ZFS_DEBUG_MODIFY
Check for illegally modified ARC buffers.
32 ZFS_DEBUG_SPA
Enable spa_dbgmsg entries in the debug log.
64 ZFS_DEBUG_ZIO_FREE
Enable verification of block frees.
128 ZFS_DEBUG_HISTOGRAM_VERIFY
Enable extra spacemap histogram verifications.
* Requires debug build.
1265 Default value: \fB0\fR.
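For example, several flags can be combined by bitwise-or'ing their values from
the table above (a sketch, using Python only to show the arithmetic):
.nf
# Sketch: combine several zfs_flags values from the table above.
ZFS_DEBUG_DBUF_VERIFY  = 2
ZFS_DEBUG_DNODE_VERIFY = 4
ZFS_DEBUG_ZIO_FREE     = 64

zfs_flags = ZFS_DEBUG_DBUF_VERIFY | ZFS_DEBUG_DNODE_VERIFY | ZFS_DEBUG_ZIO_FREE
print(zfs_flags)  # 70; this value could then be written to the parameter
.fi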
1271 \fBzfs_free_leak_on_eio\fR (int)
1274 If destroy encounters an EIO while reading metadata (e.g. indirect
1275 blocks), space referenced by the missing metadata can not be freed.
1276 Normally this causes the background destroy to become "stalled", as
1277 it is unable to make forward progress. While in this stalled state,
1278 all remaining space to free from the error-encountering filesystem is
1279 "temporarily leaked". Set this flag to cause it to ignore the EIO,
1280 permanently leak the space from indirect blocks that can not be read,
1281 and continue to free everything else that it can.
1283 The default, "stalling" behavior is useful if the storage partially
1284 fails (i.e. some but not all i/os fail), and then later recovers. In
1285 this case, we will be able to continue pool operations while it is
1286 partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually fairly rare.
1290 Typically pools either (a) fail completely (but perhaps temporarily,
1291 e.g. a top-level vdev going offline), or (b) have localized,
1292 permanent errors (e.g. disk returns the wrong data due to bit flip or
1293 firmware bug). In case (a), this setting does not matter because the
1294 pool will be suspended and the sync thread will not be able to make
1295 forward progress regardless. In case (b), because the error is
1296 permanent, the best we can do is leak the minimum amount of space,
1297 which is what setting this flag will do. Therefore, it is reasonable
1298 for this flag to normally be set, but we chose the more conservative
1299 approach of not setting it, so that there is no possibility of
1300 leaking space in the "partial temporary" failure case.
1302 Default value: \fB0\fR.
1308 \fBzfs_free_min_time_ms\fR (int)
1311 During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
1312 of this much time will be spent working on freeing blocks per txg.
1314 Default value: \fB1,000\fR.
1320 \fBzfs_immediate_write_sz\fR (long)
1323 Largest data block to write to zil. Larger blocks will be treated as if the
1324 dataset being written to had the property setting \fBlogbias=throughput\fR.
1326 Default value: \fB32,768\fR.
1332 \fBzfs_max_recordsize\fR (int)
1335 We currently support block sizes from 512 bytes to 16MB. The benefits of
1336 larger blocks, and thus larger IO, need to be weighed against the cost of
1337 COWing a giant block to modify one byte. Additionally, very large blocks
1338 can have an impact on i/o latency, and also potentially on the memory
1339 allocator. Therefore, we do not allow the recordsize to be set larger than
1340 zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
1341 this tunable, and pools with larger blocks can always be imported and used,
1342 regardless of this setting.
1344 Default value: \fB1,048,576\fR.
1350 \fBzfs_mdcomp_disable\fR (int)
1353 Disable meta data compression
1355 Use \fB1\fR for yes and \fB0\fR for no (default).
1361 \fBzfs_metaslab_fragmentation_threshold\fR (int)
1364 Allow metaslabs to keep their active state as long as their fragmentation
1365 percentage is less than or equal to this value. An active metaslab that
1366 exceeds this threshold will no longer keep its active status allowing
1367 better metaslabs to be selected.
1369 Default value: \fB70\fR.
1375 \fBzfs_mg_fragmentation_threshold\fR (int)
1378 Metaslab groups are considered eligible for allocations if their
1379 fragmentation metric (measured as a percentage) is less than or equal to
1380 this value. If a metaslab group exceeds this threshold then it will be
1381 skipped unless all metaslab groups within the metaslab class have also
1382 crossed this threshold.
1384 Default value: \fB85\fR.
1390 \fBzfs_mg_noalloc_threshold\fR (int)
1393 Defines a threshold at which metaslab groups should be eligible for
1394 allocations. The value is expressed as a percentage of free space
1395 beyond which a metaslab group is always eligible for allocations.
1396 If a metaslab group's free space is less than or equal to the
1397 threshold, the allocator will avoid allocating to that group
1398 unless all groups in the pool have reached the threshold. Once all
1399 groups have reached the threshold, all groups are allowed to accept
1400 allocations. The default value of 0 disables the feature and causes
1401 all metaslab groups to be eligible for allocations.
1403 This parameter allows to deal with pools having heavily imbalanced
1404 vdevs such as would be the case when a new vdev has been added.
1405 Setting the threshold to a non-zero percentage will stop allocations
1406 from being made to vdevs that aren't filled to the specified percentage
1407 and allow lesser filled vdevs to acquire more allocations than they
1408 otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
1410 Default value: \fB0\fR.
1416 \fBzfs_no_scrub_io\fR (int)
1419 Set for no scrub I/O. This results in scrubs not actually scrubbing data and
1420 simply doing a metadata crawl of the pool instead.
1422 Use \fB1\fR for yes and \fB0\fR for no (default).
1428 \fBzfs_no_scrub_prefetch\fR (int)
1431 Set to disable block prefetching for scrubs.
1433 Use \fB1\fR for yes and \fB0\fR for no (default).
1439 \fBzfs_nocacheflush\fR (int)
1442 Disable cache flush operations on disks when writing. Beware, this may cause
1443 corruption if disks re-order writes.
1445 Use \fB1\fR for yes and \fB0\fR for no (default).
1451 \fBzfs_nopwrite_enabled\fR (int)
1456 Use \fB1\fR for yes (default) and \fB0\fR to disable.
1462 \fBzfs_dmu_offset_next_sync\fR (int)
1465 Enable forcing txg sync to find holes. When enabled forces ZFS to act
1466 like prior versions when SEEK_HOLE or SEEK_DATA flags are used, which
when a dnode is dirty causes txg's to be synced so that this data can be found.
1470 Use \fB1\fR for yes and \fB0\fR to disable (default).
1476 \fBzfs_pd_bytes_max\fR (int)
1479 The number of bytes which should be prefetched during a pool traversal
1480 (eg: \fBzfs send\fR or other data crawling operations)
1482 Default value: \fB52,428,800\fR.
\fBzfs_per_txg_dirty_frees_percent\fR (ulong)
1491 Tunable to control percentage of dirtied blocks from frees in one TXG.
1492 After this threshold is crossed, additional dirty blocks from frees
1493 wait until the next TXG.
1494 A value of zero will disable this throttle.
Default value: \fB30\fR (set to \fB0\fR to disable).
1504 \fBzfs_prefetch_disable\fR (int)
1507 This tunable disables predictive prefetch. Note that it leaves "prescient"
1508 prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
1509 prescient prefetch never issues i/os that end up not being needed, so it
1510 can't hurt performance.
1512 Use \fB1\fR for yes and \fB0\fR for no (default).
1518 \fBzfs_read_chunk_size\fR (long)
1521 Bytes to read per chunk
1523 Default value: \fB1,048,576\fR.
1529 \fBzfs_read_history\fR (int)
1532 Historic statistics for the last N reads will be available in
\fB/proc/spl/kstat/zfs/POOLNAME/reads\fR
1535 Default value: \fB0\fR (no data is kept).
1541 \fBzfs_read_history_hits\fR (int)
1544 Include cache hits in read history
1546 Use \fB1\fR for yes and \fB0\fR for no (default).
1552 \fBzfs_recover\fR (int)
1555 Set to attempt to recover from fatal errors. This should only be used as a
1556 last resort, as it typically results in leaked space, or worse.
1558 Use \fB1\fR for yes and \fB0\fR for no (default).
1564 \fBzfs_resilver_delay\fR (int)
1567 Number of ticks to delay prior to issuing a resilver I/O operation when
1568 a non-resilver or non-scrub I/O operation has occurred within the past
1569 \fBzfs_scan_idle\fR ticks.
1571 Default value: \fB2\fR.
1577 \fBzfs_resilver_min_time_ms\fR (int)
1580 Resilvers are processed by the sync thread. While resilvering it will spend
1581 at least this much time working on a resilver between txg flushes.
1583 Default value: \fB3,000\fR.
1589 \fBzfs_scan_idle\fR (int)
1592 Idle window in clock ticks. During a scrub or a resilver, if
1593 a non-scrub or non-resilver I/O operation has occurred during this
1594 window, the next scrub or resilver operation is delayed by, respectively
1595 \fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks.
1597 Default value: \fB50\fR.
1603 \fBzfs_scan_min_time_ms\fR (int)
1606 Scrubs are processed by the sync thread. While scrubbing it will spend
1607 at least this much time working on a scrub between txg flushes.
1609 Default value: \fB1,000\fR.
1615 \fBzfs_scrub_delay\fR (int)
1618 Number of ticks to delay prior to issuing a scrub I/O operation when
1619 a non-scrub or non-resilver I/O operation has occurred within the past
1620 \fBzfs_scan_idle\fR ticks.
1622 Default value: \fB4\fR.
1628 \fBzfs_send_corrupt_data\fR (int)
1631 Allow sending of corrupt data (ignore read/checksum errors when sending data)
1633 Use \fB1\fR for yes and \fB0\fR for no (default).
1639 \fBzfs_sync_pass_deferred_free\fR (int)
1642 Flushing of data to disk is done in passes. Defer frees starting in this pass
1644 Default value: \fB2\fR.
1650 \fBzfs_sync_pass_dont_compress\fR (int)
1653 Don't compress starting in this pass
1655 Default value: \fB5\fR.
1661 \fBzfs_sync_pass_rewrite\fR (int)
1664 Rewrite new block pointers starting in this pass
1666 Default value: \fB2\fR.
1672 \fBzfs_top_maxinflight\fR (int)
1675 Max concurrent I/Os per top-level vdev (mirrors or raidz arrays) allowed during
1676 scrub or resilver operations.
1678 Default value: \fB32\fR.
1684 \fBzfs_txg_history\fR (int)
1687 Historic statistics for the last N txgs will be available in
\fB/proc/spl/kstat/zfs/POOLNAME/txgs\fR
1690 Default value: \fB0\fR.
1696 \fBzfs_txg_timeout\fR (int)
1699 Flush dirty data to disk at least every N seconds (maximum txg duration)
1701 Default value: \fB5\fR.
1707 \fBzfs_vdev_aggregation_limit\fR (int)
1710 Max vdev I/O aggregation size
1712 Default value: \fB131,072\fR.
1718 \fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to.
1723 Default value: \fB16\fR (effectively 65536).
1729 \fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
size (default 64k).
1735 Default value: \fB16384\fR.
1741 \fBzfs_vdev_cache_size\fR (int)
1744 Total size of the per-disk cache in bytes.
1746 Currently this feature is disabled as it has been found to not be helpful
1747 for performance and in some cases harmful.
1749 Default value: \fB0\fR.
1755 \fBzfs_vdev_mirror_rotating_inc\fR (int)
1758 A number by which the balancing algorithm increments the load calculation for
1759 the purpose of selecting the least busy mirror member when an I/O immediately
follows its predecessor on rotational vdevs for the purpose of making decisions
based on load.
1763 Default value: \fB0\fR.
1769 \fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
1772 A number by which the balancing algorithm increments the load calculation for
1773 the purpose of selecting the least busy mirror member when an I/O lacks
1774 locality as defined by the zfs_vdev_mirror_rotating_seek_offset. I/Os within
this that are not immediately following the previous I/O are incremented by half.
1778 Default value: \fB5\fR.
1784 \fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
1787 The maximum distance for the last queued I/O in which the balancing algorithm
1788 considers an I/O to have locality.
1789 See the section "ZFS I/O SCHEDULER".
1791 Default value: \fB1048576\fR.
1797 \fBzfs_vdev_mirror_non_rotating_inc\fR (int)
1800 A number by which the balancing algorithm increments the load calculation for
1801 the purpose of selecting the least busy mirror member on non-rotational vdevs
1802 when I/Os do not immediately follow one another.
1804 Default value: \fB0\fR.
1810 \fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
1813 A number by which the balancing algorithm increments the load calculation for
1814 the purpose of selecting the least busy mirror member when an I/O lacks
1815 locality as defined by the zfs_vdev_mirror_rotating_seek_offset. I/Os within
this that are not immediately following the previous I/O are incremented by half.
1819 Default value: \fB1\fR.
1825 \fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O operations if the gap on-disk between them is within this
threshold.
1831 Default value: \fB32,768\fR.
1837 \fBzfs_vdev_scheduler\fR (charp)
1840 Set the Linux I/O scheduler on whole disk vdevs to this scheduler
1842 Default value: \fBnoop\fR.
1848 \fBzfs_vdev_write_gap_limit\fR (int)
1851 Aggregate write I/O over gap
1853 Default value: \fB4,096\fR.
1859 \fBzfs_vdev_raidz_impl\fR (string)
1862 Parameter for selecting raidz parity implementation to use.
1864 Options marked (always) below may be selected on module load as they are
1865 supported on all systems.
1866 The remaining options may only be set after the module is loaded, as they
1867 are available only if the implementations are compiled in and supported
1868 on the running system.
1870 Once the module is loaded, the content of
1871 /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
1872 with the currently selected one enclosed in [].
1873 Possible options are:
1874 fastest - (always) implementation selected using built-in benchmark
1875 original - (always) original raidz implementation
1876 scalar - (always) scalar raidz implementation
1877 sse2 - implementation using SSE2 instruction set (64bit x86 only)
1878 ssse3 - implementation using SSSE3 instruction set (64bit x86 only)
1879 avx2 - implementation using AVX2 instruction set (64bit x86 only)
1880 avx512f - implementation using AVX512F instruction set (64bit x86 only)
1881 avx512bw - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only)
1882 aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only)
1883 aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only)
1885 Default value: \fBfastest\fR.
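The sketch below parses that parameter file to report the available
implementations and the selected one (the bracket convention is described
above; the helper name is illustrative only):
.nf
# Sketch: list available raidz implementations and the selected one,
# based on the bracketed-selection format described above.
def raidz_impls(path="/sys/module/zfs/parameters/zfs_vdev_raidz_impl"):
    tokens = open(path).read().split()
    selected = next((t.strip("[]") for t in tokens if t.startswith("[")), None)
    return selected, [t.strip("[]") for t in tokens]

if __name__ == "__main__":
    selected, available = raidz_impls()
    print("selected:", selected)
    print("available:", " ".join(available))
.fi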
1891 \fBzfs_zevent_cols\fR (int)
1894 When zevents are logged to the console use this as the word wrap width.
1896 Default value: \fB80\fR.
1902 \fBzfs_zevent_console\fR (int)
1905 Log events to the console
1907 Use \fB1\fR for yes and \fB0\fR for no (default).
1913 \fBzfs_zevent_len_max\fR (int)
1916 Max event queue length. A value of 0 will result in a calculated value which
1917 increases with the number of CPUs in the system (minimum 64 events). Events
1918 in the queue can be viewed with the \fBzpool events\fR command.
1920 Default value: \fB0\fR.
1926 \fBzil_replay_disable\fR (int)
Disable intent logging replay. Replay can be disabled to recover from a
corrupted ZIL.
1932 Use \fB1\fR for yes and \fB0\fR for no (default).
1938 \fBzil_slog_bulk\fR (ulong)
1941 Limit SLOG write size per commit executed with synchronous priority.
1942 Any writes above that will be executed with lower (asynchronous) priority
1943 to limit potential SLOG device abuse by single active ZIL writer.
1945 Default value: \fB786,432\fR.
1951 \fBzio_delay_max\fR (int)
1954 A zevent will be logged if a ZIO operation takes more than N milliseconds to
complete. Note that this is only a logging facility, not a timeout on the
operations.
1958 Default value: \fB30,000\fR.
1964 \fBzio_dva_throttle_enabled\fR (int)
1967 Throttle block allocations in the ZIO pipeline. This allows for
1968 dynamic allocation distribution when devices are imbalanced.
1969 When enabled, the maximum number of pending allocations per top-level vdev
1970 is limited by \fBzfs_vdev_queue_depth_pct\fR.
1972 Default value: \fB1\fR.
1978 \fBzio_requeue_io_start_cut_in_line\fR (int)
1981 Prioritize requeued I/O
1983 Default value: \fB0\fR.
1989 \fBzio_taskq_batch_pct\fR (uint)
1992 Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
1993 for IO. These workers are responsible for IO work such as compression and
1994 checksum calculations. Fractional number of CPUs will be rounded down.
1996 The default value of 75 was chosen to avoid using all CPUs which can result in
1997 latency issues and inconsistent application performance, especially when high
1998 compression is enabled.
2000 Default value: \fB75\fR.
2006 \fBzvol_inhibit_dev\fR (uint)
2009 Do not create zvol device nodes. This may slightly improve startup time on
2010 systems with a very large number of zvols.
2012 Use \fB1\fR for yes and \fB0\fR for no (default).
2018 \fBzvol_major\fR (uint)
2021 Major number for zvol block devices
2023 Default value: \fB230\fR.
2029 \fBzvol_max_discard_blocks\fR (ulong)
2032 Discard (aka TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the \fBvolblocksize\fR property
of a zvol.
2036 Default value: \fB16,384\fR.
2042 \fBzvol_prefetch_bytes\fR (uint)
2045 When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
2046 from the start and end of the volume. Prefetching these regions
2047 of the volume is desirable because they are likely to be accessed
immediately by \fBblkid(8)\fR or by the kernel scanning for a partition table.
2051 Default value: \fB131,072\fR.
2057 \fBzvol_request_sync\fR (uint)
2060 When processing I/O requests for a zvol submit them synchronously. This
2061 effectively limits the queue depth to 1 for each I/O submitter. When set
2062 to 0 requests are handled asynchronously by a thread pool. The number of
requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
2065 Default value: \fB0\fR.
2071 \fBzvol_threads\fR (uint)
2074 Max number of threads which can handle zvol I/O requests concurrently.
2076 Default value: \fB32\fR.
2082 \fBzvol_volmode\fR (uint)
Defines zvol block device behaviour when \fBvolmode\fR is set to \fBdefault\fR.
2086 Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none).
2088 Default value: \fB1\fR.
2094 \fBzfs_qat_disable\fR (int)
2097 This tunable disables qat hardware acceleration for gzip compression.
It is available only if qat acceleration is compiled in and the qat driver is
present.
2101 Use \fB1\fR for yes and \fB0\fR for no (default).
2104 .SH ZFS I/O SCHEDULER
2105 ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
2106 The I/O scheduler determines when and in what order those operations are
2107 issued. The I/O scheduler divides operations into five I/O classes
2108 prioritized in the following order: sync read, sync write, async read,
2109 async write, and scrub/resilver. Each queue defines the minimum and
2110 maximum number of concurrent operations that may be issued to the
2111 device. In addition, the device has an aggregate maximum,
2112 \fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
2113 must not exceed the aggregate maximum. If the sum of the per-queue
2114 maximums exceeds the aggregate maximum, then the number of active I/Os
2115 may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
2116 be issued regardless of whether all per-queue minimums have been met.
2118 For many physical devices, throughput increases with the number of
2119 concurrent operations, but latency typically suffers. Further, physical
2120 devices typically have a limit at which more concurrent operations have no
2121 effect on throughput or can actually cause it to decrease.
2123 The scheduler selects the next operation to issue by first looking for an
2124 I/O class whose minimum has not been satisfied. Once all are satisfied and
2125 the aggregate maximum has not been hit, the scheduler looks for classes
2126 whose maximum has not been satisfied. Iteration through the I/O classes is
2127 done in the order specified above. No further operations are issued if the
2128 aggregate maximum number of concurrent operations has been hit or if there
2129 are no operations queued for an I/O class that has not hit its maximum.
2130 Every time an I/O is queued or an operation completes, the I/O scheduler
2131 looks for new operations to issue.
2133 In general, smaller max_active's will lead to lower latency of synchronous
2134 operations. Larger max_active's may lead to higher overall throughput,
2135 depending on underlying storage.
2137 The ratio of the queues' max_actives determines the balance of performance
2138 between reads, writes, and scrubs. E.g., increasing
2139 \fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
2140 more quickly, but reads and writes to have higher latency and lower throughput.
2142 All I/O classes have a fixed maximum number of outstanding operations
2143 except for the async write class. Asynchronous writes represent the data
2144 that is committed to stable storage during the syncing stage for
2145 transaction groups. Transaction groups enter the syncing state
2146 periodically so the number of queued async writes will quickly burst up
2147 and then bleed down to zero. Rather than servicing them as quickly as
2148 possible, the I/O scheduler changes the maximum number of active async
2149 write I/Os according to the amount of dirty data in the pool. Since
2150 both throughput and latency typically increase with the number of
2151 concurrent operations issued to physical devices, reducing the
2152 burstiness in the number of concurrent operations also stabilizes the
2153 response time of operations from other -- and in particular synchronous
2154 -- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty data in
the pool.
2160 The number of concurrent operations issued for the async write I/O class
2161 follows a piece-wise linear function defined by a few adjustable points.
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
2179 Until the amount of dirty data exceeds a minimum percentage of the dirty
2180 data allowed in the pool, the I/O scheduler will limit the number of
2181 concurrent operations to the minimum. As that threshold is crossed, the
2182 number of concurrent operations issued increases linearly to the maximum at
2183 the specified maximum percentage of the dirty data allowed in the pool.
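The following sketch restates the scaling described above as a small function
(it is an illustration of the shape of the curve, not the actual vdev queue
code), using the default minimum/maximum active counts and dirty percentages:
.nf
# Sketch: active async write limit as a piece-wise linear function of
# how much of zfs_dirty_data_max is dirty, per the description above.
def async_write_max_active(dirty_pct, min_active=2, max_active=10,
                           min_dirty_pct=30.0, max_dirty_pct=60.0):
    if dirty_pct <= min_dirty_pct:
        return min_active
    if dirty_pct >= max_dirty_pct:
        return max_active
    span = (dirty_pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct)
    return min_active + int(span * (max_active - min_active))

for pct in (10, 30, 45, 60, 90):
    print(pct, "% dirty ->", async_write_max_active(pct), "active async writes")
.fi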
2185 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2186 part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
2187 and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
2188 maximum percentage, this indicates that the rate of incoming data is
2189 greater than the rate that the backend storage can handle. In this case, we
2190 must further throttle incoming writes, as described in the next section.
2192 .SH ZFS TRANSACTION DELAY
2193 We delay transactions when we've determined that the backend storage
2194 isn't able to accommodate the rate of incoming writes.
2196 If there is already a transaction waiting, we delay relative to when
2197 that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing transactions.
2201 If we are the only waiter, wait relative to when the transaction
2202 started, rather than the current time. This credits the transaction for
2203 "time already served", e.g. reading indirect blocks.
2205 The minimum time for a transaction to take is calculated as:
2207 min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
2208 min_time is then capped at 100 milliseconds.
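A worked sketch of that formula (units assumed to be nanoseconds, matching the
default \fBzfs_delay_scale\fR of 500,000, with the start of the delay set by
\fBzfs_delay_min_dirty_percent\fR):
.nf
# Sketch of the delay formula above.  "dirty" and "dirty_max" are bytes;
# the result is in nanoseconds and is capped at 100 milliseconds.
def tx_delay_ns(dirty, dirty_max, delay_scale=500_000, min_dirty_pct=60):
    min_dirty = dirty_max * min_dirty_pct // 100
    if dirty <= min_dirty:
        return 0
    delay = delay_scale * (dirty - min_dirty) / (dirty_max - dirty)
    return min(delay, 100_000_000)

dirty_max = 4 << 30                  # example zfs_dirty_data_max of 4 GiB
for pct in (50, 70, 80, 90, 99):
    d = dirty_max * pct // 100
    print(f"{pct}% dirty -> {tx_delay_ns(d, dirty_max) / 1000:.1f} us delay")
# At 80% dirty (the midpoint between 60% and 100%) this gives the 500us
# midpoint delay mentioned below.
.fi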
2211 The delay has two degrees of freedom that can be adjusted via tunables. The
2212 percentage of dirty data at which we start to delay is defined by
2213 \fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
2214 \fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
2215 delay after writing at full speed has failed to keep up with the incoming write
2216 rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
2217 this variable determines the amount of delay at the midpoint of the curve.
The delay curve plots the delay, from zero up to 10ms on a linear scale,
against the percentage of \fBzfs_dirty_data_max\fR that is dirty; the delay
stays near zero over most of the range and climbs steeply as the limit is
approached, with \fBzfs_delay_scale\fR marking the midpoint of the curve.
2245 Note that since the delay is added to the outstanding time remaining on the
2246 most recent transaction, the delay is effectively the inverse of IOPS.
2247 Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
2248 was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the amount
of delay.
2252 The effects can be easier to understand when the amount of delay is
2253 represented on a log scale:
The delay is plotted from well under one millisecond up to the 100ms cap
against the percentage of \fBzfs_dirty_data_max\fR that is dirty, with
\fBzfs_delay_scale\fR again marking the midpoint of the curve.
2281 Note here that only as the amount of dirty data approaches its limit does
2282 the delay start to increase rapidly. The goal of a properly tuned system
2283 should be to keep the amount of dirty data out of that range by first
2284 ensuring that the appropriate limits are set for the I/O scheduler to reach
2285 optimal throughput on the backend storage, and then by changing the value
2286 of \fBzfs_delay_scale\fR to increase the steepness of the curve.