.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
Description of the different parameters to the ZFS module.
.SS "Module parameters"
\fBl2arc_feed_again\fR (int)
Turbo L2ARC warmup
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBl2arc_feed_min_ms\fR (ulong)
Min feed interval in milliseconds
Default value: \fB200\fR.
\fBl2arc_feed_secs\fR (ulong)
Seconds between L2ARC writing
Default value: \fB1\fR.
\fBl2arc_headroom\fR (ulong)
Number of max device writes to precache
Default value: \fB2\fR.
\fBl2arc_headroom_boost\fR (ulong)
Compressed l2arc_headroom multiplier
Default value: \fB200\fR.
\fBl2arc_nocompress\fR (int)
Skip compressing L2ARC buffers
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBl2arc_noprefetch\fR (int)
Skip caching prefetched buffers
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBl2arc_norw\fR (int)
No reads during writes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBl2arc_write_boost\fR (ulong)
Extra write bytes during device warmup
Default value: \fB8,388,608\fR.
\fBl2arc_write_max\fR (ulong)
Max write bytes per interval
Default value: \fB8,388,608\fR.
\fBmetaslab_bias_enabled\fR (int)
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslab_debug_load\fR (int)
Load all metaslabs during pool import.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBmetaslab_debug_unload\fR (int)
Prevent metaslabs from being unloaded.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBmetaslab_fragmentation_factor_enabled\fR (int)
Enable use of the fragmentation metric in computing metaslab weights.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslabs_per_vdev\fR (int)
When a vdev is added, it will be divided into approximately (but no more than)
this number of metaslabs.
Default value: \fB200\fR.
\fBmetaslab_preload_enabled\fR (int)
Enable metaslab group preloading.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslab_lba_weighting_enabled\fR (int)
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBspa_config_path\fR (charp)
SPA config file
Default value: \fB/etc/zfs/zpool.cache\fR.
\fBspa_asize_inflation\fR (int)
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
Default value: \fB24\fR.
\fBspa_load_verify_data\fR (int)
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
\fBspa_load_verify_metadata\fR (int)
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
\fBspa_load_verify_maxinflight\fR (int)
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
Default value: \fB10000\fR.
\fBzfetch_array_rd_sz\fR (ulong)
If prefetching is enabled, disable prefetching for reads larger than this size.
Default value: \fB1,048,576\fR.
\fBzfetch_block_cap\fR (uint)
Max number of blocks to prefetch at a time
Default value: \fB256\fR.
\fBzfetch_max_streams\fR (uint)
Max number of streams per zfetch (prefetch streams per file).
Default value: \fB8\fR.
\fBzfetch_min_sec_reap\fR (uint)
Min time before an active prefetch stream can be reclaimed
Default value: \fB2\fR.
\fBzfs_arc_average_blocksize\fR (int)
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.
Default value: \fB8192\fR.
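The sizing rule above reduces to simple arithmetic: one 8-byte pointer per
assumed average-sized block. A short sketch of that back-of-envelope estimate
(an illustration of the text, not the kernel's exact sizing code):

```python
def arc_hash_table_bytes(physmem_bytes, average_blocksize=8192, ptr_size=8):
    """Rough ARC hash table footprint: one pointer per average-sized block."""
    return physmem_bytes // average_blocksize * ptr_size

# 1 GiB of physical memory at the default 8K average block size comes to
# roughly 1 MiB of hash table, matching the text.
print(arc_hash_table_bytes(1 << 30))           # 1048576
print(arc_hash_table_bytes(1 << 30, 131072))   # larger blocks shrink the table
```

Raising the assumed block size shrinks the table proportionally, which is why
the tunable helps on pools known to hold mostly large blocks.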
\fBzfs_arc_grow_retry\fR (int)
Seconds before growing arc size
Default value: \fB5\fR.
\fBzfs_arc_max\fR (ulong)
Max arc size
Default value: \fB0\fR.
\fBzfs_arc_memory_throttle_disable\fR (int)
Disable memory throttle
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_meta_limit\fR (ulong)
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0 which indicates that 3/4 of the ARC may be used
for meta data.
Default value: \fB0\fR.
\fBzfs_arc_meta_prune\fR (int)
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
Default value: \fB10,000\fR.
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
Default value: \fB4096\fR.
\fBzfs_arc_min\fR (ulong)
Min arc size
Default value: \fB100\fR.
\fBzfs_arc_min_prefetch_lifespan\fR (int)
Min life of prefetch block
Default value: \fB100\fR.
\fBzfs_arc_p_aggressive_disable\fR (int)
Disable aggressive arc_p growth
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_p_dampener_disable\fR (int)
Disable arc_p adapt dampener
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_shrink_shift\fR (int)
log2(fraction of arc to reclaim)
Default value: \fB5\fR.
\fBzfs_autoimport_disable\fR (int)
Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_dbuf_state_index\fR (int)
Calculate arc header index
Default value: \fB0\fR.
\fBzfs_deadman_enabled\fR (int)
Enable deadman timer
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_deadman_synctime_ms\fR (ulong)
Expiration time in milliseconds. This value has two meanings. First it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Secondly, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
in a zevent being logged.
Default value: \fB1,000,000\fR.
\fBzfs_dedup_prefetch\fR (int)
Enable prefetching dedup-ed blks
Use \fB1\fR for yes and \fB0\fR to disable (default).
\fBzfs_delay_min_dirty_percent\fR (int)
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= \fBzfs_vdev_async_write_active_max_dirty_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: \fB60\fR.
\fBzfs_delay_scale\fR (int)
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
See the section "ZFS TRANSACTION DELAY".
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
Default value: \fB500,000\fR.
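The rule of thumb above (about one billion divided by the peak operations per
second) and the 2^64 overflow note can be sanity-checked with a short sketch
(the values are illustrative, not a tuning recommendation):

```python
def suggested_delay_scale(max_ops_per_sec):
    """Smoothest-delay rule of thumb from the text: ~1e9 / peak ops per second."""
    return 1_000_000_000 // max_ops_per_sec

scale = suggested_delay_scale(2000)  # a pool that peaks near 2000 ops/sec
print(scale)                         # 500000, the default

# The note above: zfs_delay_scale * zfs_dirty_data_max must stay below 2^64.
dirty_data_max = 4 << 30             # hypothetical 4 GiB dirty data limit
assert scale * dirty_data_max < 2 ** 64
```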
\fBzfs_dirty_data_max\fR (int)
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
\fBzfs_dirty_data_max_max\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
Default value: 25% of physical RAM.
\fBzfs_dirty_data_max_max_percent\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
\fBzfs_dirty_data_max_percent\fR (int)
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
\fBzfs_dirty_data_sync\fR (int)
Start syncing out a transaction group if there is at least this much dirty data.
Default value: \fB67,108,864\fR.
\fBzfs_free_max_blocks\fR (ulong)
Maximum number of blocks freed in a single txg.
Default value: \fB100,000\fR.
\fBzfs_vdev_async_read_max_active\fR (int)
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB3\fR.
\fBzfs_vdev_async_read_min_active\fR (int)
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB60\fR.
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB30\fR.
\fBzfs_vdev_async_write_max_active\fR (int)
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_async_write_min_active\fR (int)
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_max_active\fR (int)
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
Default value: \fB1,000\fR.
\fBzfs_vdev_scrub_max_active\fR (int)
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB2\fR.
\fBzfs_vdev_scrub_min_active\fR (int)
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_sync_read_max_active\fR (int)
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_read_min_active\fR (int)
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_write_max_active\fR (int)
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_write_min_active\fR (int)
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_disable_dup_eviction\fR (int)
Disable duplicate buffer eviction
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_expire_snapshot\fR (int)
Seconds to expire .zfs/snapshot
Default value: \fB300\fR.
\fBzfs_flags\fR (int)
Set additional debugging flags. The following flags may be bitwise-or'd
together.
1 ZFS_DEBUG_DPRINTF
Enable dprintf entries in the debug log.
2 ZFS_DEBUG_DBUF_VERIFY *
Enable extra dbuf verifications.
4 ZFS_DEBUG_DNODE_VERIFY *
Enable extra dnode verifications.
8 ZFS_DEBUG_SNAPNAMES
Enable snapshot name verification.
16 ZFS_DEBUG_MODIFY
Check for illegally modified ARC buffers.
32 ZFS_DEBUG_SPA
Enable spa_dbgmsg entries in the debug log.
64 ZFS_DEBUG_ZIO_FREE
Enable verification of block frees.
128 ZFS_DEBUG_HISTOGRAM_VERIFY
Enable extra spacemap histogram verifications.
* Requires debug build.
Default value: \fB0\fR.
\fBzfs_free_leak_on_eio\fR (int)
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata cannot be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that cannot be read,
and continue to free everything else that it can.
The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/Os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually
fairly rare.
Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
Default value: \fB0\fR.
\fBzfs_free_min_time_ms\fR (int)
Min millisecs to free per txg
Default value: \fB1,000\fR.
\fBzfs_immediate_write_sz\fR (long)
Largest data block to write to zil
Default value: \fB32,768\fR.
\fBzfs_mdcomp_disable\fR (int)
Disable meta data compression
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_metaslab_fragmentation_threshold\fR (int)
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
Default value: \fB70\fR.
\fBzfs_mg_fragmentation_threshold\fR (int)
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
Default value: \fB85\fR.
\fBzfs_mg_noalloc_threshold\fR (int)
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.
This parameter makes it possible to deal with pools having heavily imbalanced
vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
Default value: \fB0\fR.
\fBzfs_no_scrub_io\fR (int)
Set for no scrub I/O
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_no_scrub_prefetch\fR (int)
Set for no scrub prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_nocacheflush\fR (int)
Disable cache flushes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_pd_bytes_max\fR (int)
The number of bytes which should be prefetched.
Default value: \fB52,428,800\fR.
\fBzfs_prefetch_disable\fR (int)
Disable all ZFS prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_read_chunk_size\fR (long)
Bytes to read per chunk
Default value: \fB1,048,576\fR.
\fBzfs_read_history\fR (int)
Historic statistics for the last N reads
Default value: \fB0\fR.
\fBzfs_read_history_hits\fR (int)
Include cache hits in read history
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_recover\fR (int)
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_resilver_delay\fR (int)
Number of ticks to delay prior to issuing a resilver I/O operation when
a non-resilver or non-scrub I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB2\fR.
\fBzfs_resilver_min_time_ms\fR (int)
Min millisecs to resilver per txg
Default value: \fB3,000\fR.
\fBzfs_scan_idle\fR (int)
Idle window in clock ticks. During a scrub or a resilver, if
a non-scrub or non-resilver I/O operation has occurred during this
window, the next scrub or resilver operation is delayed by
\fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks, respectively.
Default value: \fB50\fR.
\fBzfs_scan_min_time_ms\fR (int)
Min millisecs to scrub per txg
Default value: \fB1,000\fR.
\fBzfs_scrub_delay\fR (int)
Number of ticks to delay prior to issuing a scrub I/O operation when
a non-scrub or non-resilver I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB4\fR.
\fBzfs_send_corrupt_data\fR (int)
Allow sending of corrupt data (ignore read/checksum errors when sending data)
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_sync_pass_deferred_free\fR (int)
Defer frees starting in this pass
Default value: \fB2\fR.
\fBzfs_sync_pass_dont_compress\fR (int)
Don't compress starting in this pass
Default value: \fB5\fR.
\fBzfs_sync_pass_rewrite\fR (int)
Rewrite new bps starting in this pass
Default value: \fB2\fR.
\fBzfs_top_maxinflight\fR (int)
Max I/Os per top-level vdev during scrub or resilver operations.
Default value: \fB32\fR.
\fBzfs_txg_history\fR (int)
Historic statistics for the last N txgs
Default value: \fB0\fR.
\fBzfs_txg_timeout\fR (int)
Max seconds worth of delta per txg
Default value: \fB5\fR.
\fBzfs_vdev_aggregation_limit\fR (int)
Max vdev I/O aggregation size
Default value: \fB131,072\fR.
\fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to
Default value: \fB16\fR.
\fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than max
Default value: \fB16,384\fR.
\fBzfs_vdev_cache_size\fR (int)
Total size of the per-disk cache
Default value: \fB0\fR.
\fBzfs_vdev_mirror_switch_us\fR (int)
Switch mirrors every N usecs
Default value: \fB10,000\fR.
\fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O over gap
Default value: \fB32,768\fR.
\fBzfs_vdev_scheduler\fR (charp)
I/O scheduler
Default value: \fBnoop\fR.
\fBzfs_vdev_write_gap_limit\fR (int)
Aggregate write I/O over gap
Default value: \fB4,096\fR.
\fBzfs_zevent_cols\fR (int)
Max event column width
Default value: \fB80\fR.
\fBzfs_zevent_console\fR (int)
Log events to the console
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_zevent_len_max\fR (int)
Max event queue length
Default value: \fB0\fR.
\fBzil_replay_disable\fR (int)
Disable intent logging replay
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzil_slog_limit\fR (ulong)
Max commit bytes to separate log device
Default value: \fB1,048,576\fR.
\fBzio_delay_max\fR (int)
Max zio millisec delay before posting event
Default value: \fB30,000\fR.
\fBzio_requeue_io_start_cut_in_line\fR (int)
Prioritize requeued I/O
Default value: \fB0\fR.
\fBzvol_inhibit_dev\fR (uint)
Do not create zvol device nodes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzvol_major\fR (uint)
Major number for zvol device
Default value: \fB230\fR.
\fBzvol_max_discard_blocks\fR (ulong)
Max number of blocks to discard at once
Default value: \fB16,384\fR.
\fBzvol_threads\fR (uint)
Number of threads for zvol device
Default value: \fB32\fR.
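On Linux, the parameters above are exposed under /sys/module/zfs/parameters
and can be read or, where writable, changed at runtime. A minimal sketch of
doing so; the helper names are illustrative, writing requires root, and some
parameters only take effect at module load time:

```python
from pathlib import Path

# Standard sysfs location for ZFS module parameters on Linux.
PARAM_DIR = Path("/sys/module/zfs/parameters")

def read_param(name, base=PARAM_DIR):
    """Return the current value of a module parameter as a string."""
    return (Path(base) / name).read_text().strip()

def write_param(name, value, base=PARAM_DIR):
    """Set a writable module parameter; sysfs takes plain-text values."""
    (Path(base) / name).write_text(str(value))

# Example, on a system with the zfs module loaded:
#   read_param("zfs_txg_timeout")
```

Parameters that must be set before the module loads are instead passed via
modprobe options (e.g. in /etc/modprobe.d).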
.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
In general, smaller max_active's will lead to lower latency of synchronous
operations. Larger max_active's may lead to higher overall throughput,
depending on underlying storage.
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes to have higher latency and lower throughput.
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
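The interpolation described above can be sketched as a small function. This is
an illustration of the scaling rule using the defaults from the parameter list
above, not the kernel code:

```python
def async_write_max_active(dirty_pct,
                           min_active=1, max_active=10,
                           min_dirty_pct=30, max_dirty_pct=60):
    """Piece-wise linear limit on active async write I/Os vs. dirty data."""
    if dirty_pct <= min_dirty_pct:
        return min_active            # below the floor: issue the minimum
    if dirty_pct >= max_dirty_pct:
        return max_active            # above the ceiling: issue the maximum
    span = max_dirty_pct - min_dirty_pct
    return min_active + (dirty_pct - min_dirty_pct) * (max_active - min_active) // span

print(async_write_max_active(10))   # 1  (at or below the minimum percent)
print(async_write_max_active(45))   # 5  (on the slope)
print(async_write_max_active(90))   # 10 (at or above the maximum percent)
```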
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
The minimum time for a transaction to take is calculated as:
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds.
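A worked sketch of the formula above, assuming the delay is expressed in
nanoseconds (consistent with the default \fBzfs_delay_scale\fR of 500,000);
the dirty-data values are illustrative:

```python
def tx_min_time_ns(dirty, delay_min, dirty_max, zfs_delay_scale=500_000):
    """min_time = zfs_delay_scale * (dirty - min) / (max - dirty), capped at 100ms."""
    if dirty <= delay_min:
        return 0                                  # below the delay threshold
    min_time = zfs_delay_scale * (dirty - delay_min) // (dirty_max - dirty)
    return min(min_time, 100_000_000)             # cap at 100 milliseconds

gib = 1 << 30
# At the midpoint between the delay threshold and the dirty limit,
# the delay equals zfs_delay_scale: here 500000 ns, i.e. 500us.
print(tx_min_time_ns(3 * gib, 2 * gib, 4 * gib))      # 500000
# As dirty data approaches the limit, the delay hits the 100ms cap.
print(tx_min_time_ns(4 * gib - 1, 2 * gib, 4 * gib))  # 100000000
```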
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  ...                                                              ...
  2ms + (midpoint)                                                *   +
      |      zfs_delay_scale ---------->                  ********    |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->               100%
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
The effects can be easier to understand when the amount of delay is
represented on a log scale:
100ms +-------------------------------------------------------------++
  ...                                                               ...
      +      zfs_delay_scale ---------->                  *****       +
      +--------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->               100%
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.