2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
15 .TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
17 zfs\-module\-parameters \- ZFS module parameters
21 Description of the different parameters to the ZFS module.
23 .SS "Module parameters"
\fBl2arc_feed_again\fR (int)
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBl2arc_feed_min_ms\fR (ulong)
Min feed interval in milliseconds
Default value: \fB200\fR.
\fBl2arc_feed_secs\fR (ulong)
Seconds between L2ARC writing
Default value: \fB1\fR.
\fBl2arc_headroom\fR (ulong)
Number of max device writes to precache
Default value: \fB2\fR.
\fBl2arc_headroom_boost\fR (ulong)
Compressed l2arc_headroom multiplier
Default value: \fB200\fR.
\fBl2arc_nocompress\fR (int)
Skip compressing L2ARC buffers
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBl2arc_noprefetch\fR (int)
Skip caching prefetched buffers
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBl2arc_norw\fR (int)
No reads during writes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBl2arc_write_boost\fR (ulong)
Extra write bytes during device warmup
Default value: \fB8,388,608\fR.
\fBl2arc_write_max\fR (ulong)
Max write bytes per interval
Default value: \fB8,388,608\fR.
\fBmetaslab_bias_enabled\fR (int)
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslab_debug_load\fR (int)
Load all metaslabs during pool import.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBmetaslab_debug_unload\fR (int)
Prevent metaslabs from being unloaded.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBmetaslab_fragmentation_factor_enabled\fR (int)
Enable use of the fragmentation metric in computing metaslab weights.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslab_preload_enabled\fR (int)
Enable metaslab group preloading.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBmetaslab_lba_weighting_enabled\fR (int)
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
Use \fB1\fR for yes (default) and \fB0\fR for no.
\fBspa_config_path\fR (charp)
SPA config file (\fB/etc/zfs/zpool.cache\fR)
Default value: \fB/etc/zfs/zpool.cache\fR.
\fBspa_asize_inflation\fR (int)
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
\fBspa_load_verify_data\fR (int)
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
\fBspa_load_verify_metadata\fR (int)
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
\fBspa_load_verify_maxinflight\fR (int)
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
\fBzfetch_array_rd_sz\fR (ulong)
If prefetching is enabled, disable prefetching for reads larger than this size.
Default value: \fB1,048,576\fR.
\fBzfetch_block_cap\fR (uint)
Max number of blocks to prefetch at a time
Default value: \fB256\fR.
\fBzfetch_max_streams\fR (uint)
Max number of streams per zfetch (prefetch streams per file).
Default value: \fB8\fR.
\fBzfetch_min_sec_reap\fR (uint)
Min time before an active prefetch stream can be reclaimed
Default value: \fB2\fR.
\fBzfs_arc_average_blocksize\fR (int)
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.
Default value: \fB8192\fR.
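.sp
As a rough illustration of the sizing rule quoted above, the following sketch
(an approximation for illustration, not the exact kernel computation) divides
physical memory by the assumed average block size and charges one 8-byte
pointer per entry:
.sp
.nf
# Rough illustration only; not the exact kernel computation.
def hash_table_bytes(physmem_bytes, average_blocksize=8192, ptr_size=8):
    # One hash table pointer is assumed per average-sized block of memory.
    return physmem_bytes // average_blocksize * ptr_size

GiB = 1024 ** 3
# 1 GiB of RAM at the default 8K average block size: about 1 MiB of table.
print(hash_table_bytes(1 * GiB))           # 1048576
# A larger assumed average block size shrinks the table proportionally.
print(hash_table_bytes(1 * GiB, 131072))   # 65536
.fi
.sp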
\fBzfs_arc_grow_retry\fR (int)
Seconds before growing arc size
Default value: \fB5\fR.
\fBzfs_arc_max\fR (ulong)
Max arc size
Default value: \fB0\fR.
\fBzfs_arc_memory_throttle_disable\fR (int)
Disable memory throttle
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_meta_limit\fR (ulong)
Meta limit for arc size
Default value: \fB0\fR.
\fBzfs_arc_meta_prune\fR (int)
Bytes of meta data to prune
Default value: \fB1,048,576\fR.
\fBzfs_arc_min\fR (ulong)
Min arc size
Default value: \fB100\fR.
\fBzfs_arc_min_prefetch_lifespan\fR (int)
Min life of prefetch block
Default value: \fB100\fR.
\fBzfs_arc_p_aggressive_disable\fR (int)
Disable aggressive arc_p growth
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_p_dampener_disable\fR (int)
Disable arc_p adapt dampener
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_arc_shrink_shift\fR (int)
log2(fraction of arc to reclaim)
Default value: \fB5\fR.
\fBzfs_autoimport_disable\fR (int)
Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_dbuf_state_index\fR (int)
Calculate arc header index
Default value: \fB0\fR.
\fBzfs_deadman_enabled\fR (int)
Enable the deadman timer
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_deadman_synctime_ms\fR (ulong)
Expiration time in milliseconds. This value has two meanings. First it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Secondly, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
in a zevent being logged.
Default value: \fB1,000,000\fR.
\fBzfs_dedup_prefetch\fR (int)
Enable prefetching dedup-ed blocks
Use \fB1\fR for yes and \fB0\fR to disable (default).
\fBzfs_delay_min_dirty_percent\fR (int)
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= \fBzfs_vdev_async_write_active_max_dirty_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: \fB60\fR.
\fBzfs_delay_scale\fR (int)
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
Default value: \fB500,000\fR.
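.sp
As a hedged worked example of the sizing guidance above (the operations-per-second
figures below are assumptions for illustration, not measured values), the suggested
scale is simply one billion divided by the sustained operation rate:
.sp
.nf
# Worked example of the sizing guidance for zfs_delay_scale: roughly
# 1,000,000,000 divided by the maximum operations per second the pool's
# storage can sustain.
def suggested_delay_scale(max_ops_per_second):
    return 1_000_000_000 // max_ops_per_second

print(suggested_delay_scale(2_000))    # 500000, matching the default
print(suggested_delay_scale(20_000))   # 50000, for much faster storage
.fi
.sp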
\fBzfs_dirty_data_max\fR (int)
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
\fBzfs_dirty_data_max_max\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
Default value: 25% of physical RAM.
\fBzfs_dirty_data_max_max_percent\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
\fBzfs_dirty_data_max_percent\fR (int)
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
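.sp
To make the interaction of these limits concrete, the sketch below computes the
resulting byte values for a hypothetical machine; the 16 GiB memory size is only
an assumption, and the percentages are the documented defaults:
.sp
.nf
# Hypothetical example: dirty data limits on a machine with 16 GiB of RAM,
# using the documented defaults of 10% (zfs_dirty_data_max_percent) and
# 25% (zfs_dirty_data_max_max_percent).
GiB = 1024 ** 3
physmem = 16 * GiB                        # assumption, not a ZFS default

dirty_data_max_max = physmem * 25 // 100  # upper bound: 4 GiB
dirty_data_max = min(physmem * 10 // 100, # 10% of memory ...
                     dirty_data_max_max)  # ... capped at the bound above

print(dirty_data_max_max)                 # 4294967296
print(dirty_data_max)                     # 1717986918 (about 1.6 GiB)
.fi
.sp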
\fBzfs_dirty_data_sync\fR (int)
Start syncing out a transaction group if there is at least this much dirty data.
Default value: \fB67,108,864\fR.
\fBzfs_vdev_async_read_max_active\fR (int)
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB3\fR.
\fBzfs_vdev_async_read_min_active\fR (int)
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB60\fR.
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB30\fR.
\fBzfs_vdev_async_write_max_active\fR (int)
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_async_write_min_active\fR (int)
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_max_active\fR (int)
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
Default value: \fB1,000\fR.
\fBzfs_vdev_scrub_max_active\fR (int)
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB2\fR.
\fBzfs_vdev_scrub_min_active\fR (int)
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
\fBzfs_vdev_sync_read_max_active\fR (int)
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_read_min_active\fR (int)
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_write_max_active\fR (int)
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_vdev_sync_write_min_active\fR (int)
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
\fBzfs_disable_dup_eviction\fR (int)
Disable duplicate buffer eviction
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_expire_snapshot\fR (int)
Seconds to expire .zfs/snapshot
Default value: \fB300\fR.
\fBzfs_flags\fR (int)
Set additional debugging flags
Default value: \fB1\fR.
\fBzfs_free_leak_on_eio\fR (int)
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.sp
The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all i/os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually fairly
rare.
.sp
Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
Default value: \fB0\fR.
\fBzfs_free_min_time_ms\fR (int)
Min millisecs to free per txg
Default value: \fB1,000\fR.
\fBzfs_immediate_write_sz\fR (long)
Largest data block to write to zil
Default value: \fB32,768\fR.
\fBzfs_mdcomp_disable\fR (int)
Disable meta data compression
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_metaslab_fragmentation_threshold\fR (int)
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
Default value: \fB70\fR.
\fBzfs_mg_fragmentation_threshold\fR (int)
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
Default value: \fB85\fR.
\fBzfs_mg_noalloc_threshold\fR (int)
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.
.sp
This parameter allows one to deal with pools having heavily imbalanced
vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
Default value: \fB0\fR.
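.sp
The eligibility rule described above can be sketched as follows; this is a
simplification for illustration, not the allocator code, and the pool layout
and free-space percentages used are hypothetical:
.sp
.nf
# Simplified sketch: a metaslab group accepts allocations if its free-space
# percentage is above zfs_mg_noalloc_threshold, unless every group in the
# pool has fallen to or below the threshold, in which case all are allowed.
def eligible_groups(free_pct_by_group, noalloc_threshold=0):
    above = {g: pct for g, pct in free_pct_by_group.items()
             if pct > noalloc_threshold}
    return above if above else dict(free_pct_by_group)

pool = {"mirror-0": 5, "mirror-1": 40}   # hypothetical free-space percentages
print(eligible_groups(pool, noalloc_threshold=10))  # only mirror-1
print(eligible_groups(pool, noalloc_threshold=0))   # both; feature disabled
.fi
.sp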
\fBzfs_no_scrub_io\fR (int)
Set for no scrub I/O
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_no_scrub_prefetch\fR (int)
Set for no scrub prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_nocacheflush\fR (int)
Disable cache flushes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes
Use \fB1\fR for yes (default) and \fB0\fR to disable.
\fBzfs_pd_blks_max\fR (int)
Max number of blocks to prefetch
Default value: \fB100\fR.
\fBzfs_prefetch_disable\fR (int)
Disable all ZFS prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_read_chunk_size\fR (long)
Bytes to read per chunk
Default value: \fB1,048,576\fR.
\fBzfs_read_history\fR (int)
Historic statistics for the last N reads
Default value: \fB0\fR.
\fBzfs_read_history_hits\fR (int)
Include cache hits in read history
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_recover\fR (int)
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_resilver_delay\fR (int)
Number of ticks to delay prior to issuing a resilver I/O operation when
a non-resilver or non-scrub I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB2\fR.
\fBzfs_resilver_min_time_ms\fR (int)
Min millisecs to resilver per txg
Default value: \fB3,000\fR.
\fBzfs_scan_idle\fR (int)
Idle window in clock ticks. During a scrub or a resilver, if
a non-scrub or non-resilver I/O operation has occurred during this
window, the next scrub or resilver operation is delayed by
\fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks, respectively.
Default value: \fB50\fR.
\fBzfs_scan_min_time_ms\fR (int)
Min millisecs to scrub per txg
Default value: \fB1,000\fR.
\fBzfs_scrub_delay\fR (int)
Number of ticks to delay prior to issuing a scrub I/O operation when
a non-scrub or non-resilver I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB4\fR.
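.sp
The interaction of \fBzfs_scan_idle\fR with \fBzfs_scrub_delay\fR and
\fBzfs_resilver_delay\fR can be summarized by the sketch below. It is a
simplification of the behaviour described in the entries above, not the
in-kernel logic, and the tick values used in the examples are arbitrary:
.sp
.nf
# If ordinary (non-scrub, non-resilver) I/O happened within the last
# zfs_scan_idle ticks, the next scrub or resilver I/O is delayed.
def scan_io_delay(now_ticks, last_user_io_ticks, resilvering,
                  zfs_scan_idle=50, zfs_scrub_delay=4, zfs_resilver_delay=2):
    if now_ticks - last_user_io_ticks < zfs_scan_idle:
        return zfs_resilver_delay if resilvering else zfs_scrub_delay
    return 0  # the pool looks idle, issue the scan I/O immediately

print(scan_io_delay(1000, 990, resilvering=False))  # 4 (scrub delayed)
print(scan_io_delay(1000, 990, resilvering=True))   # 2 (resilver delayed)
print(scan_io_delay(1000, 900, resilvering=False))  # 0 (idle window passed)
.fi
.sp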
\fBzfs_send_corrupt_data\fR (int)
Allow sending of corrupt data (ignore read/checksum errors when sending data)
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_sync_pass_deferred_free\fR (int)
Defer frees starting in this pass
Default value: \fB2\fR.
\fBzfs_sync_pass_dont_compress\fR (int)
Don't compress starting in this pass
Default value: \fB5\fR.
\fBzfs_sync_pass_rewrite\fR (int)
Rewrite new bps starting in this pass
Default value: \fB2\fR.
\fBzfs_top_maxinflight\fR (int)
Max I/Os per top-level vdev during scrub or resilver operations.
Default value: \fB32\fR.
\fBzfs_txg_history\fR (int)
Historic statistics for the last N txgs
Default value: \fB0\fR.
\fBzfs_txg_timeout\fR (int)
Max seconds worth of delta per txg
Default value: \fB5\fR.
\fBzfs_vdev_aggregation_limit\fR (int)
Max vdev I/O aggregation size
Default value: \fB131,072\fR.
\fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to
Default value: \fB16\fR.
\fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than this value
\fBzfs_vdev_cache_size\fR (int)
Total size of the per-disk cache
Default value: \fB0\fR.
\fBzfs_vdev_mirror_switch_us\fR (int)
Switch mirrors every N usecs
Default value: \fB10,000\fR.
\fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O over gap
Default value: \fB32,768\fR.
\fBzfs_vdev_scheduler\fR (charp)
I/O scheduler
Default value: \fBnoop\fR.
\fBzfs_vdev_write_gap_limit\fR (int)
Aggregate write I/O over gap
Default value: \fB4,096\fR.
\fBzfs_zevent_cols\fR (int)
Max event column width
Default value: \fB80\fR.
\fBzfs_zevent_console\fR (int)
Log events to the console
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzfs_zevent_len_max\fR (int)
Max event queue length
Default value: \fB0\fR.
\fBzil_replay_disable\fR (int)
Disable intent logging replay
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzil_slog_limit\fR (ulong)
Max commit bytes to separate log device
Default value: \fB1,048,576\fR.
\fBzio_bulk_flags\fR (int)
Additional flags to pass to bulk buffers
Default value: \fB0\fR.
\fBzio_delay_max\fR (int)
Max zio delay in milliseconds before posting an event
Default value: \fB30,000\fR.
\fBzio_injection_enabled\fR (int)
Enable fault injection
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzio_requeue_io_start_cut_in_line\fR (int)
Prioritize requeued I/O
Default value: \fB0\fR.
\fBzvol_inhibit_dev\fR (uint)
Do not create zvol device nodes
Use \fB1\fR for yes and \fB0\fR for no (default).
\fBzvol_major\fR (uint)
Major number for zvol device
Default value: \fB230\fR.
\fBzvol_max_discard_blocks\fR (ulong)
Max number of blocks to discard at once
Default value: \fB16,384\fR.
\fBzvol_threads\fR (uint)
Number of threads for zvol device
Default value: \fB32\fR.
.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
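.sp
The selection policy in the preceding paragraph can be sketched as follows.
This is a simplification for illustration; the class order and the counters
come from the description above, not from the implementation:
.sp
.nf
# Classes are examined in priority order; the first pass fills per-class
# minimums, the second pass fills per-class maximums, subject to the
# aggregate zfs_vdev_max_active limit.
CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

def next_class(active, queued, min_active, max_active, total_max):
    if sum(active.values()) >= total_max:
        return None                      # aggregate maximum reached
    for c in CLASSES:                    # first: satisfy per-class minimums
        if queued[c] and active[c] < min_active[c]:
            return c
    for c in CLASSES:                    # then: fill up to per-class maximums
        if queued[c] and active[c] < max_active[c]:
            return c
    return None                          # nothing eligible to issue

# Per-class limits below are the documented defaults from this page.
mins = dict(sync_read=10, sync_write=10, async_read=1, async_write=1, scrub=1)
maxs = dict(sync_read=10, sync_write=10, async_read=3, async_write=10, scrub=2)
print(next_class({c: 0 for c in CLASSES}, {c: 1 for c in CLASSES},
                 mins, maxs, total_max=1000))        # sync_read
.fi
.sp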
In general, smaller max_active's will lead to lower latency of synchronous
operations. Larger max_active's may lead to higher overall throughput,
depending on underlying storage.
.sp
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes to have higher latency and lower throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.sp
.nf
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
.fi
.sp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
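.sp
The function just described can be sketched as follows, using the documented
defaults; it is an illustration, not the kernel implementation:
.sp
.nf
# Active async write I/Os grow linearly from min_active to max_active as
# dirty data moves between the two dirty-percent breakpoints (defaults shown).
def async_write_active_limit(dirty_pct, min_active=1, max_active=10,
                             min_dirty_pct=30, max_dirty_pct=60):
    if dirty_pct <= min_dirty_pct:
        return min_active
    if dirty_pct >= max_dirty_pct:
        return max_active
    span = max_dirty_pct - min_dirty_pct
    slope = (max_active - min_active) / span
    return min_active + slope * (dirty_pct - min_dirty_pct)

for pct in (10, 30, 45, 60, 90):
    print(pct, async_write_active_limit(pct))   # 1, 1, 5.5, 10, 10
.fi
.sp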
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.sp
.nf
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds.
.fi
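.sp
As a worked illustration of the formula above (using the documented defaults;
the result is expressed in nanoseconds, consistent with the 500us midpoint
noted below, and the dirty-data fractions chosen are arbitrary):
.sp
.nf
# Delay starts at zfs_delay_min_dirty_percent (60%) of zfs_dirty_data_max,
# and zfs_delay_scale defaults to 500,000.
def tx_delay(dirty, dirty_data_max,
             delay_min_dirty_percent=60, delay_scale=500_000):
    min_dirty = dirty_data_max * delay_min_dirty_percent // 100
    if dirty <= min_dirty:
        return 0
    delay = delay_scale * (dirty - min_dirty) / (dirty_data_max - dirty)
    return min(delay, 100_000_000)   # capped at 100 milliseconds

GiB = 1024 ** 3
for frac in (0.5, 0.8, 0.95, 0.99):
    # 0.8 is halfway between 60% and 100%, so the delay there is
    # approximately zfs_delay_scale itself (the curve's midpoint).
    print(frac, tx_delay(int(frac * GiB), GiB))
.fi
.sp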
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
.sp
.nf
   [graph: transaction delay (0 to 10ms, linear scale) versus dirty data
    (0% to 100% of zfs_dirty_data_max); the delay stays near zero over most
    of the range and rises sharply near the limit, with zfs_delay_scale
    setting the delay at the midpoint of the curve]
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
   [graph: the same delay curve plotted with delay on a logarithmic scale
    (up to 100ms) versus 0% to 100% of zfs_dirty_data_max; zfs_delay_scale
    again marks the midpoint, and the delay grows rapidly only as the amount
    of dirty data approaches its limit]
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.