2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
Description of the different parameters to the ZFS module.
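.sp
Most of these parameters can be inspected and, where writable, changed at
runtime. The following is a minimal sketch, assuming the standard ZFS on
Linux sysfs layout (not every parameter may be writable after module load):
.sp
.nf
# Read the current value of a parameter.
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Change a writable parameter until the next module reload.
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout

# Persist a setting as a module option, e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_txg_timeout=10
.fi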
.SS "Module parameters"
.sp
\fBl2arc_feed_again\fR (int)
Turbo L2ARC warmup
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_feed_min_ms\fR (ulong)
Min feed interval in milliseconds
Default value: \fB200\fR.
.sp
\fBl2arc_feed_secs\fR (ulong)
Seconds between L2ARC writing
Default value: \fB1\fR.
.sp
\fBl2arc_headroom\fR (ulong)
Number of max device writes to precache
Default value: \fB2\fR.
.sp
\fBl2arc_headroom_boost\fR (ulong)
Compressed l2arc_headroom multiplier
Default value: \fB200\fR.
.sp
\fBl2arc_max_block_size\fR (ulong)
The maximum block size which may be written to an L2ARC device, after
compression and other factors. This setting is used to prevent a small
number of large blocks from pushing a larger number of small blocks out
of the cache.
Default value: \fB16,777,216\fR.
.sp
\fBl2arc_nocompress\fR (int)
Skip compressing L2ARC buffers
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_noprefetch\fR (int)
Skip caching prefetched buffers
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_norw\fR (int)
No reads during writes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_write_boost\fR (ulong)
Extra write bytes during device warmup
Default value: \fB8,388,608\fR.
.sp
\fBl2arc_write_max\fR (ulong)
Max write bytes per interval
Default value: \fB8,388,608\fR.
.sp
\fBmetaslab_aliquot\fR (ulong)
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
Default value: \fB524,288\fR.
.sp
\fBmetaslab_bias_enabled\fR (int)
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslab_debug_load\fR (int)
Load all metaslabs during pool import.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBmetaslab_debug_unload\fR (int)
Prevent metaslabs from being unloaded.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBmetaslab_fragmentation_factor_enabled\fR (int)
Enable use of the fragmentation metric in computing metaslab weights.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslabs_per_vdev\fR (int)
When a vdev is added, it will be divided into approximately (but no more
than) this number of metaslabs.
Default value: \fB200\fR.
.sp
\fBmetaslab_preload_enabled\fR (int)
Enable metaslab group preloading.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBmetaslab_lba_weighting_enabled\fR (int)
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBspa_config_path\fR (charp)
SPA config file
Default value: \fB/etc/zfs/zpool.cache\fR.
.sp
\fBspa_asize_inflation\fR (int)
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
Default value: \fB24\fR.
.sp
\fBspa_load_verify_data\fR (int)
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
Default value: \fB1\fR.
.sp
\fBspa_load_verify_metadata\fR (int)
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
Default value: \fB1\fR.
.sp
\fBspa_load_verify_maxinflight\fR (int)
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
Default value: \fB10,000\fR.
.sp
\fBspa_slop_shift\fR (int)
Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
Default value: \fB5\fR.
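.sp
As a worked example (the pool size is hypothetical), the slop space
reserved on a 10 TiB pool at the default \fBspa_slop_shift\fR of 5:
.sp
.nf
# slop = pool_size / 2^spa_slop_shift
echo $(( 10 * 1024*1024*1024*1024 / 32 ))   # 343597383680 bytes (~320 GiB)
.fi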
.sp
\fBzfetch_array_rd_sz\fR (ulong)
If prefetching is enabled, disable prefetching for reads larger than this size.
Default value: \fB1,048,576\fR.
.sp
\fBzfetch_max_distance\fR (uint)
Max bytes to prefetch per stream (default 8MB).
Default value: \fB8,388,608\fR.
.sp
\fBzfetch_max_streams\fR (uint)
Max number of streams per zfetch (prefetch streams per file).
Default value: \fB8\fR.
.sp
\fBzfetch_min_sec_reap\fR (uint)
Min time before an active prefetch stream can be reclaimed
Default value: \fB2\fR.
.sp
\fBzfs_arc_average_blocksize\fR (int)
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.
Default value: \fB8192\fR.
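.sp
For instance, the rough footprint for 1GB of physical memory at the
default 8K average block size (a back-of-the-envelope sketch, not the
exact in-kernel computation):
.sp
.nf
# entries = memory / average_blocksize, 8 bytes per pointer
echo $(( 1024*1024*1024 / 8192 * 8 ))   # 1048576 bytes (~1MB per 1GB)
.fi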
.sp
\fBzfs_arc_evict_batch_limit\fR (int)
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
Default value: \fB10\fR.
.sp
\fBzfs_arc_grow_retry\fR (int)
Seconds before growing arc size
Default value: \fB5\fR.
.sp
\fBzfs_arc_lotsfree_percent\fR (int)
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
Default value: \fB10\fR.
.sp
\fBzfs_arc_max\fR (ulong)
Max arc size in bytes. A value of \fB0\fR means the limit is determined
automatically.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_limit\fR (ulong)
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0 which indicates that 3/4 of the ARC may be used
for meta data.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_min\fR (ulong)
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_prune\fR (int)
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
Default value: \fB10,000\fR.
.sp
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
Default value: \fB4096\fR.
.sp
\fBzfs_arc_min\fR (ulong)
Min arc size
Default value: \fB100\fR.
.sp
\fBzfs_arc_min_prefetch_lifespan\fR (int)
Min life of prefetch block
Default value: \fB100\fR.
.sp
\fBzfs_arc_num_sublists_per_state\fR (int)
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state.
Default value: 1 or the number of online CPUs, whichever is greater.
.sp
\fBzfs_arc_overflow_shift\fR (int)
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".
The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (about 0.4%) of the target size.
When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
Default value: \fB8\fR.
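.sp
For example, with a hypothetical 4 GiB ARC target size the overflow
threshold works out to:
.sp
.nf
# threshold = arc_c >> zfs_arc_overflow_shift
echo $(( 4 * 1024*1024*1024 >> 8 ))   # 16777216 bytes (16 MiB)
.fi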
.sp
\fBzfs_arc_p_min_shift\fR (int)
arc_c shift to calc min/max arc_p
Default value: \fB4\fR.
.sp
\fBzfs_arc_p_aggressive_disable\fR (int)
Disable aggressive arc_p growth
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_arc_p_dampener_disable\fR (int)
Disable arc_p adapt dampener
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_arc_shrink_shift\fR (int)
log2(fraction of arc to reclaim)
Default value: \fB5\fR.
.sp
\fBzfs_arc_sys_free\fR (ulong)
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
Default value: \fB0\fR.
.sp
\fBzfs_autoimport_disable\fR (int)
Disable pool import at module load by ignoring the cache file (typically
\fB/etc/zfs/zpool.cache\fR).
Use \fB1\fR for yes (default) and \fB0\fR for no.
.sp
\fBzfs_dbgmsg_enable\fR (int)
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled, to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
Default value: \fB0\fR.
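.sp
A minimal usage sketch (paths as documented above):
.sp
.nf
echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the log
cat /proc/spl/kstat/zfs/dbgmsg                          # read the log
echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear the log
.fi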
.sp
\fBzfs_dbgmsg_maxsize\fR (int)
The maximum size in bytes of the internal ZFS debug log.
Default value: \fB4M\fR.
.sp
\fBzfs_dbuf_state_index\fR (int)
Calculate arc header index
Default value: \fB0\fR.
.sp
\fBzfs_deadman_enabled\fR (int)
Enable deadman timer
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_deadman_synctime_ms\fR (ulong)
Expiration time in milliseconds. This value has two meanings. First it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Secondly, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
in a zevent being logged.
Default value: \fB1,000,000\fR.
.sp
\fBzfs_dedup_prefetch\fR (int)
Enable prefetching of dedup-ed blocks
Use \fB1\fR for yes and \fB0\fR to disable (default).
.sp
\fBzfs_delay_min_dirty_percent\fR (int)
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
Default value: \fB60\fR.
.sp
\fBzfs_delay_scale\fR (int)
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
See the section "ZFS TRANSACTION DELAY".
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
Default value: \fB500,000\fR.
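.sp
Applying the rule of thumb above to a backend assumed to sustain about
2,000 operations per second (an illustrative figure):
.sp
.nf
# zfs_delay_scale ~= 1,000,000,000 / max_ops_per_second
echo $(( 1000000000 / 2000 ))   # 500000, the default
.fi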
.sp
\fBzfs_delete_blocks\fR (ulong)
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously while smaller files are deleted synchronously. Decreasing
this value will reduce the time spent in an unlink(2) system call at the
expense of a longer delay before the freed space is available.
Default value: \fB20,480\fR.
.sp
\fBzfs_dirty_data_max\fR (int)
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
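.sp
As a sketch, capping dirty data at 1 GiB for the current boot (assuming
the parameter is writable on your build):
.sp
.nf
echo $(( 1024*1024*1024 )) > /sys/module/zfs/parameters/zfs_dirty_data_max
.fi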
.sp
\fBzfs_dirty_data_max_max\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
Default value: 25% of physical RAM.
.sp
\fBzfs_dirty_data_max_max_percent\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: \fB25\fR.
.sp
\fBzfs_dirty_data_max_percent\fR (int)
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
.sp
\fBzfs_dirty_data_sync\fR (int)
Start syncing out a transaction group if there is at least this much dirty data.
Default value: \fB67,108,864\fR.
.sp
\fBzfs_free_bpobj_enabled\fR (int)
Enable/disable the processing of the free_bpobj object.
Default value: \fB1\fR.
.sp
\fBzfs_free_max_blocks\fR (ulong)
Maximum number of blocks freed in a single txg.
Default value: \fB100,000\fR.
.sp
\fBzfs_vdev_async_read_max_active\fR (int)
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB3\fR.
.sp
\fBzfs_vdev_async_read_min_active\fR (int)
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB60\fR.
.sp
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB30\fR.
.sp
\fBzfs_vdev_async_write_max_active\fR (int)
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_async_write_min_active\fR (int)
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_max_active\fR (int)
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
Default value: \fB1,000\fR.
.sp
\fBzfs_vdev_scrub_max_active\fR (int)
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB2\fR.
.sp
\fBzfs_vdev_scrub_min_active\fR (int)
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_sync_read_max_active\fR (int)
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_read_min_active\fR (int)
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_max_active\fR (int)
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_min_active\fR (int)
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_disable_dup_eviction\fR (int)
Disable duplicate buffer eviction
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_expire_snapshot\fR (int)
Seconds to expire .zfs/snapshot
Default value: \fB300\fR.
.sp
\fBzfs_admin_snapshot\fR (int)
Allow the creation, removal, or renaming of entries in the .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled
by default.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_flags\fR (int)
Set additional debugging flags. The following flags may be bitwise-or'd
together.
.sp
.nf
1    ZFS_DEBUG_DPRINTF
     Enable dprintf entries in the debug log.
2    ZFS_DEBUG_DBUF_VERIFY *
     Enable extra dbuf verifications.
4    ZFS_DEBUG_DNODE_VERIFY *
     Enable extra dnode verifications.
8    ZFS_DEBUG_SNAPNAMES
     Enable snapshot name verification.
16   ZFS_DEBUG_MODIFY
     Check for illegally modified ARC buffers.
32   ZFS_DEBUG_SPA
     Enable spa_dbgmsg entries in the debug log.
64   ZFS_DEBUG_ZIO_FREE
     Enable verification of block frees.
128  ZFS_DEBUG_HISTOGRAM_VERIFY
     Enable extra spacemap histogram verifications.
.fi
.sp
* Requires debug build.
.sp
Default value: \fB0\fR.
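.sp
Because the flags are bitwise-or'd, several can be combined into a single
value; for example, snapshot name verification (8) together with block
free verification (64):
.sp
.nf
echo $(( 8 | 64 )) > /sys/module/zfs/parameters/zfs_flags   # 72
.fi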
.sp
\fBzfs_free_leak_on_eio\fR (int)
If destroy encounters an EIO while reading metadata (e.g. indirect
blocks), space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled", as
it is unable to make forward progress. While in this stalled state,
all remaining space to free from the error-encountering filesystem is
"temporarily leaked". Set this flag to cause it to ignore the EIO,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.sp
The default, "stalling" behavior is useful if the storage partially
fails (i.e. some but not all i/os fail), and then later recovers. In
this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually
fairly rare.
.sp
Typically pools either (a) fail completely (but perhaps temporarily,
e.g. a top-level vdev going offline), or (b) have localized,
permanent errors (e.g. disk returns the wrong data due to bit flip or
firmware bug). In case (a), this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless. In case (b), because the error is
permanent, the best we can do is leak the minimum amount of space,
which is what setting this flag will do. Therefore, it is reasonable
for this flag to normally be set, but we chose the more conservative
approach of not setting it, so that there is no possibility of
leaking space in the "partial temporary" failure case.
Default value: \fB0\fR.
.sp
\fBzfs_free_min_time_ms\fR (int)
Min millisecs to free per txg
Default value: \fB1,000\fR.
.sp
\fBzfs_immediate_write_sz\fR (long)
Largest data block to write to zil
Default value: \fB32,768\fR.
.sp
\fBzfs_max_recordsize\fR (int)
We currently support block sizes from 512 bytes to 16MB. The benefits of
larger blocks, and thus larger IO, need to be weighed against the cost of
COWing a giant block to modify one byte. Additionally, very large blocks
can have an impact on i/o latency, and also potentially on the memory
allocator. Therefore, we do not allow the recordsize to be set larger than
zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
this tunable, and pools with larger blocks can always be imported and used,
regardless of this setting.
Default value: \fB1,048,576\fR.
.sp
\fBzfs_mdcomp_disable\fR (int)
Disable meta data compression
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_metaslab_fragmentation_threshold\fR (int)
Allow metaslabs to keep their active state as long as their fragmentation
percentage is less than or equal to this value. An active metaslab that
exceeds this threshold will no longer keep its active status allowing
better metaslabs to be selected.
Default value: \fB70\fR.
.sp
\fBzfs_mg_fragmentation_threshold\fR (int)
Metaslab groups are considered eligible for allocations if their
fragmentation metric (measured as a percentage) is less than or equal to
this value. If a metaslab group exceeds this threshold then it will be
skipped unless all metaslab groups within the metaslab class have also
crossed this threshold.
Default value: \fB85\fR.
.sp
\fBzfs_mg_noalloc_threshold\fR (int)
Defines a threshold at which metaslab groups should be eligible for
allocations. The value is expressed as a percentage of free space
beyond which a metaslab group is always eligible for allocations.
If a metaslab group's free space is less than or equal to the
threshold, the allocator will avoid allocating to that group
unless all groups in the pool have reached the threshold. Once all
groups have reached the threshold, all groups are allowed to accept
allocations. The default value of 0 disables the feature and causes
all metaslab groups to be eligible for allocations.
.sp
This parameter makes it possible to deal with pools having heavily
imbalanced vdevs such as would be the case when a new vdev has been added.
Setting the threshold to a non-zero percentage will stop allocations
from being made to vdevs that aren't filled to the specified percentage
and allow lesser filled vdevs to acquire more allocations than they
otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
Default value: \fB0\fR.
.sp
\fBzfs_no_scrub_io\fR (int)
Set for no scrub I/O
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_no_scrub_prefetch\fR (int)
Set for no scrub prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nocacheflush\fR (int)
Disable cache flushes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_pd_bytes_max\fR (int)
The number of bytes which should be prefetched.
Default value: \fB52,428,800\fR.
.sp
\fBzfs_prefetch_disable\fR (int)
This tunable disables predictive prefetch. Note that it leaves "prescient"
prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
prescient prefetch never issues i/os that end up not being needed, so it
can't hurt performance.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_read_chunk_size\fR (long)
Bytes to read per chunk
Default value: \fB1,048,576\fR.
.sp
\fBzfs_read_history\fR (int)
Historic statistics for the last N reads
Default value: \fB0\fR.
.sp
\fBzfs_read_history_hits\fR (int)
Include cache hits in read history
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_recover\fR (int)
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_resilver_delay\fR (int)
Number of ticks to delay prior to issuing a resilver I/O operation when
a non-resilver or non-scrub I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB2\fR.
.sp
\fBzfs_resilver_min_time_ms\fR (int)
Min millisecs to resilver per txg
Default value: \fB3,000\fR.
.sp
\fBzfs_scan_idle\fR (int)
Idle window in clock ticks. During a scrub or a resilver, if
a non-scrub or non-resilver I/O operation has occurred during this
window, the next scrub or resilver operation is delayed by
\fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks, respectively.
Default value: \fB50\fR.
.sp
\fBzfs_scan_min_time_ms\fR (int)
Min millisecs to scrub per txg
Default value: \fB1,000\fR.
.sp
\fBzfs_scrub_delay\fR (int)
Number of ticks to delay prior to issuing a scrub I/O operation when
a non-scrub or non-resilver I/O operation has occurred within the past
\fBzfs_scan_idle\fR ticks.
Default value: \fB4\fR.
.sp
\fBzfs_send_corrupt_data\fR (int)
Allow sending of corrupt data (ignore read/checksum errors when sending data)
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_sync_pass_deferred_free\fR (int)
Defer frees starting in this pass
Default value: \fB2\fR.
.sp
\fBzfs_sync_pass_dont_compress\fR (int)
Don't compress starting in this pass
Default value: \fB5\fR.
.sp
\fBzfs_sync_pass_rewrite\fR (int)
Rewrite new bps starting in this pass
Default value: \fB2\fR.
.sp
\fBzfs_top_maxinflight\fR (int)
Max I/Os per top-level vdev during scrub or resilver operations.
Default value: \fB32\fR.
.sp
\fBzfs_txg_history\fR (int)
Historic statistics for the last N txgs
Default value: \fB0\fR.
.sp
\fBzfs_txg_timeout\fR (int)
Max seconds worth of delta per txg
Default value: \fB5\fR.
.sp
\fBzfs_vdev_aggregation_limit\fR (int)
Max vdev I/O aggregation size
Default value: \fB131,072\fR.
.sp
\fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to
Default value: \fB16\fR.
.sp
\fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than this value
Default value: \fB16,384\fR.
.sp
\fBzfs_vdev_cache_size\fR (int)
Total size of the per-disk cache
Default value: \fB0\fR.
.sp
\fBzfs_vdev_mirror_switch_us\fR (int)
Switch mirrors every N usecs
Default value: \fB10,000\fR.
.sp
\fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O over gap
Default value: \fB32,768\fR.
.sp
\fBzfs_vdev_scheduler\fR (charp)
I/O scheduler
Default value: \fBnoop\fR.
.sp
\fBzfs_vdev_write_gap_limit\fR (int)
Aggregate write I/O over gap
Default value: \fB4,096\fR.
.sp
\fBzfs_zevent_cols\fR (int)
Max event column width
Default value: \fB80\fR.
.sp
\fBzfs_zevent_console\fR (int)
Log events to the console
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_zevent_len_max\fR (int)
Max event queue length
Default value: \fB0\fR.
.sp
\fBzil_replay_disable\fR (int)
Disable intent logging replay
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzil_slog_limit\fR (ulong)
Max commit bytes to separate log device
Default value: \fB1,048,576\fR.
.sp
\fBzio_delay_max\fR (int)
Max zio millisecond delay before posting event
Default value: \fB30,000\fR.
.sp
\fBzio_requeue_io_start_cut_in_line\fR (int)
Prioritize requeued I/O
Default value: \fB0\fR.
.sp
\fBzio_taskq_batch_pct\fR (uint)
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
for IO. These workers are responsible for IO work such as compression and
checksum calculations. Fractional number of CPUs will be rounded down.
The default value of 75 was chosen to avoid using all CPUs which can result in
latency issues and inconsistent application performance, especially when high
compression is enabled.
Default value: \fB75\fR.
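.sp
For instance, on a hypothetical 8-CPU machine the default yields:
.sp
.nf
echo $(( 8 * 75 / 100 ))   # 6 worker threads
.fi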
.sp
\fBzvol_inhibit_dev\fR (uint)
Do not create zvol device nodes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzvol_major\fR (uint)
Major number for zvol device
Default value: \fB230\fR.
.sp
\fBzvol_max_discard_blocks\fR (ulong)
Max number of blocks to discard at once
Default value: \fB16,384\fR.
.sp
\fBzvol_prefetch_bytes\fR (uint)
When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
from the start and end of the volume. Prefetching these regions
of the volume is desirable because they are likely to be accessed
immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
table.
Default value: \fB131,072\fR.
.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
.sp
In general, smaller max_active's will lead to lower latency of synchronous
operations. Larger max_active's may lead to higher overall throughput,
depending on underlying storage.
.sp
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes to have higher latency and lower throughput.
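.sp
For example, a scrub can be sped up at the cost of foreground latency by
raising its queue maximum at runtime (an illustrative value; revert it
when the scrub completes):
.sp
.nf
echo 8 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
.fi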
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.sp
.nf
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
.fi
.sp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
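.sp
A sketch of that interpolation with the default tunables and a pool at
45% dirty data (the variable names are illustrative, not the in-kernel
identifiers):
.sp
.nf
min_pct=30; max_pct=60; min_active=1; max_active=10; dirty_pct=45
echo $(( min_active + (max_active - min_active) \
    * (dirty_pct - min_pct) / (max_pct - min_pct) ))   # 5
.fi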
.sp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.sp
.nf
min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
min_time is then capped at 100 milliseconds.
.fi
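.sp
Plugging the defaults into that formula at the midpoint of the curve
(dirty data at 80% of \fBzfs_dirty_data_max\fR when delays begin at 60%;
illustrative numbers consistent with the discussion below):
.sp
.nf
# zfs_delay_scale=500000, dirty=80, min=60, max=100 (percent)
echo $(( 500000 * (80 - 60) / (100 - 80) ))   # 500000 ns = 500us
.fi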
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
.sp
.nf
[delay curve: transaction delay (0 to 10ms, linear scale) versus dirty
data from 0% to 100% of zfs_dirty_data_max; the delay stays near zero
for most of the range and rises sharply toward the limit, with
zfs_delay_scale setting the delay at the curve's midpoint]
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
[delay curve: the same relationship on a log scale (up to 100ms);
zfs_delay_scale again determines the delay at the midpoint]
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.