2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" Copyright (c) 2017 Datto Inc.
4 .\" Copyright (c) 2018 by Delphix. All rights reserved.
5 .\" The contents of this file are subject to the terms of the Common Development
6 .\" and Distribution License (the "License"). You may not use this file except
7 .\" in compliance with the License. You can obtain a copy of the license at
8 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
10 .\" See the License for the specific language governing permissions and
11 .\" limitations under the License. When distributing Covered Code, include this
12 .\" CDDL HEADER in each file and include the License file at
13 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
14 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15 .\" own identifying information:
16 .\" Portions Copyright [yyyy] [name of copyright owner]
17 .TH ZFS-MODULE-PARAMETERS 5 "Oct 28, 2017"
19 zfs\-module\-parameters \- ZFS module parameters
23 Description of the different parameters to the ZFS module.
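Most of these parameters are exposed on Linux under
\fB/sys/module/zfs/parameters\fR and can be set persistently through a
modprobe configuration file. A minimal sketch, assuming the usual ZFS on
Linux layout (the parameter used here is only an example):
.nf
  # read the current value of a parameter
  cat /sys/module/zfs/parameters/zfs_txg_timeout

  # change it at runtime (not every parameter may be changed after load)
  echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout

  # make the setting persistent across module loads
  echo "options zfs zfs_txg_timeout=10" >> /etc/modprobe.d/zfs.conf
.fi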
25 .SS "Module parameters"
32 \fBdbuf_cache_max_bytes\fR (ulong)
35 Maximum size in bytes of the dbuf cache. When \fB0\fR this value will default
36 to \fB1/2^dbuf_cache_shift\fR (1/32) of the target ARC size, otherwise the
37 provided value in bytes will be used. The behavior of the dbuf cache and its
associated settings can be observed via the \fB/proc/spl/kstat/zfs/dbufstats\fR
kstat.
41 Default value: \fB0\fR.
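As an example, the current and target dbuf cache sizes can be read from the
kstat named above (the exact field names may vary between versions):
.nf
  grep -E 'cache_(size|target)_bytes' /proc/spl/kstat/zfs/dbufstats
.fi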
47 \fBdbuf_metadata_cache_max_bytes\fR (ulong)
50 Maximum size in bytes of the metadata dbuf cache. When \fB0\fR this value will
default to \fB1/2^dbuf_metadata_cache_shift\fR (1/64) of the target ARC size, otherwise
52 the provided value in bytes will be used. The behavior of the metadata dbuf
53 cache and its associated settings can be observed via the
54 \fB/proc/spl/kstat/zfs/dbufstats\fR kstat.
56 Default value: \fB0\fR.
62 \fBdbuf_cache_hiwater_pct\fR (uint)
The percentage over \fBdbuf_cache_max_bytes\fR when dbufs must be evicted directly.
68 Default value: \fB10\fR%.
74 \fBdbuf_cache_lowater_pct\fR (uint)
The percentage below \fBdbuf_cache_max_bytes\fR when the evict thread stops evicting dbufs.
80 Default value: \fB10\fR%.
86 \fBdbuf_cache_shift\fR (int)
89 Set the size of the dbuf cache, \fBdbuf_cache_max_bytes\fR, to a log2 fraction
90 of the target arc size.
92 Default value: \fB5\fR.
98 \fBdbuf_metadata_cache_shift\fR (int)
101 Set the size of the dbuf metadata cache, \fBdbuf_metadata_cache_max_bytes\fR,
102 to a log2 fraction of the target arc size.
104 Default value: \fB6\fR.
110 \fBignore_hole_birth\fR (int)
113 When set, the hole_birth optimization will not be used, and all holes will
114 always be sent on zfs send. Useful if you suspect your datasets are affected
115 by a bug in hole_birth.
117 Use \fB1\fR for on (default) and \fB0\fR for off.
123 \fBl2arc_feed_again\fR (int)
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
fast as possible.
129 Use \fB1\fR for yes (default) and \fB0\fR to disable.
135 \fBl2arc_feed_min_ms\fR (ulong)
138 Min feed interval in milliseconds. Requires \fBl2arc_feed_again=1\fR and only
139 applicable in related situations.
141 Default value: \fB200\fR.
147 \fBl2arc_feed_secs\fR (ulong)
150 Seconds between L2ARC writing
152 Default value: \fB1\fR.
158 \fBl2arc_headroom\fR (ulong)
161 How far through the ARC lists to search for L2ARC cacheable content, expressed
162 as a multiplier of \fBl2arc_write_max\fR
164 Default value: \fB2\fR.
170 \fBl2arc_headroom_boost\fR (ulong)
173 Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
174 successfully compressed before writing. A value of 100 disables this feature.
176 Default value: \fB200\fR%.
182 \fBl2arc_noprefetch\fR (int)
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
188 Use \fB1\fR for yes (default) and \fB0\fR to disable.
194 \fBl2arc_norw\fR (int)
197 No reads during writes
199 Use \fB1\fR for yes and \fB0\fR for no (default).
205 \fBl2arc_write_boost\fR (ulong)
208 Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
209 while they remain cold.
211 Default value: \fB8,388,608\fR.
217 \fBl2arc_write_max\fR (ulong)
220 Max write bytes per interval
222 Default value: \fB8,388,608\fR.
228 \fBmetaslab_aliquot\fR (ulong)
231 Metaslab granularity, in bytes. This is roughly similar to what would be
232 referred to as the "stripe size" in traditional RAID arrays. In normal
233 operation, ZFS will try to write this amount of data to a top-level vdev
234 before moving on to the next one.
236 Default value: \fB524,288\fR.
242 \fBmetaslab_bias_enabled\fR (int)
245 Enable metaslab group biasing based on its vdev's over- or under-utilization
246 relative to the pool.
248 Use \fB1\fR for yes (default) and \fB0\fR for no.
254 \fBmetaslab_force_ganging\fR (ulong)
257 Make some blocks above a certain size be gang blocks. This option is used
258 by the test suite to facilitate testing.
260 Default value: \fB16,777,217\fR.
266 \fBzfs_metaslab_segment_weight_enabled\fR (int)
269 Enable/disable segment-based metaslab selection.
271 Use \fB1\fR for yes (default) and \fB0\fR for no.
277 \fBzfs_metaslab_switch_threshold\fR (int)
280 When using segment-based metaslab selection, continue allocating
281 from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
282 worth of buckets have been exhausted.
284 Default value: \fB2\fR.
290 \fBmetaslab_debug_load\fR (int)
293 Load all metaslabs during pool import.
295 Use \fB1\fR for yes and \fB0\fR for no (default).
301 \fBmetaslab_debug_unload\fR (int)
304 Prevent metaslabs from being unloaded.
306 Use \fB1\fR for yes and \fB0\fR for no (default).
312 \fBmetaslab_fragmentation_factor_enabled\fR (int)
315 Enable use of the fragmentation metric in computing metaslab weights.
317 Use \fB1\fR for yes (default) and \fB0\fR for no.
323 \fBvdev_max_ms_count\fR (int)
When a vdev is added, target this number of metaslabs per top-level vdev.
328 Default value: \fB200\fR.
334 \fBvdev_min_ms_count\fR (int)
337 Minimum number of metaslabs to create in a top-level vdev.
339 Default value: \fB16\fR.
345 \fBvdev_ms_count_limit\fR (int)
348 Practical upper limit of total metaslabs per top-level vdev.
350 Default value: \fB131,072\fR.
356 \fBmetaslab_preload_enabled\fR (int)
359 Enable metaslab group preloading.
361 Use \fB1\fR for yes (default) and \fB0\fR for no.
367 \fBmetaslab_lba_weighting_enabled\fR (int)
370 Give more weight to metaslabs with lower LBAs, assuming they have
371 greater bandwidth as is typically the case on a modern constant
372 angular velocity disk drive.
374 Use \fB1\fR for yes (default) and \fB0\fR for no.
380 \fBspa_config_path\fR (charp)
SPA config file (/etc/zfs/zpool.cache)
Default value: \fB/etc/zfs/zpool.cache\fR.
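For example, a non-default cache file location could be configured at module
load time (a sketch; the alternate path is illustrative):
.nf
  echo "options zfs spa_config_path=/etc/zfs/zpool.cache.alt" >> /etc/modprobe.d/zfs.conf
.fi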
391 \fBspa_asize_inflation\fR (int)
394 Multiplication factor used to estimate actual disk consumption from the
395 size of data being written. The default value is a worst case estimate,
396 but lower values may be valid for a given pool depending on its
397 configuration. Pool administrators who understand the factors involved
398 may wish to specify a more realistic inflation factor, particularly if
399 they operate close to quota or capacity limits.
401 Default value: \fB24\fR.
407 \fBspa_load_print_vdev_tree\fR (int)
410 Whether to print the vdev tree in the debugging message buffer during pool import.
411 Use 0 to disable and 1 to enable.
413 Default value: \fB0\fR.
419 \fBspa_load_verify_data\fR (int)
422 Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
423 import. Use 0 to disable and 1 to enable.
425 An extreme rewind import normally performs a full traversal of all
426 blocks in the pool for verification. If this parameter is set to 0,
427 the traversal skips non-metadata blocks. It can be toggled once the
428 import has started to stop or start the traversal of non-metadata blocks.
430 Default value: \fB1\fR.
436 \fBspa_load_verify_metadata\fR (int)
439 Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
440 pool import. Use 0 to disable and 1 to enable.
442 An extreme rewind import normally performs a full traversal of all
443 blocks in the pool for verification. If this parameter is set to 0,
444 the traversal is not performed. It can be toggled once the import has
445 started to stop or start the traversal.
447 Default value: \fB1\fR.
453 \fBspa_load_verify_maxinflight\fR (int)
456 Maximum concurrent I/Os during the traversal performed during an "extreme
457 rewind" (\fB-X\fR) pool import.
459 Default value: \fB10000\fR.
465 \fBspa_slop_shift\fR (int)
468 Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
469 in the pool to be consumed. This ensures that we don't run the pool
470 completely out of space, due to unaccounted changes (e.g. to the MOS).
471 It also limits the worst-case time to allocate space. If we have
472 less than this amount of free space, most ZPL operations (e.g. write,
473 create) will return ENOSPC.
475 Default value: \fB5\fR.
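As a worked example, with the default shift of 5 the reserved slop is
1/2^5 = 1/32 of the pool, so (ignoring the small absolute minimum enforced by
the code) a 10 TiB pool holds back roughly:
.nf
  10 TiB / 32 = 320 GiB
.fi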
481 \fBvdev_removal_max_span\fR (int)
484 During top-level vdev removal, chunks of data are copied from the vdev
485 which may include free space in order to trade bandwidth for IOPS.
486 This parameter determines the maximum span of free space (in bytes)
487 which will be included as "unnecessary" data in a chunk of copied data.
489 The default value here was chosen to align with
490 \fBzfs_vdev_read_gap_limit\fR, which is a similar concept when doing
491 regular reads (but there's no reason it has to be the same).
493 Default value: \fB32,768\fR.
499 \fBzfetch_array_rd_sz\fR (ulong)
502 If prefetching is enabled, disable prefetching for reads larger than this size.
504 Default value: \fB1,048,576\fR.
510 \fBzfetch_max_distance\fR (uint)
513 Max bytes to prefetch per stream (default 8MB).
515 Default value: \fB8,388,608\fR.
521 \fBzfetch_max_streams\fR (uint)
524 Max number of streams per zfetch (prefetch streams per file).
526 Default value: \fB8\fR.
532 \fBzfetch_min_sec_reap\fR (uint)
535 Min time before an active prefetch stream can be reclaimed
537 Default value: \fB2\fR.
543 \fBzfs_arc_dnode_limit\fR (ulong)
546 When the number of bytes consumed by dnodes in the ARC exceeds this number of
547 bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling to the amount of dnode metadata, and defaults to 0, which
indicates that a percentage based on \fBzfs_arc_dnode_limit_percent\fR of
the ARC meta buffers may be used for dnodes.
552 See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
553 when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
554 than in response to overall demand for non-metadata.
557 Default value: \fB0\fR.
563 \fBzfs_arc_dnode_limit_percent\fR (ulong)
566 Percentage that can be consumed by dnodes of ARC meta buffers.
568 See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
569 higher priority if set to nonzero value.
571 Default value: \fB10\fR%.
577 \fBzfs_arc_dnode_reduce_percent\fR (ulong)
580 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
581 when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.
584 Default value: \fB10\fR% of the number of dnodes in the ARC.
590 \fBzfs_arc_average_blocksize\fR (int)
593 The ARC's buffer hash table is sized based on the assumption of an average
594 block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
595 to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
596 For configurations with a known larger average block size this value can be
597 increased to reduce the memory footprint.
600 Default value: \fB8192\fR.
606 \fBzfs_arc_evict_batch_limit\fR (int)
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
610 This batch-style operation prevents entire sub-lists from being evicted at once
611 but comes at a cost of additional unlocking and locking.
613 Default value: \fB10\fR.
619 \fBzfs_arc_grow_retry\fR (int)
622 If set to a non zero value, it will replace the arc_grow_retry value with this value.
623 The arc_grow_retry value (default 5) is the number of seconds the ARC will wait before
624 trying to resume growth after a memory pressure event.
626 Default value: \fB0\fR.
632 \fBzfs_arc_lotsfree_percent\fR (int)
635 Throttle I/O when free system memory drops below this percentage of total
636 system memory. Setting this value to 0 will disable the throttle.
638 Default value: \fB10\fR%.
644 \fBzfs_arc_max\fR (ulong)
Maximum size of ARC in bytes. If set to 0 then it will consume 1/2 of system
648 RAM. This value must be at least 67108864 (64 megabytes).
650 This value can be changed dynamically with some caveats. It cannot be set back
651 to 0 while running and reducing it below the current ARC size will not cause
652 the ARC to shrink without memory pressure to induce shrinking.
654 Default value: \fB0\fR.
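For example, to cap the ARC at 4 GiB both at runtime and across reloads
(a sketch assuming the usual /sys and modprobe.d locations):
.nf
  echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
  echo "options zfs zfs_arc_max=4294967296" >> /etc/modprobe.d/zfs.conf
.fi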
660 \fBzfs_arc_meta_adjust_restarts\fR (ulong)
663 The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
665 This value should not need to be tuned but is available to facilitate
666 performance analysis.
668 Default value: \fB4096\fR.
674 \fBzfs_arc_meta_limit\fR (ulong)
677 The maximum allowed size in bytes that meta data buffers are allowed to
678 consume in the ARC. When this limit is reached meta data buffers will
679 be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage based on
\fBzfs_arc_meta_limit_percent\fR of the ARC may be used for meta data.
This value may be changed dynamically, except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
686 Default value: \fB0\fR.
692 \fBzfs_arc_meta_limit_percent\fR (ulong)
695 Percentage of ARC buffers that can be used for meta data.
697 See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
698 higher priority if set to nonzero value.
701 Default value: \fB75\fR%.
707 \fBzfs_arc_meta_min\fR (ulong)
710 The minimum allowed size in bytes that meta data buffers may consume in
711 the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
714 Default value: \fB0\fR.
720 \fBzfs_arc_meta_prune\fR (int)
723 The number of dentries and inodes to be scanned looking for entries
724 which can be dropped. This may be required when the ARC reaches the
725 \fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
727 to be pruned more aggressively. Setting this value to 0 will disable
728 pruning the inode and dentry caches.
730 Default value: \fB10,000\fR.
736 \fBzfs_arc_meta_strategy\fR (int)
739 Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
740 A value of 0 (META_ONLY) will evict only the ARC meta data buffers.
741 A value of 1 (BALANCED) indicates that additional data buffers may be evicted if
that is required in order to evict the required number of meta data buffers.
744 Default value: \fB1\fR.
750 \fBzfs_arc_min\fR (ulong)
Minimum size of ARC in bytes. If set to 0 then arc_c_min will default to
754 consuming the larger of 32M or 1/32 of total system memory.
756 Default value: \fB0\fR.
762 \fBzfs_arc_min_prefetch_ms\fR (int)
765 Minimum time prefetched blocks are locked in the ARC, specified in ms.
766 A value of \fB0\fR will default to 1000 ms.
768 Default value: \fB0\fR.
774 \fBzfs_arc_min_prescient_prefetch_ms\fR (int)
777 Minimum time "prescient prefetched" blocks are locked in the ARC, specified
in ms. These blocks are meant to be prefetched fairly aggressively ahead of
779 the code that may use them. A value of \fB0\fR will default to 6000 ms.
781 Default value: \fB0\fR.
787 \fBzfs_max_missing_tvds\fR (int)
790 Number of missing top-level vdevs which will be allowed during
791 pool import (only in read-only mode).
Default value: \fB0\fR.
799 \fBzfs_multilist_num_sublists\fR (int)
802 To allow more fine-grained locking, each ARC state contains a series
803 of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
805 sub-lists per ARC state, and also applies to other uses of the
806 multilist data structure.
808 Default value: \fB4\fR or the number of online CPUs, whichever is greater
814 \fBzfs_arc_overflow_shift\fR (int)
817 The ARC size is considered to be overflowing if it exceeds the current
818 ARC target size (arc_c) by a threshold determined by this parameter.
819 The threshold is calculated as a fraction of arc_c using the formula
820 "arc_c >> \fBzfs_arc_overflow_shift\fR".
822 The default value of 8 causes the ARC to be considered to be overflowing
823 if it exceeds the target size by 1/256th (0.3%) of the target size.
825 When the ARC is overflowing, new buffer allocations are stalled until
826 the reclaim thread catches up and the overflow condition no longer exists.
828 Default value: \fB8\fR.
835 \fBzfs_arc_p_min_shift\fR (int)
If set to a non zero value, this will update arc_p_min_shift (default 4)
with the new value.
arc_p_min_shift is used as a shift of arc_c for calculating both min and max arc_p.
843 Default value: \fB0\fR.
849 \fBzfs_arc_p_dampener_disable\fR (int)
852 Disable arc_p adapt dampener
854 Use \fB1\fR for yes (default) and \fB0\fR to disable.
860 \fBzfs_arc_shrink_shift\fR (int)
If set to a non zero value, this will update arc_shrink_shift (default 7)
with the new value.
866 Default value: \fB0\fR.
872 \fBzfs_arc_pc_percent\fR (uint)
875 Percent of pagecache to reclaim arc to
877 This tunable allows ZFS arc to play more nicely with the kernel's LRU
878 pagecache. It can guarantee that the arc size won't collapse under scanning
879 pressure on the pagecache, yet still allows arc to be reclaimed down to
880 zfs_arc_min if necessary. This value is specified as percent of pagecache
881 size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
882 only operates during memory pressure/reclaim.
884 Default value: \fB0\fR% (disabled).
890 \fBzfs_arc_sys_free\fR (ulong)
893 The target number of bytes the ARC should leave as free memory on the system.
894 Defaults to the larger of 1/64 of physical memory or 512K. Setting this
895 option to a non-zero value will override the default.
897 Default value: \fB0\fR.
903 \fBzfs_autoimport_disable\fR (int)
906 Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
908 Use \fB1\fR for yes (default) and \fB0\fR for no.
914 \fBzfs_checksums_per_second\fR (int)
917 Rate limit checksum events to this many per second. Note that this should
918 not be set below the zed thresholds (currently 10 checksums over 10 sec)
or else zed may not trigger any action.
Default value: \fB20\fR.
927 \fBzfs_commit_timeout_pct\fR (int)
930 This controls the amount of time that a ZIL block (lwb) will remain "open"
931 when it isn't "full", and it has a thread waiting for it to be committed to
932 stable storage. The timeout is scaled based on a percentage of the last lwb
933 latency to avoid significantly impacting the latency of each individual
934 transaction record (itx).
936 Default value: \fB5\fR%.
942 \fBzfs_condense_indirect_vdevs_enable\fR (int)
945 Enable condensing indirect vdev mappings. When set to a non-zero value,
946 attempt to condense indirect vdev mappings if the mapping uses more than
947 \fBzfs_condense_min_mapping_bytes\fR bytes of memory and if the obsolete
948 space map object uses more than \fBzfs_condense_max_obsolete_bytes\fR
949 bytes on-disk. The condensing process is an attempt to save memory by
950 removing obsolete mappings.
952 Default value: \fB1\fR.
958 \fBzfs_condense_max_obsolete_bytes\fR (ulong)
961 Only attempt to condense indirect vdev mappings if the on-disk size
962 of the obsolete space map object is greater than this number of bytes
(see \fBzfs_condense_indirect_vdevs_enable\fR).
965 Default value: \fB1,073,741,824\fR.
971 \fBzfs_condense_min_mapping_bytes\fR (ulong)
974 Minimum size vdev mapping to attempt to condense (see
975 \fBzfs_condense_indirect_vdevs_enable\fR).
977 Default value: \fB131,072\fR.
983 \fBzfs_dbgmsg_enable\fR (int)
986 Internally ZFS keeps a small log to facilitate debugging. By default the log
987 is disabled, to enable it set this option to 1. The contents of the log can
988 be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
989 this proc file clears the log.
991 Default value: \fB0\fR.
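For example, using the interfaces described above:
.nf
  echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable   # enable the log
  cat /proc/spl/kstat/zfs/dbgmsg                          # read it
  echo 0 > /proc/spl/kstat/zfs/dbgmsg                     # clear it
.fi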
997 \fBzfs_dbgmsg_maxsize\fR (int)
1000 The maximum size in bytes of the internal ZFS debug log.
1002 Default value: \fB4M\fR.
1008 \fBzfs_dbuf_state_index\fR (int)
1011 This feature is currently unused. It is normally used for controlling what
1012 reporting is available under /proc/spl/kstat/zfs.
1014 Default value: \fB0\fR.
1020 \fBzfs_deadman_enabled\fR (int)
1023 When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
1024 milliseconds, or when an individual I/O takes longer than
1025 \fBzfs_deadman_ziotime_ms\fR milliseconds, then the operation is considered to
1026 be "hung". If \fBzfs_deadman_enabled\fR is set then the deadman behavior is
1027 invoked as described by the \fBzfs_deadman_failmode\fR module option.
1028 By default the deadman is enabled and configured to \fBwait\fR which results
1029 in "hung" I/Os only being logged. The deadman is automatically disabled
1030 when a pool gets suspended.
1032 Default value: \fB1\fR.
1038 \fBzfs_deadman_failmode\fR (charp)
1041 Controls the failure behavior when the deadman detects a "hung" I/O. Valid
1042 values are \fBwait\fR, \fBcontinue\fR, and \fBpanic\fR.
1044 \fBwait\fR - Wait for a "hung" I/O to complete. For each "hung" I/O a
1045 "deadman" event will be posted describing that I/O.
1047 \fBcontinue\fR - Attempt to recover from a "hung" I/O by re-dispatching it
1048 to the I/O pipeline if possible.
1050 \fBpanic\fR - Panic the system. This can be used to facilitate an automatic
1051 fail-over to a properly configured fail-over partner.
1053 Default value: \fBwait\fR.
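For example, on a cluster node where a hung pool should trigger fail-over,
the mode could be changed at runtime (a sketch; it assumes the parameter is
writable after load, as is usual for this tunable):
.nf
  echo panic > /sys/module/zfs/parameters/zfs_deadman_failmode
.fi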
1059 \fBzfs_deadman_checktime_ms\fR (int)
1062 Check time in milliseconds. This defines the frequency at which we check
1063 for hung I/O and potentially invoke the \fBzfs_deadman_failmode\fR behavior.
1065 Default value: \fB60,000\fR.
1071 \fBzfs_deadman_synctime_ms\fR (ulong)
1074 Interval in milliseconds after which the deadman is triggered and also
1075 the interval after which a pool sync operation is considered to be "hung".
1076 Once this limit is exceeded the deadman will be invoked every
1077 \fBzfs_deadman_checktime_ms\fR milliseconds until the pool sync completes.
1079 Default value: \fB600,000\fR.
1085 \fBzfs_deadman_ziotime_ms\fR (ulong)
1088 Interval in milliseconds after which the deadman is triggered and an
1089 individual I/O operation is considered to be "hung". As long as the I/O
1090 remains "hung" the deadman will be invoked every \fBzfs_deadman_checktime_ms\fR
1091 milliseconds until the I/O completes.
1093 Default value: \fB300,000\fR.
1099 \fBzfs_dedup_prefetch\fR (int)
Enable prefetching of deduplicated blocks.
1104 Use \fB1\fR for yes and \fB0\fR to disable (default).
1110 \fBzfs_delay_min_dirty_percent\fR (int)
1113 Start to delay each transaction once there is this amount of dirty data,
1114 expressed as a percentage of \fBzfs_dirty_data_max\fR.
1115 This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
1116 See the section "ZFS TRANSACTION DELAY".
1118 Default value: \fB60\fR%.
1124 \fBzfs_delay_scale\fR (int)
1127 This controls how quickly the transaction delay approaches infinity.
1128 Larger values cause longer delays for a given amount of dirty data.
1130 For the smoothest delay, this value should be about 1 billion divided
1131 by the maximum number of operations per second. This will smoothly
1132 handle between 10x and 1/10th this number.
1134 See the section "ZFS TRANSACTION DELAY".
1136 Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
1138 Default value: \fB500,000\fR.
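As a worked example of the guidance above, a pool that can sustain roughly
2,000 operations per second would use:
.nf
  zfs_delay_scale = 1,000,000,000 / 2,000 = 500,000
.fi
which happens to match the default.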
1144 \fBzfs_slow_io_events_per_second\fR (int)
Rate limit delay zevents (which report slow I/Os) to this many per second.
Default value: \fB20\fR.
1155 \fBzfs_delete_blocks\fR (ulong)
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted asynchronously
1160 while smaller files are deleted synchronously. Decreasing this value will
1161 reduce the time spent in an unlink(2) system call at the expense of a longer
1162 delay before the freed space is available.
1164 Default value: \fB20,480\fR.
1170 \fBzfs_dirty_data_max\fR (int)
1173 Determines the dirty space limit in bytes. Once this limit is exceeded, new
1174 writes are halted until space frees up. This parameter takes precedence
1175 over \fBzfs_dirty_data_max_percent\fR.
1176 See the section "ZFS TRANSACTION DELAY".
1178 Default value: \fB10\fR% of physical RAM, capped at \fBzfs_dirty_data_max_max\fR.
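For example, on a system with 32 GiB of RAM and the default percentages:
.nf
  zfs_dirty_data_max     = 10% of 32 GiB = 3.2 GiB
  zfs_dirty_data_max_max = 25% of 32 GiB = 8 GiB   (cap, not reached here)
.fi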
1184 \fBzfs_dirty_data_max_max\fR (int)
1187 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
1188 This limit is only enforced at module load time, and will be ignored if
1189 \fBzfs_dirty_data_max\fR is later changed. This parameter takes
1190 precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
1191 "ZFS TRANSACTION DELAY".
1193 Default value: \fB25\fR% of physical RAM.
1199 \fBzfs_dirty_data_max_max_percent\fR (int)
1202 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
1203 percentage of physical RAM. This limit is only enforced at module load
1204 time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
1205 The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
1206 one. See the section "ZFS TRANSACTION DELAY".
1208 Default value: \fB25\fR%.
1214 \fBzfs_dirty_data_max_percent\fR (int)
1217 Determines the dirty space limit, expressed as a percentage of all
1218 memory. Once this limit is exceeded, new writes are halted until space frees
1219 up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
1220 one. See the section "ZFS TRANSACTION DELAY".
1222 Default value: \fB10\fR%, subject to \fBzfs_dirty_data_max_max\fR.
1228 \fBzfs_dirty_data_sync_percent\fR (int)
1231 Start syncing out a transaction group if there's at least this much dirty data
1232 as a percentage of \fBzfs_dirty_data_max\fR. This should be less than
1233 \fBzfs_vdev_async_write_active_min_dirty_percent\fR.
1235 Default value: \fB20\fR% of \fBzfs_dirty_data_max\fR.
1241 \fBzfs_fletcher_4_impl\fR (string)
1244 Select a fletcher 4 implementation.
1246 Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
1247 \fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
1248 All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
1249 set extensions to be available and will only appear if ZFS detects that they are
1250 present at runtime. If multiple implementations of fletcher 4 are available,
1251 the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
1252 results in the original, CPU based calculation, being used. Selecting any option
1253 other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
1254 the respective CPU instruction set being used.
1256 Default value: \fBfastest\fR.
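For example, the available implementations and the one currently selected can
usually be inspected through the module parameter (output abbreviated and
illustrative):
.nf
  cat /sys/module/zfs/parameters/zfs_fletcher_4_impl
  [fastest] scalar sse2 ssse3 avx2
  echo avx2 > /sys/module/zfs/parameters/zfs_fletcher_4_impl
.fi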
1262 \fBzfs_free_bpobj_enabled\fR (int)
1265 Enable/disable the processing of the free_bpobj object.
1267 Default value: \fB1\fR.
1273 \fBzfs_async_block_max_blocks\fR (ulong)
1276 Maximum number of blocks freed in a single txg.
1278 Default value: \fB100,000\fR.
1284 \fBzfs_override_estimate_recordsize\fR (ulong)
1287 Record size calculation override for zfs send estimates.
1289 Default value: \fB0\fR.
1295 \fBzfs_vdev_async_read_max_active\fR (int)
1298 Maximum asynchronous read I/Os active to each device.
1299 See the section "ZFS I/O SCHEDULER".
1301 Default value: \fB3\fR.
1307 \fBzfs_vdev_async_read_min_active\fR (int)
1310 Minimum asynchronous read I/Os active to each device.
1311 See the section "ZFS I/O SCHEDULER".
1313 Default value: \fB1\fR.
1319 \fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
1322 When the pool has more than
1323 \fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
1324 \fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
1325 the dirty data is between min and max, the active I/O limit is linearly
1326 interpolated. See the section "ZFS I/O SCHEDULER".
1328 Default value: \fB60\fR%.
1334 \fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
1337 When the pool has less than
1338 \fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
1339 \fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
1340 the dirty data is between min and max, the active I/O limit is linearly
1341 interpolated. See the section "ZFS I/O SCHEDULER".
1343 Default value: \fB30\fR%.
1349 \fBzfs_vdev_async_write_max_active\fR (int)
1352 Maximum asynchronous write I/Os active to each device.
1353 See the section "ZFS I/O SCHEDULER".
1355 Default value: \fB10\fR.
1361 \fBzfs_vdev_async_write_min_active\fR (int)
1364 Minimum asynchronous write I/Os active to each device.
1365 See the section "ZFS I/O SCHEDULER".
1367 Lower values are associated with better latency on rotational media but poorer
1368 resilver performance. The default value of 2 was chosen as a compromise. A
1369 value of 3 has been shown to improve resilver performance further at a cost of
1370 further increasing latency.
1372 Default value: \fB2\fR.
1378 \fBzfs_vdev_max_active\fR (int)
1381 The maximum number of I/Os active to each device. Ideally, this will be >=
1382 the sum of each queue's max_active. It must be at least the sum of each
1383 queue's min_active. See the section "ZFS I/O SCHEDULER".
1385 Default value: \fB1,000\fR.
1391 \fBzfs_vdev_scrub_max_active\fR (int)
1394 Maximum scrub I/Os active to each device.
1395 See the section "ZFS I/O SCHEDULER".
1397 Default value: \fB2\fR.
1403 \fBzfs_vdev_scrub_min_active\fR (int)
1406 Minimum scrub I/Os active to each device.
1407 See the section "ZFS I/O SCHEDULER".
1409 Default value: \fB1\fR.
1415 \fBzfs_vdev_sync_read_max_active\fR (int)
1418 Maximum synchronous read I/Os active to each device.
1419 See the section "ZFS I/O SCHEDULER".
1421 Default value: \fB10\fR.
1427 \fBzfs_vdev_sync_read_min_active\fR (int)
1430 Minimum synchronous read I/Os active to each device.
1431 See the section "ZFS I/O SCHEDULER".
1433 Default value: \fB10\fR.
1439 \fBzfs_vdev_sync_write_max_active\fR (int)
1442 Maximum synchronous write I/Os active to each device.
1443 See the section "ZFS I/O SCHEDULER".
1445 Default value: \fB10\fR.
1451 \fBzfs_vdev_sync_write_min_active\fR (int)
1454 Minimum synchronous write I/Os active to each device.
1455 See the section "ZFS I/O SCHEDULER".
1457 Default value: \fB10\fR.
1463 \fBzfs_vdev_queue_depth_pct\fR (int)
1466 Maximum number of queued allocations per top-level vdev expressed as
1467 a percentage of \fBzfs_vdev_async_write_max_active\fR which allows the
1468 system to detect devices that are more capable of handling allocations
1469 and to allocate more blocks to those devices. It allows for dynamic
1470 allocation distribution when devices are imbalanced as fuller devices
1471 will tend to be slower than empty devices.
1473 See also \fBzio_dva_throttle_enabled\fR.
1475 Default value: \fB1000\fR%.
1481 \fBzfs_expire_snapshot\fR (int)
1484 Seconds to expire .zfs/snapshot
1486 Default value: \fB300\fR.
1492 \fBzfs_admin_snapshot\fR (int)
1495 Allow the creation, removal, or renaming of entries in the .zfs/snapshot
1496 directory to cause the creation, destruction, or renaming of snapshots.
1497 When enabled this functionality works both locally and over NFS exports
which have the 'no_root_squash' option set. This functionality is disabled
by default.
1501 Use \fB1\fR for yes and \fB0\fR for no (default).
1507 \fBzfs_flags\fR (int)
Set additional debugging flags. The following flags may be bitwise-or'd
together.
1 ZFS_DEBUG_DPRINTF *
Enable dprintf entries in the debug log.
2 ZFS_DEBUG_DBUF_VERIFY *
Enable extra dbuf verifications.
4 ZFS_DEBUG_DNODE_VERIFY *
Enable extra dnode verifications.
8 ZFS_DEBUG_SNAPNAMES
Enable snapshot name verification.
16 ZFS_DEBUG_MODIFY
Check for illegally modified ARC buffers.
64 ZFS_DEBUG_ZIO_FREE
Enable verification of block frees.
128 ZFS_DEBUG_HISTOGRAM_VERIFY
Enable extra spacemap histogram verifications.
256 ZFS_DEBUG_METASLAB_VERIFY
Verify space accounting on disk matches in-core range_trees.
512 ZFS_DEBUG_SET_ERROR
Enable SET_ERROR and dprintf entries in the debug log.
1549 * Requires debug build.
1551 Default value: \fB0\fR.
1557 \fBzfs_free_leak_on_eio\fR (int)
1560 If destroy encounters an EIO while reading metadata (e.g. indirect
1561 blocks), space referenced by the missing metadata can not be freed.
1562 Normally this causes the background destroy to become "stalled", as
1563 it is unable to make forward progress. While in this stalled state,
1564 all remaining space to free from the error-encountering filesystem is
1565 "temporarily leaked". Set this flag to cause it to ignore the EIO,
1566 permanently leak the space from indirect blocks that can not be read,
1567 and continue to free everything else that it can.
1569 The default, "stalling" behavior is useful if the storage partially
1570 fails (i.e. some but not all i/os fail), and then later recovers. In
1571 this case, we will be able to continue pool operations while it is
1572 partially failed, and when it recovers, we can continue to free the
space, with no leaks. However, note that this case is actually fairly rare.
1576 Typically pools either (a) fail completely (but perhaps temporarily,
1577 e.g. a top-level vdev going offline), or (b) have localized,
1578 permanent errors (e.g. disk returns the wrong data due to bit flip or
1579 firmware bug). In case (a), this setting does not matter because the
1580 pool will be suspended and the sync thread will not be able to make
1581 forward progress regardless. In case (b), because the error is
1582 permanent, the best we can do is leak the minimum amount of space,
1583 which is what setting this flag will do. Therefore, it is reasonable
1584 for this flag to normally be set, but we chose the more conservative
1585 approach of not setting it, so that there is no possibility of
1586 leaking space in the "partial temporary" failure case.
1588 Default value: \fB0\fR.
1594 \fBzfs_free_min_time_ms\fR (int)
1597 During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
1598 of this much time will be spent working on freeing blocks per txg.
1600 Default value: \fB1,000\fR.
1606 \fBzfs_immediate_write_sz\fR (long)
1609 Largest data block to write to zil. Larger blocks will be treated as if the
1610 dataset being written to had the property setting \fBlogbias=throughput\fR.
1612 Default value: \fB32,768\fR.
1618 \fBzfs_lua_max_instrlimit\fR (ulong)
1621 The maximum execution time limit that can be set for a ZFS channel program,
1622 specified as a number of Lua instructions.
1624 Default value: \fB100,000,000\fR.
1630 \fBzfs_lua_max_memlimit\fR (ulong)
The maximum memory limit that can be set for a ZFS channel program, specified
in bytes.
1636 Default value: \fB104,857,600\fR.
1642 \fBzfs_max_dataset_nesting\fR (int)
1645 The maximum depth of nested datasets. This value can be tuned temporarily to
1646 fix existing datasets that exceed the predefined limit.
1648 Default value: \fB50\fR.
1654 \fBzfs_max_recordsize\fR (int)
1657 We currently support block sizes from 512 bytes to 16MB. The benefits of
1658 larger blocks, and thus larger I/O, need to be weighed against the cost of
1659 COWing a giant block to modify one byte. Additionally, very large blocks
1660 can have an impact on i/o latency, and also potentially on the memory
1661 allocator. Therefore, we do not allow the recordsize to be set larger than
1662 zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
1663 this tunable, and pools with larger blocks can always be imported and used,
1664 regardless of this setting.
1666 Default value: \fB1,048,576\fR.
1672 \fBzfs_metaslab_fragmentation_threshold\fR (int)
1675 Allow metaslabs to keep their active state as long as their fragmentation
1676 percentage is less than or equal to this value. An active metaslab that
1677 exceeds this threshold will no longer keep its active status allowing
1678 better metaslabs to be selected.
1680 Default value: \fB70\fR.
1686 \fBzfs_mg_fragmentation_threshold\fR (int)
1689 Metaslab groups are considered eligible for allocations if their
1690 fragmentation metric (measured as a percentage) is less than or equal to
1691 this value. If a metaslab group exceeds this threshold then it will be
1692 skipped unless all metaslab groups within the metaslab class have also
1693 crossed this threshold.
1695 Default value: \fB85\fR.
1701 \fBzfs_mg_noalloc_threshold\fR (int)
1704 Defines a threshold at which metaslab groups should be eligible for
1705 allocations. The value is expressed as a percentage of free space
1706 beyond which a metaslab group is always eligible for allocations.
1707 If a metaslab group's free space is less than or equal to the
1708 threshold, the allocator will avoid allocating to that group
1709 unless all groups in the pool have reached the threshold. Once all
1710 groups have reached the threshold, all groups are allowed to accept
1711 allocations. The default value of 0 disables the feature and causes
1712 all metaslab groups to be eligible for allocations.
1714 This parameter allows one to deal with pools having heavily imbalanced
1715 vdevs such as would be the case when a new vdev has been added.
1716 Setting the threshold to a non-zero percentage will stop allocations
1717 from being made to vdevs that aren't filled to the specified percentage
1718 and allow lesser filled vdevs to acquire more allocations than they
1719 otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
1721 Default value: \fB0\fR.
1727 \fBzfs_ddt_data_is_special\fR (int)
1730 If enabled, ZFS will place DDT data into the special allocation class.
1732 Default value: \fB1\fR.
1738 \fBzfs_user_indirect_is_special\fR (int)
1741 If enabled, ZFS will place user data (both file and zvol) indirect blocks
1742 into the special allocation class.
1744 Default value: \fB1\fR.
1750 \fBzfs_multihost_history\fR (int)
1753 Historical statistics for the last N multihost updates will be available in
1754 \fB/proc/spl/kstat/zfs/<pool>/multihost\fR
1756 Default value: \fB0\fR.
1762 \fBzfs_multihost_interval\fR (ulong)
1765 Used to control the frequency of multihost writes which are performed when the
1766 \fBmultihost\fR pool property is on. This is one factor used to determine
1767 the length of the activity check during import.
1769 The multihost write period is \fBzfs_multihost_interval / leaf-vdevs\fR milliseconds.
1770 This means that on average a multihost write will be issued for each leaf vdev every
1771 \fBzfs_multihost_interval\fR milliseconds. In practice, the observed period can
vary with the I/O load and this observed value is the delay which is stored in
the uberblock.
1775 On import the activity check waits a minimum amount of time determined by
1776 \fBzfs_multihost_interval * zfs_multihost_import_intervals\fR. The activity
1777 check time may be further extended if the value of mmp delay found in the best
1778 uberblock indicates actual multihost updates happened at longer intervals than
1779 \fBzfs_multihost_interval\fR. A minimum value of \fB100ms\fR is enforced.
1781 Default value: \fB1000\fR.
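As a worked example, a pool with 10 leaf vdevs and default settings gives:
.nf
  write period        = 1000 ms / 10 leaf vdevs = 100 ms per leaf vdev
  minimum import wait = 1000 ms * zfs_multihost_import_intervals (10) = 10 s
.fi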
1787 \fBzfs_multihost_import_intervals\fR (uint)
1790 Used to control the duration of the activity test on import. Smaller values of
1791 \fBzfs_multihost_import_intervals\fR will reduce the import time but increase
1792 the risk of failing to detect an active pool. The total activity check time is
never allowed to drop below one second. A value of 0 is ignored and treated as
if it was set to 1.
1796 Default value: \fB10\fR.
1802 \fBzfs_multihost_fail_intervals\fR (uint)
1805 Controls the behavior of the pool when multihost write failures are detected.
1807 When \fBzfs_multihost_fail_intervals = 0\fR then multihost write failures are ignored.
1808 The failures will still be reported to the ZED which depending on its
1809 configuration may take action such as suspending the pool or offlining a device.
1811 When \fBzfs_multihost_fail_intervals > 0\fR then sequential multihost write failures
1812 will cause the pool to be suspended. This occurs when
1813 \fBzfs_multihost_fail_intervals * zfs_multihost_interval\fR milliseconds have
1814 passed since the last successful multihost write. This guarantees the activity test
1815 will see multihost writes if the pool is imported.
1817 Default value: \fB5\fR.
1823 \fBzfs_no_scrub_io\fR (int)
1826 Set for no scrub I/O. This results in scrubs not actually scrubbing data and
1827 simply doing a metadata crawl of the pool instead.
1829 Use \fB1\fR for yes and \fB0\fR for no (default).
1835 \fBzfs_no_scrub_prefetch\fR (int)
1838 Set to disable block prefetching for scrubs.
1840 Use \fB1\fR for yes and \fB0\fR for no (default).
1846 \fBzfs_nocacheflush\fR (int)
1849 Disable cache flush operations on disks when writing. Beware, this may cause
1850 corruption if disks re-order writes.
1852 Use \fB1\fR for yes and \fB0\fR for no (default).
1858 \fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes.
Use \fB1\fR for yes (default) and \fB0\fR to disable.
1869 \fBzfs_dmu_offset_next_sync\fR (int)
1872 Enable forcing txg sync to find holes. When enabled forces ZFS to act
1873 like prior versions when SEEK_HOLE or SEEK_DATA flags are used, which
when a dnode is dirty causes txg's to be synced so that this data can be found.
1877 Use \fB1\fR for yes and \fB0\fR to disable (default).
1883 \fBzfs_pd_bytes_max\fR (int)
1886 The number of bytes which should be prefetched during a pool traversal
1887 (eg: \fBzfs send\fR or other data crawling operations)
1889 Default value: \fB52,428,800\fR.
1895 \fBzfs_per_txg_dirty_frees_percent \fR (ulong)
1898 Tunable to control percentage of dirtied blocks from frees in one TXG.
1899 After this threshold is crossed, additional dirty blocks from frees
1900 wait until the next TXG.
1901 A value of zero will disable this throttle.
Default value: \fB30\fR (\fB0\fR to disable).
1911 \fBzfs_prefetch_disable\fR (int)
1914 This tunable disables predictive prefetch. Note that it leaves "prescient"
1915 prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
1916 prescient prefetch never issues i/os that end up not being needed, so it
1917 can't hurt performance.
1919 Use \fB1\fR for yes and \fB0\fR for no (default).
1925 \fBzfs_read_chunk_size\fR (long)
1928 Bytes to read per chunk
1930 Default value: \fB1,048,576\fR.
1936 \fBzfs_read_history\fR (int)
1939 Historical statistics for the last N reads will be available in
1940 \fB/proc/spl/kstat/zfs/<pool>/reads\fR
1942 Default value: \fB0\fR (no data is kept).
1948 \fBzfs_read_history_hits\fR (int)
1951 Include cache hits in read history
1953 Use \fB1\fR for yes and \fB0\fR for no (default).
1959 \fBzfs_reconstruct_indirect_combinations_max\fR (int)
1962 If an indirect split block contains more than this many possible unique
1963 combinations when being reconstructed, consider it too computationally
1964 expensive to check them all. Instead, try at most
1965 \fBzfs_reconstruct_indirect_combinations_max\fR randomly-selected
1966 combinations each time the block is accessed. This allows all segment
1967 copies to participate fairly in the reconstruction when all combinations
1968 cannot be checked and prevents repeated use of one bad copy.
1970 Default value: \fB256\fR.
1976 \fBzfs_recover\fR (int)
1979 Set to attempt to recover from fatal errors. This should only be used as a
1980 last resort, as it typically results in leaked space, or worse.
1982 Use \fB1\fR for yes and \fB0\fR for no (default).
1988 \fBzfs_removal_ignore_errors\fR (int)
1992 Ignore hard IO errors during device removal. When set, if a device encounters
1993 a hard IO error during the removal process the removal will not be cancelled.
1994 This can result in a normally recoverable block becoming permanently damaged
1995 and is not recommended. This should only be used as a last resort when the
1996 pool cannot be returned to a healthy state prior to removing the device.
1998 Default value: \fB0\fR.
2004 \fBzfs_resilver_min_time_ms\fR (int)
2007 Resilvers are processed by the sync thread. While resilvering it will spend
2008 at least this much time working on a resilver between txg flushes.
2010 Default value: \fB3,000\fR.
2016 \fBzfs_scan_ignore_errors\fR (int)
2019 If set to a nonzero value, remove the DTL (dirty time list) upon
2020 completion of a pool scan (scrub) even if there were unrepairable
2021 errors. It is intended to be used during pool repair or recovery to
2022 stop resilvering when the pool is next imported.
2024 Default value: \fB0\fR.
2030 \fBzfs_scrub_min_time_ms\fR (int)
2033 Scrubs are processed by the sync thread. While scrubbing it will spend
2034 at least this much time working on a scrub between txg flushes.
2036 Default value: \fB1,000\fR.
2042 \fBzfs_scan_checkpoint_intval\fR (int)
2045 To preserve progress across reboots the sequential scan algorithm periodically
needs to stop metadata scanning and issue all the verification I/Os to disk.
2047 The frequency of this flushing is determined by the
2048 \fBzfs_scan_checkpoint_intval\fR tunable.
2050 Default value: \fB7200\fR seconds (every 2 hours).
2056 \fBzfs_scan_fill_weight\fR (int)
2059 This tunable affects how scrub and resilver I/O segments are ordered. A higher
2060 number indicates that we care more about how filled in a segment is, while a
2061 lower number indicates we care more about the size of the extent without
2062 considering the gaps within a segment. This value is only tunable upon module
insertion. Changing the value afterwards will have no effect on scrub or
2064 resilver performance.
2066 Default value: \fB3\fR.
2072 \fBzfs_scan_issue_strategy\fR (int)
2075 Determines the order that data will be verified while scrubbing or resilvering.
2076 If set to \fB1\fR, data will be verified as sequentially as possible, given the
2077 amount of memory reserved for scrubbing (see \fBzfs_scan_mem_lim_fact\fR). This
2078 may improve scrub performance if the pool's data is very fragmented. If set to
2079 \fB2\fR, the largest mostly-contiguous chunk of found data will be verified
2080 first. By deferring scrubbing of small segments, we may later find adjacent data
2081 to coalesce and increase the segment size. If set to \fB0\fR, zfs will use
strategy \fB1\fR during normal verification and strategy \fB2\fR while taking a
checkpoint.
2085 Default value: \fB0\fR.
2091 \fBzfs_scan_legacy\fR (int)
2094 A value of 0 indicates that scrubs and resilvers will gather metadata in
2095 memory before issuing sequential I/O. A value of 1 indicates that the legacy
2096 algorithm will be used where I/O is initiated as soon as it is discovered.
Changing this value to 0 will not affect scrubs or resilvers that are already
in progress.
2100 Default value: \fB0\fR.
2106 \fBzfs_scan_max_ext_gap\fR (int)
2109 Indicates the largest gap in bytes between scrub / resilver I/Os that will still
2110 be considered sequential for sorting purposes. Changing this value will not
2111 affect scrubs or resilvers that are already in progress.
2113 Default value: \fB2097152 (2 MB)\fR.
2119 \fBzfs_scan_mem_lim_fact\fR (int)
2122 Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
2123 This tunable determines the hard limit for I/O sorting memory usage.
2124 When the hard limit is reached we stop scanning metadata and start issuing
2125 data verification I/O. This is done until we get below the soft limit.
2127 Default value: \fB20\fR which is 5% of RAM (1/20).
2133 \fBzfs_scan_mem_lim_soft_fact\fR (int)
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm. When we cross this limit from below no action
2138 is taken. When we cross this limit from above it is because we are issuing
2139 verification I/O. In this case (unless the metadata scan is done) we stop
issuing verification I/O and start scanning metadata again until we get to the
hard limit.
2143 Default value: \fB20\fR which is 5% of the hard limit (1/20).
2149 \fBzfs_scan_vdev_limit\fR (int)
2152 Maximum amount of data that can be concurrently issued at once for scrubs and
2153 resilvers per leaf device, given in bytes.
2155 Default value: \fB41943040\fR.
2161 \fBzfs_send_corrupt_data\fR (int)
2164 Allow sending of corrupt data (ignore read/checksum errors when sending data)
2166 Use \fB1\fR for yes and \fB0\fR for no (default).
2172 \fBzfs_send_queue_length\fR (int)
2175 The maximum number of bytes allowed in the \fBzfs send\fR queue. This value
2176 must be at least twice the maximum block size in use.
2178 Default value: \fB16,777,216\fR.
2184 \fBzfs_recv_queue_length\fR (int)
2188 The maximum number of bytes allowed in the \fBzfs receive\fR queue. This value
2189 must be at least twice the maximum block size in use.
2191 Default value: \fB16,777,216\fR.
2197 \fBzfs_sync_pass_deferred_free\fR (int)
2200 Flushing of data to disk is done in passes. Defer frees starting in this pass
2202 Default value: \fB2\fR.
2208 \fBzfs_spa_discard_memory_limit\fR (int)
2211 Maximum memory used for prefetching a checkpoint's space map on each
2212 vdev while discarding the checkpoint.
2214 Default value: \fB16,777,216\fR.
2220 \fBzfs_sync_pass_dont_compress\fR (int)
2223 Don't compress starting in this pass
2225 Default value: \fB5\fR.
2231 \fBzfs_sync_pass_rewrite\fR (int)
2234 Rewrite new block pointers starting in this pass
2236 Default value: \fB2\fR.
2242 \fBzfs_sync_taskq_batch_pct\fR (int)
2245 This controls the number of threads used by the dp_sync_taskq. The default
2246 value of 75% will create a maximum of one thread per cpu.
2248 Default value: \fB75\fR%.
2254 \fBzfs_txg_history\fR (int)
2257 Historical statistics for the last N txgs will be available in
2258 \fB/proc/spl/kstat/zfs/<pool>/txgs\fR
2260 Default value: \fB0\fR.
2266 \fBzfs_txg_timeout\fR (int)
2269 Flush dirty data to disk at least every N seconds (maximum txg duration)
2271 Default value: \fB5\fR.
2277 \fBzfs_vdev_aggregation_limit\fR (int)
2280 Max vdev I/O aggregation size
2282 Default value: \fB131,072\fR.
2288 \fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to.
2293 Default value: \fB16\fR (effectively 65536).
2299 \fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
size (default 64k).
2305 Default value: \fB16384\fR.
2311 \fBzfs_vdev_cache_size\fR (int)
2314 Total size of the per-disk cache in bytes.
2316 Currently this feature is disabled as it has been found to not be helpful
2317 for performance and in some cases harmful.
2319 Default value: \fB0\fR.
2325 \fBzfs_vdev_mirror_rotating_inc\fR (int)
2328 A number by which the balancing algorithm increments the load calculation for
2329 the purpose of selecting the least busy mirror member when an I/O immediately
follows its predecessor on rotational vdevs for the purpose of making decisions
based on load.
2333 Default value: \fB0\fR.
2339 \fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
2342 A number by which the balancing algorithm increments the load calculation for
2343 the purpose of selecting the least busy mirror member when an I/O lacks
2344 locality as defined by the zfs_vdev_mirror_rotating_seek_offset. I/Os within
this that are not immediately following the previous I/O are incremented by
half.
2348 Default value: \fB5\fR.
2354 \fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
2357 The maximum distance for the last queued I/O in which the balancing algorithm
2358 considers an I/O to have locality.
2359 See the section "ZFS I/O SCHEDULER".
2361 Default value: \fB1048576\fR.
2367 \fBzfs_vdev_mirror_non_rotating_inc\fR (int)
2370 A number by which the balancing algorithm increments the load calculation for
2371 the purpose of selecting the least busy mirror member on non-rotational vdevs
2372 when I/Os do not immediately follow one another.
2374 Default value: \fB0\fR.
2380 \fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
2383 A number by which the balancing algorithm increments the load calculation for
2384 the purpose of selecting the least busy mirror member when an I/O lacks
2385 locality as defined by the zfs_vdev_mirror_rotating_seek_offset. I/Os within
this that are not immediately following the previous I/O are incremented by
half.
2389 Default value: \fB1\fR.
2395 \fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O operations if the gap on-disk between them is within this
threshold.
2401 Default value: \fB32,768\fR.
2407 \fBzfs_vdev_scheduler\fR (charp)
2410 Set the Linux I/O scheduler on whole disk vdevs to this scheduler. Valid options
2411 are noop, cfq, bfq & deadline
2413 Default value: \fBnoop\fR.
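For example, to select the deadline scheduler for whole disk vdevs at module
load (a sketch using a modprobe configuration file):
.nf
  echo "options zfs zfs_vdev_scheduler=deadline" >> /etc/modprobe.d/zfs.conf
.fi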
2419 \fBzfs_vdev_write_gap_limit\fR (int)
2422 Aggregate write I/O over gap
2424 Default value: \fB4,096\fR.
2430 \fBzfs_vdev_raidz_impl\fR (string)
2433 Parameter for selecting raidz parity implementation to use.
2435 Options marked (always) below may be selected on module load as they are
2436 supported on all systems.
2437 The remaining options may only be set after the module is loaded, as they
2438 are available only if the implementations are compiled in and supported
2439 on the running system.
2441 Once the module is loaded, the content of
2442 /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
2443 with the currently selected one enclosed in [].
2444 Possible options are:
2445 fastest - (always) implementation selected using built-in benchmark
2446 original - (always) original raidz implementation
2447 scalar - (always) scalar raidz implementation
2448 sse2 - implementation using SSE2 instruction set (64bit x86 only)
2449 ssse3 - implementation using SSSE3 instruction set (64bit x86 only)
2450 avx2 - implementation using AVX2 instruction set (64bit x86 only)
2451 avx512f - implementation using AVX512F instruction set (64bit x86 only)
2452 avx512bw - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only)
2453 aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only)
2454 aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only)
2456 Default value: \fBfastest\fR.
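For example (output abbreviated and illustrative):
.nf
  cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
  [fastest] original scalar sse2 ssse3 avx2
  echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.fi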
2462 \fBzfs_zevent_cols\fR (int)
2465 When zevents are logged to the console use this as the word wrap width.
2467 Default value: \fB80\fR.
2473 \fBzfs_zevent_console\fR (int)
2476 Log events to the console
2478 Use \fB1\fR for yes and \fB0\fR for no (default).
2484 \fBzfs_zevent_len_max\fR (int)
2487 Max event queue length. A value of 0 will result in a calculated value which
2488 increases with the number of CPUs in the system (minimum 64 events). Events
2489 in the queue can be viewed with the \fBzpool events\fR command.
2491 Default value: \fB0\fR.
2497 \fBzfs_zil_clean_taskq_maxalloc\fR (int)
2500 The maximum number of taskq entries that are allowed to be cached. When this
2501 limit is exceeded transaction records (itxs) will be cleaned synchronously.
2503 Default value: \fB1048576\fR.
2509 \fBzfs_zil_clean_taskq_minalloc\fR (int)
2512 The number of taskq entries that are pre-populated when the taskq is first
2513 created and are immediately available for use.
2515 Default value: \fB1024\fR.
2521 \fBzfs_zil_clean_taskq_nthr_pct\fR (int)
2524 This controls the number of threads used by the dp_zil_clean_taskq. The default
2525 value of 100% will create a maximum of one thread per cpu.
2527 Default value: \fB100\fR%.
2533 \fBzil_replay_disable\fR (int)
Disable intent logging replay. Can be disabled for recovery from corrupted
ZIL.
2539 Use \fB1\fR for yes and \fB0\fR for no (default).
2545 \fBzil_slog_bulk\fR (ulong)
2548 Limit SLOG write size per commit executed with synchronous priority.
2549 Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
2552 Default value: \fB786,432\fR.
2558 \fBzio_decompress_fail_fraction\fR (int)
2561 If non-zero, this value represents the denominator of the probability that zfs
2562 should induce a decompression failure. For instance, for a 5% decompression
2563 failure rate, this value should be set to 20.
2565 Default value: \fB0\fR.
2571 \fBzio_slow_io_ms\fR (int)
2574 When an I/O operation takes more than \fBzio_slow_io_ms\fR milliseconds to
2575 complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow
2576 I/O counters can be seen with "zpool status -s".
2579 Default value: \fB30,000\fR.
2585 \fBzio_dva_throttle_enabled\fR (int)
2588 Throttle block allocations in the I/O pipeline. This allows for
2589 dynamic allocation distribution when devices are imbalanced.
2590 When enabled, the maximum number of pending allocations per top-level vdev
2591 is limited by \fBzfs_vdev_queue_depth_pct\fR.
2593 Default value: \fB1\fR.
2599 \fBzio_requeue_io_start_cut_in_line\fR (int)
2602 Prioritize requeued I/O
2604 Default value: \fB0\fR.
2610 \fBzio_taskq_batch_pct\fR (uint)
2613 Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
2614 for I/O. These workers are responsible for I/O work such as compression and
2615 checksum calculations. A fractional number of CPUs will be rounded down.
2617 The default value of 75 was chosen to avoid using all CPUs which can result in
2618 latency issues and inconsistent application performance, especially when high
2619 compression is enabled.
2621 Default value: \fB75\fR.
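As a minimal sketch of the sizing rule described above (an illustration with a
hypothetical helper name, not the in-kernel code):
.nf
# Hypothetical helper: worker threads implied by zio_taskq_batch_pct.
# A fractional result is rounded down, as described above.
import os

def zio_worker_threads(batch_pct, online_cpus=None):
    cpus = online_cpus if online_cpus is not None else os.cpu_count()
    return (cpus * batch_pct) // 100

# The default of 75% on a 6-CPU system yields 4 workers (4.5 rounded down).
print(zio_worker_threads(75, online_cpus=6))
.fi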
2627 \fBzvol_inhibit_dev\fR (uint)
2630 Do not create zvol device nodes. This may slightly improve startup time on
2631 systems with a very large number of zvols.
2633 Use \fB1\fR for yes and \fB0\fR for no (default).
2639 \fBzvol_major\fR (uint)
2642 Major number for zvol block devices
2644 Default value: \fB230\fR.
2650 \fBzvol_max_discard_blocks\fR (ulong)
2653 Discard (aka TRIM) operations done on zvols will be done in batches of this
2654 many blocks, where block size is determined by the \fBvolblocksize\fR property of the zvol.
2657 Default value: \fB16,384\fR.
2663 \fBzvol_prefetch_bytes\fR (uint)
2666 When adding a zvol to the system, prefetch \fBzvol_prefetch_bytes\fR
2667 from the start and end of the volume. Prefetching these regions
2668 of the volume is desirable because they are likely to be accessed
2669 immediately by \fBblkid(8)\fR or by the kernel scanning for a partition table.
2672 Default value: \fB131,072\fR.
2678 \fBzvol_request_sync\fR (uint)
2681 When processing I/O requests for a zvol submit them synchronously. This
2682 effectively limits the queue depth to 1 for each I/O submitter. When set
2683 to 0 requests are handled asynchronously by a thread pool. The number of
2684 requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
2686 Default value: \fB0\fR.
2692 \fBzvol_threads\fR (uint)
2695 Max number of threads which can handle zvol I/O requests concurrently.
2697 Default value: \fB32\fR.
2703 \fBzvol_volmode\fR (uint)
2706 Defines zvol block device behaviour when \fBvolmode\fR is set to \fBdefault\fR.
2707 Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none).
2709 Default value: \fB1\fR.
2715 \fBzfs_qat_disable\fR (int)
2718 This tunable disables qat hardware acceleration for gzip compression and
2719 AES-GCM encryption. It is available only if qat acceleration is compiled in
2720 and the qat driver is present.
2722 Use \fB1\fR for yes and \fB0\fR for no (default).
2725 .SH ZFS I/O SCHEDULER
2726 ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
2727 The I/O scheduler determines when and in what order those operations are
2728 issued. The I/O scheduler divides operations into five I/O classes
2729 prioritized in the following order: sync read, sync write, async read,
2730 async write, and scrub/resilver. Each queue defines the minimum and
2731 maximum number of concurrent operations that may be issued to the
2732 device. In addition, the device has an aggregate maximum,
2733 \fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
2734 must not exceed the aggregate maximum. If the sum of the per-queue
2735 maximums exceeds the aggregate maximum, then the number of active I/Os
2736 may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
2737 be issued regardless of whether all per-queue minimums have been met.
2739 For many physical devices, throughput increases with the number of
2740 concurrent operations, but latency typically suffers. Further, physical
2741 devices typically have a limit at which more concurrent operations have no
2742 effect on throughput or can actually cause it to decrease.
2744 The scheduler selects the next operation to issue by first looking for an
2745 I/O class whose minimum has not been satisfied. Once all are satisfied and
2746 the aggregate maximum has not been hit, the scheduler looks for classes
2747 whose maximum has not been satisfied. Iteration through the I/O classes is
2748 done in the order specified above. No further operations are issued if the
2749 aggregate maximum number of concurrent operations has been hit or if there
2750 are no operations queued for an I/O class that has not hit its maximum.
2751 Every time an I/O is queued or an operation completes, the I/O scheduler
2752 looks for new operations to issue.
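The selection policy described above can be sketched as follows (illustrative
Python; the queue objects and field names are hypothetical stand-ins for the
per-vdev queues, not the actual kernel structures):
.nf
# Illustrative sketch of the I/O class selection policy described above.
IO_CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

def pick_next_class(queues, vdev_max_active, total_active):
    """Return the I/O class to issue from next, or None to issue nothing."""
    if total_active >= vdev_max_active:
        return None                      # aggregate maximum reached
    # First pass: classes whose minimum has not yet been satisfied.
    for c in IO_CLASSES:                 # priority order
        q = queues[c]
        if q.pending and q.active < q.min_active:
            return c
    # Second pass: classes below their maximum.
    for c in IO_CLASSES:
        q = queues[c]
        if q.pending and q.active < q.max_active:
            return c
    return None                          # nothing eligible to issue
.fi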
2754 In general, smaller max_active's will lead to lower latency of synchronous
2755 operations. Larger max_active's may lead to higher overall throughput,
2756 depending on underlying storage.
2758 The ratio of the queues' max_actives determines the balance of performance
2759 between reads, writes, and scrubs. E.g., increasing
2760 \fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
2761 more quickly, but reads and writes to have higher latency and lower throughput.
2763 All I/O classes have a fixed maximum number of outstanding operations
2764 except for the async write class. Asynchronous writes represent the data
2765 that is committed to stable storage during the syncing stage for
2766 transaction groups. Transaction groups enter the syncing state
2767 periodically so the number of queued async writes will quickly burst up
2768 and then bleed down to zero. Rather than servicing them as quickly as
2769 possible, the I/O scheduler changes the maximum number of active async
2770 write I/Os according to the amount of dirty data in the pool. Since
2771 both throughput and latency typically increase with the number of
2772 concurrent operations issued to physical devices, reducing the
2773 burstiness in the number of concurrent operations also stabilizes the
2774 response time of operations from other -- and in particular synchronous
2775 -- queues. In broad strokes, the I/O scheduler will issue more
2776 concurrent operations from the async write queue as there's more dirty data in the pool.
2781 The number of concurrent operations issued for the async write I/O class
2782 follows a piece-wise linear function defined by a few adjustable points.
.nf
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
.fi
2800 Until the amount of dirty data exceeds a minimum percentage of the dirty
2801 data allowed in the pool, the I/O scheduler will limit the number of
2802 concurrent operations to the minimum. As that threshold is crossed, the
2803 number of concurrent operations issued increases linearly to the maximum at
2804 the specified maximum percentage of the dirty data allowed in the pool.
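A simplified model of that piece-wise linear function is sketched below
(illustration only; the numeric defaults in the argument list are placeholders,
and the corresponding tunables documented earlier in this page are
authoritative):
.nf
# Simplified model of the piece-wise linear function described above.
# Inputs are percentages of zfs_dirty_data_max; argument names mirror the
# module parameters, but the values shown are only placeholders.
def async_write_max_active(dirty_pct,
                           min_active=2,       # zfs_vdev_async_write_min_active
                           max_active=10,      # zfs_vdev_async_write_max_active
                           min_dirty_pct=30,   # ..._active_min_dirty_percent
                           max_dirty_pct=60):  # ..._active_max_dirty_percent
    if dirty_pct <= min_dirty_pct:
        return min_active
    if dirty_pct >= max_dirty_pct:
        return max_active
    # Linear interpolation between the two break points.
    frac = (dirty_pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct)
    return min_active + frac * (max_active - min_active)
.fi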
2806 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2807 part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
2808 and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
2809 maximum percentage, this indicates that the rate of incoming data is
2810 greater than the rate that the backend storage can handle. In this case, we
2811 must further throttle incoming writes, as described in the next section.
2813 .SH ZFS TRANSACTION DELAY
2814 We delay transactions when we've determined that the backend storage
2815 isn't able to accommodate the rate of incoming writes.
2817 If there is already a transaction waiting, we delay relative to when
2818 that transaction will finish waiting. This way the calculated delay time
2819 is independent of the number of threads concurrently executing transactions.
2822 If we are the only waiter, wait relative to when the transaction
2823 started, rather than the current time. This credits the transaction for
2824 "time already served", e.g. reading indirect blocks.
2826 The minimum time for a transaction to take is calculated as:
2828 min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
2829 min_time is then capped at 100 milliseconds.
2832 The delay has two degrees of freedom that can be adjusted via tunables. The
2833 percentage of dirty data at which we start to delay is defined by
2834 \fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
2835 \fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
2836 delay after writing at full speed has failed to keep up with the incoming write
2837 rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
2838 this variable determines the amount of delay at the midpoint of the curve.
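As a worked example of the formula above (a sketch only, assuming the default
\fBzfs_delay_scale\fR of 500,000 nanoseconds; dirty, min and max are amounts of
dirty data in consistent units, with min being the
\fBzfs_delay_min_dirty_percent\fR threshold and max being \fBzfs_dirty_data_max\fR):
.nf
# Worked example of the transaction delay formula (illustration only;
# assumes dirty_min <= dirty < dirty_max).
def tx_delay_ns(dirty, dirty_min, dirty_max, delay_scale=500000):
    if dirty <= dirty_min:
        return 0
    ns = delay_scale * (dirty - dirty_min) / (dirty_max - dirty)
    return min(ns, 100 * 1000 * 1000)    # capped at 100 milliseconds

# Halfway between the threshold and the limit the ratio is 1, so the delay
# equals zfs_delay_scale: 500,000ns = 500us, i.e. roughly 2000 IOPS.
print(tx_delay_ns(dirty=80, dirty_min=60, dirty_max=100))   # -> 500000.0
.fi
At the midpoint the delay therefore equals \fBzfs_delay_scale\fR itself, which is
the 500us/2000 IOPS midpoint referred to below.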
[delay graph: the x axis runs from 0% to 100% of zfs_dirty_data_max and the
y axis from 0 to 10ms of delay; the curve stays near zero across most of the
range, passes through \fBzfs_delay_scale\fR at its midpoint, and rises steeply
as the amount of dirty data approaches the limit]
2866 Note that since the delay is added to the outstanding time remaining on the
2867 most recent transaction, the delay is effectively the inverse of IOPS.
2868 Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
2869 was chosen such that small changes in the amount of accumulated dirty data
2870 in the first 3/4 of the curve yield relatively small differences in the amount of delay.
2873 The effects can be easier to understand when the amount of delay is
2874 represented on a log scale:
[the same delay curve on a logarithmic scale, with 100ms at the top of the y
axis and 0% to 100% of zfs_dirty_data_max on the x axis: the delay remains
small until the amount of dirty data approaches its limit, at which point it
increases rapidly]
2902 Note here that only as the amount of dirty data approaches its limit does
2903 the delay start to increase rapidly. The goal of a properly tuned system
2904 should be to keep the amount of dirty data out of that range by first
2905 ensuring that the appropriate limits are set for the I/O scheduler to reach
2906 optimal throughput on the backend storage, and then by changing the value
2907 of \fBzfs_delay_scale\fR to increase the steepness of the curve.