'\" te
.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" Copyright (c) 2017 Datto Inc.
.\" Copyright (c) 2018 by Delphix. All rights reserved.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Oct 28, 2017"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the ZFS module.

.SS "Module parameters"
.sp
.LP
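Many of these parameters can be inspected and, where they are dynamic, changed
at runtime through the standard Linux module parameter interface, or set
persistently with a modprobe options file. The sketch below is purely
illustrative and assumes a typical Linux installation of ZFS:
.sp
.nf
# Read the current value of a parameter.
cat /sys/module/zfs/parameters/zfs_prefetch_disable

# Change a dynamic parameter at runtime (1 here is just an example value).
echo 1 > /sys/module/zfs/parameters/zfs_prefetch_disable

# Set a parameter for the next module load, e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_prefetch_disable=1
.fi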
29.sp
30.ne 2
31.na
32\fBdbuf_cache_max_bytes\fR (ulong)
33.ad
34.RS 12n
35Maximum size in bytes of the dbuf cache. When \fB0\fR this value will default
36to \fB1/2^dbuf_cache_shift\fR (1/32) of the target ARC size, otherwise the
37provided value in bytes will be used. The behavior of the dbuf cache and its
38associated settings can be observed via the \fB/proc/spl/kstat/zfs/dbufstats\fR
39kstat.
40.sp
41Default value: \fB0\fR.
42.RE
43
2e5dc449
MA
44.sp
45.ne 2
46.na
47\fBdbuf_metadata_cache_max_bytes\fR (ulong)
48.ad
49.RS 12n
Maximum size in bytes of the metadata dbuf cache. When \fB0\fR this value will
default to \fB1/2^dbuf_metadata_cache_shift\fR (1/64) of the target ARC size, otherwise
52the provided value in bytes will be used. The behavior of the metadata dbuf
53cache and its associated settings can be observed via the
54\fB/proc/spl/kstat/zfs/dbufstats\fR kstat.
55.sp
56Default value: \fB0\fR.
57.RE
58
de4f8d5d
BB
59.sp
60.ne 2
61.na
62\fBdbuf_cache_hiwater_pct\fR (uint)
63.ad
64.RS 12n
65The percentage over \fBdbuf_cache_max_bytes\fR when dbufs must be evicted
66directly.
67.sp
68Default value: \fB10\fR%.
69.RE
70
71.sp
72.ne 2
73.na
74\fBdbuf_cache_lowater_pct\fR (uint)
75.ad
76.RS 12n
77The percentage below \fBdbuf_cache_max_bytes\fR when the evict thread stops
78evicting dbufs.
79.sp
80Default value: \fB10\fR%.
81.RE
82
83.sp
84.ne 2
85.na
86\fBdbuf_cache_shift\fR (int)
87.ad
88.RS 12n
89Set the size of the dbuf cache, \fBdbuf_cache_max_bytes\fR, to a log2 fraction
90of the target arc size.
91.sp
92Default value: \fB5\fR.
93.RE
94
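.sp
.LP
As a worked example for \fBdbuf_cache_shift\fR (the ARC target size below is
hypothetical), an ARC target of 4 GiB with the default shift of 5 yields a
default dbuf cache of 128 MiB:
.sp
.nf
# target ARC size >> dbuf_cache_shift:  4 GiB / 2^5 = 128 MiB (example numbers)
echo $(( (4 * 1024 * 1024 * 1024) >> 5 ))      # prints 134217728
.fi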
95.sp
96.ne 2
97.na
98\fBdbuf_metadata_cache_shift\fR (int)
99.ad
100.RS 12n
101Set the size of the dbuf metadata cache, \fBdbuf_metadata_cache_max_bytes\fR,
102to a log2 fraction of the target arc size.
103.sp
104Default value: \fB6\fR.
105.RE
106
6d836e6f
RE
107.sp
108.ne 2
109.na
110\fBignore_hole_birth\fR (int)
111.ad
112.RS 12n
113When set, the hole_birth optimization will not be used, and all holes will
114always be sent on zfs send. Useful if you suspect your datasets are affected
115by a bug in hole_birth.
116.sp
9ea9e0b9 117Use \fB1\fR for on (default) and \fB0\fR for off.
6d836e6f
RE
118.RE
119
29714574
TF
120.sp
121.ne 2
122.na
123\fBl2arc_feed_again\fR (int)
124.ad
125.RS 12n
83426735
D
126Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be set as
127fast as possible.
29714574
TF
128.sp
129Use \fB1\fR for yes (default) and \fB0\fR to disable.
130.RE
131
132.sp
133.ne 2
134.na
135\fBl2arc_feed_min_ms\fR (ulong)
136.ad
137.RS 12n
83426735
D
138Min feed interval in milliseconds. Requires \fBl2arc_feed_again=1\fR and only
139applicable in related situations.
29714574
TF
140.sp
141Default value: \fB200\fR.
142.RE
143
144.sp
145.ne 2
146.na
147\fBl2arc_feed_secs\fR (ulong)
148.ad
149.RS 12n
150Seconds between L2ARC writing
151.sp
152Default value: \fB1\fR.
153.RE
154
155.sp
156.ne 2
157.na
158\fBl2arc_headroom\fR (ulong)
159.ad
160.RS 12n
83426735
D
161How far through the ARC lists to search for L2ARC cacheable content, expressed
162as a multiplier of \fBl2arc_write_max\fR
29714574
TF
163.sp
164Default value: \fB2\fR.
165.RE
166
167.sp
168.ne 2
169.na
170\fBl2arc_headroom_boost\fR (ulong)
171.ad
172.RS 12n
83426735
D
173Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
174successfully compressed before writing. A value of 100 disables this feature.
29714574 175.sp
be54a13c 176Default value: \fB200\fR%.
29714574
TF
177.RE
178
29714574
TF
179.sp
180.ne 2
181.na
182\fBl2arc_noprefetch\fR (int)
183.ad
184.RS 12n
83426735
D
185Do not write buffers to L2ARC if they were prefetched but not used by
186applications
29714574
TF
187.sp
188Use \fB1\fR for yes (default) and \fB0\fR to disable.
189.RE
190
191.sp
192.ne 2
193.na
194\fBl2arc_norw\fR (int)
195.ad
196.RS 12n
197No reads during writes
198.sp
199Use \fB1\fR for yes and \fB0\fR for no (default).
200.RE
201
202.sp
203.ne 2
204.na
205\fBl2arc_write_boost\fR (ulong)
206.ad
207.RS 12n
603a1784 208Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
83426735 209while they remain cold.
29714574
TF
210.sp
211Default value: \fB8,388,608\fR.
212.RE
213
214.sp
215.ne 2
216.na
217\fBl2arc_write_max\fR (ulong)
218.ad
219.RS 12n
220Max write bytes per interval
221.sp
222Default value: \fB8,388,608\fR.
223.RE
224
99b14de4
ED
225.sp
226.ne 2
227.na
228\fBmetaslab_aliquot\fR (ulong)
229.ad
230.RS 12n
231Metaslab granularity, in bytes. This is roughly similar to what would be
232referred to as the "stripe size" in traditional RAID arrays. In normal
233operation, ZFS will try to write this amount of data to a top-level vdev
234before moving on to the next one.
235.sp
236Default value: \fB524,288\fR.
237.RE
238
f3a7f661
GW
239.sp
240.ne 2
241.na
242\fBmetaslab_bias_enabled\fR (int)
243.ad
244.RS 12n
245Enable metaslab group biasing based on its vdev's over- or under-utilization
246relative to the pool.
247.sp
248Use \fB1\fR for yes (default) and \fB0\fR for no.
249.RE
250
d830d479
MA
251.sp
252.ne 2
253.na
254\fBmetaslab_force_ganging\fR (ulong)
255.ad
256.RS 12n
257Make some blocks above a certain size be gang blocks. This option is used
258by the test suite to facilitate testing.
259.sp
260Default value: \fB16,777,217\fR.
261.RE
262
4e21fd06
DB
263.sp
264.ne 2
265.na
266\fBzfs_metaslab_segment_weight_enabled\fR (int)
267.ad
268.RS 12n
269Enable/disable segment-based metaslab selection.
270.sp
271Use \fB1\fR for yes (default) and \fB0\fR for no.
272.RE
273
274.sp
275.ne 2
276.na
277\fBzfs_metaslab_switch_threshold\fR (int)
278.ad
279.RS 12n
280When using segment-based metaslab selection, continue allocating
321204be 281from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
4e21fd06
DB
282worth of buckets have been exhausted.
283.sp
284Default value: \fB2\fR.
285.RE
286
29714574
TF
287.sp
288.ne 2
289.na
aa7d06a9 290\fBmetaslab_debug_load\fR (int)
29714574
TF
291.ad
292.RS 12n
aa7d06a9
GW
293Load all metaslabs during pool import.
294.sp
295Use \fB1\fR for yes and \fB0\fR for no (default).
296.RE
297
298.sp
299.ne 2
300.na
301\fBmetaslab_debug_unload\fR (int)
302.ad
303.RS 12n
304Prevent metaslabs from being unloaded.
29714574
TF
305.sp
306Use \fB1\fR for yes and \fB0\fR for no (default).
307.RE
308
f3a7f661
GW
309.sp
310.ne 2
311.na
312\fBmetaslab_fragmentation_factor_enabled\fR (int)
313.ad
314.RS 12n
315Enable use of the fragmentation metric in computing metaslab weights.
316.sp
317Use \fB1\fR for yes (default) and \fB0\fR for no.
318.RE
319
b8bcca18
MA
320.sp
321.ne 2
322.na
c853f382 323\fBzfs_vdev_default_ms_count\fR (int)
b8bcca18
MA
324.ad
325.RS 12n
When a vdev is added, target this number of metaslabs per top-level vdev.
327.sp
328Default value: \fB200\fR.
329.RE
330
d2734cce
SD
331.sp
332.ne 2
333.na
c853f382 334\fBzfs_vdev_min_ms_count\fR (int)
d2734cce
SD
335.ad
336.RS 12n
337Minimum number of metaslabs to create in a top-level vdev.
338.sp
339Default value: \fB16\fR.
340.RE
341
e4e94ca3
DB
342.sp
343.ne 2
344.na
345\fBvdev_ms_count_limit\fR (int)
346.ad
347.RS 12n
348Practical upper limit of total metaslabs per top-level vdev.
349.sp
350Default value: \fB131,072\fR.
351.RE
352
f3a7f661
GW
353.sp
354.ne 2
355.na
356\fBmetaslab_preload_enabled\fR (int)
357.ad
358.RS 12n
359Enable metaslab group preloading.
360.sp
361Use \fB1\fR for yes (default) and \fB0\fR for no.
362.RE
363
364.sp
365.ne 2
366.na
367\fBmetaslab_lba_weighting_enabled\fR (int)
368.ad
369.RS 12n
370Give more weight to metaslabs with lower LBAs, assuming they have
371greater bandwidth as is typically the case on a modern constant
372angular velocity disk drive.
373.sp
374Use \fB1\fR for yes (default) and \fB0\fR for no.
375.RE
376
29714574
TF
377.sp
378.ne 2
379.na
380\fBspa_config_path\fR (charp)
381.ad
382.RS 12n
383SPA config file
384.sp
385Default value: \fB/etc/zfs/zpool.cache\fR.
386.RE
387
e8b96c60
MA
388.sp
389.ne 2
390.na
391\fBspa_asize_inflation\fR (int)
392.ad
393.RS 12n
394Multiplication factor used to estimate actual disk consumption from the
395size of data being written. The default value is a worst case estimate,
396but lower values may be valid for a given pool depending on its
397configuration. Pool administrators who understand the factors involved
398may wish to specify a more realistic inflation factor, particularly if
399they operate close to quota or capacity limits.
400.sp
83426735 401Default value: \fB24\fR.
e8b96c60
MA
402.RE
403
6cb8e530
PZ
404.sp
405.ne 2
406.na
407\fBspa_load_print_vdev_tree\fR (int)
408.ad
409.RS 12n
410Whether to print the vdev tree in the debugging message buffer during pool import.
411Use 0 to disable and 1 to enable.
412.sp
413Default value: \fB0\fR.
414.RE
415
dea377c0
MA
416.sp
417.ne 2
418.na
419\fBspa_load_verify_data\fR (int)
420.ad
421.RS 12n
422Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
423import. Use 0 to disable and 1 to enable.
424
425An extreme rewind import normally performs a full traversal of all
426blocks in the pool for verification. If this parameter is set to 0,
427the traversal skips non-metadata blocks. It can be toggled once the
428import has started to stop or start the traversal of non-metadata blocks.
429.sp
83426735 430Default value: \fB1\fR.
dea377c0
MA
431.RE
432
433.sp
434.ne 2
435.na
436\fBspa_load_verify_metadata\fR (int)
437.ad
438.RS 12n
439Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
440pool import. Use 0 to disable and 1 to enable.
441
442An extreme rewind import normally performs a full traversal of all
1c012083 443blocks in the pool for verification. If this parameter is set to 0,
dea377c0
MA
444the traversal is not performed. It can be toggled once the import has
445started to stop or start the traversal.
446.sp
83426735 447Default value: \fB1\fR.
dea377c0
MA
448.RE
449
450.sp
451.ne 2
452.na
453\fBspa_load_verify_maxinflight\fR (int)
454.ad
455.RS 12n
456Maximum concurrent I/Os during the traversal performed during an "extreme
457rewind" (\fB-X\fR) pool import.
458.sp
83426735 459Default value: \fB10000\fR.
dea377c0
MA
460.RE
461
6cde6435
BB
462.sp
463.ne 2
464.na
465\fBspa_slop_shift\fR (int)
466.ad
467.RS 12n
468Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
469in the pool to be consumed. This ensures that we don't run the pool
470completely out of space, due to unaccounted changes (e.g. to the MOS).
471It also limits the worst-case time to allocate space. If we have
472less than this amount of free space, most ZPL operations (e.g. write,
473create) will return ENOSPC.
474.sp
83426735 475Default value: \fB5\fR.
6cde6435
BB
476.RE
477
0dc2f70c
MA
478.sp
479.ne 2
480.na
481\fBvdev_removal_max_span\fR (int)
482.ad
483.RS 12n
484During top-level vdev removal, chunks of data are copied from the vdev
485which may include free space in order to trade bandwidth for IOPS.
486This parameter determines the maximum span of free space (in bytes)
487which will be included as "unnecessary" data in a chunk of copied data.
488
489The default value here was chosen to align with
490\fBzfs_vdev_read_gap_limit\fR, which is a similar concept when doing
491regular reads (but there's no reason it has to be the same).
492.sp
493Default value: \fB32,768\fR.
494.RE
495
29714574
TF
496.sp
497.ne 2
498.na
499\fBzfetch_array_rd_sz\fR (ulong)
500.ad
501.RS 12n
27b293be 502If prefetching is enabled, disable prefetching for reads larger than this size.
29714574
TF
503.sp
504Default value: \fB1,048,576\fR.
505.RE
506
507.sp
508.ne 2
509.na
7f60329a 510\fBzfetch_max_distance\fR (uint)
29714574
TF
511.ad
512.RS 12n
7f60329a 513Max bytes to prefetch per stream (default 8MB).
29714574 514.sp
7f60329a 515Default value: \fB8,388,608\fR.
29714574
TF
516.RE
517
518.sp
519.ne 2
520.na
521\fBzfetch_max_streams\fR (uint)
522.ad
523.RS 12n
27b293be 524Max number of streams per zfetch (prefetch streams per file).
29714574
TF
525.sp
526Default value: \fB8\fR.
527.RE
528
529.sp
530.ne 2
531.na
532\fBzfetch_min_sec_reap\fR (uint)
533.ad
534.RS 12n
27b293be 535Min time before an active prefetch stream can be reclaimed
29714574
TF
536.sp
537Default value: \fB2\fR.
538.RE
539
25458cbe
TC
540.sp
541.ne 2
542.na
543\fBzfs_arc_dnode_limit\fR (ulong)
544.ad
545.RS 12n
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling to the amount of dnode metadata, and defaults to 0,
which indicates that a percentage of the ARC meta buffers, determined by
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.
551
552See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
553when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
554than in response to overall demand for non-metadata.
555
556.sp
9907cc1c
G
557Default value: \fB0\fR.
558.RE
559
560.sp
561.ne 2
562.na
563\fBzfs_arc_dnode_limit_percent\fR (ulong)
564.ad
565.RS 12n
Percentage of ARC meta buffers that may be consumed by dnodes.
567.sp
568See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
570.sp
be54a13c 571Default value: \fB10\fR%.
25458cbe
TC
572.RE
573
574.sp
575.ne 2
576.na
577\fBzfs_arc_dnode_reduce_percent\fR (ulong)
578.ad
579.RS 12n
580Percentage of ARC dnodes to try to scan in response to demand for non-metadata
6146e17e 581when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.
25458cbe
TC
582
583.sp
be54a13c 584Default value: \fB10\fR% of the number of dnodes in the ARC.
25458cbe
TC
585.RE
586
49ddb315
MA
587.sp
588.ne 2
589.na
590\fBzfs_arc_average_blocksize\fR (int)
591.ad
592.RS 12n
593The ARC's buffer hash table is sized based on the assumption of an average
594block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
595to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
596For configurations with a known larger average block size this value can be
597increased to reduce the memory footprint.
598
599.sp
600Default value: \fB8192\fR.
601.RE
602
ca0bf58d
PS
603.sp
604.ne 2
605.na
606\fBzfs_arc_evict_batch_limit\fR (int)
607.ad
608.RS 12n
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
610This batch-style operation prevents entire sub-lists from being evicted at once
611but comes at a cost of additional unlocking and locking.
612.sp
613Default value: \fB10\fR.
614.RE
615
29714574
TF
616.sp
617.ne 2
618.na
619\fBzfs_arc_grow_retry\fR (int)
620.ad
621.RS 12n
If set to a non-zero value, it will replace the arc_grow_retry value with this value.
d4a72f23 623The arc_grow_retry value (default 5) is the number of seconds the ARC will wait before
ca85d690 624trying to resume growth after a memory pressure event.
29714574 625.sp
ca85d690 626Default value: \fB0\fR.
29714574
TF
627.RE
628
629.sp
630.ne 2
631.na
7e8bddd0 632\fBzfs_arc_lotsfree_percent\fR (int)
29714574
TF
633.ad
634.RS 12n
7e8bddd0
BB
635Throttle I/O when free system memory drops below this percentage of total
636system memory. Setting this value to 0 will disable the throttle.
29714574 637.sp
be54a13c 638Default value: \fB10\fR%.
29714574
TF
639.RE
640
641.sp
642.ne 2
643.na
7e8bddd0 644\fBzfs_arc_max\fR (ulong)
29714574
TF
645.ad
646.RS 12n
83426735
D
Max size of the ARC in bytes. If set to 0 then it will consume 1/2 of system
648RAM. This value must be at least 67108864 (64 megabytes).
649.sp
650This value can be changed dynamically with some caveats. It cannot be set back
651to 0 while running and reducing it below the current ARC size will not cause
652the ARC to shrink without memory pressure to induce shrinking.
29714574 653.sp
7e8bddd0 654Default value: \fB0\fR.
29714574
TF
655.RE
656
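.sp
.LP
As a purely illustrative example (the size shown is arbitrary), the ARC could
be capped at 8 GiB either at runtime or persistently across module loads:
.sp
.nf
# Runtime change; note the value cannot be set back to 0 while running.
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max   # 8 GiB, example value

# Persistent setting, e.g. in /etc/modprobe.d/zfs.conf:
#   options zfs zfs_arc_max=8589934592
.fi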
ca85d690 657.sp
658.ne 2
659.na
660\fBzfs_arc_meta_adjust_restarts\fR (ulong)
661.ad
662.RS 12n
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
665This value should not need to be tuned but is available to facilitate
666performance analysis.
667.sp
668Default value: \fB4096\fR.
669.RE
670
29714574
TF
671.sp
672.ne 2
673.na
674\fBzfs_arc_meta_limit\fR (ulong)
675.ad
676.RS 12n
2cbb06b5
BB
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage of the ARC, based on
\fBzfs_arc_meta_limit_percent\fR, may be used for meta data.
.sp
This value may be changed dynamically, except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
83426735 685.sp
29714574
TF
686Default value: \fB0\fR.
687.RE
688
9907cc1c
G
689.sp
690.ne 2
691.na
692\fBzfs_arc_meta_limit_percent\fR (ulong)
693.ad
694.RS 12n
695Percentage of ARC buffers that can be used for meta data.
696
697See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
699
700.sp
be54a13c 701Default value: \fB75\fR%.
9907cc1c
G
702.RE
703
ca0bf58d
PS
704.sp
705.ne 2
706.na
707\fBzfs_arc_meta_min\fR (ulong)
708.ad
709.RS 12n
710The minimum allowed size in bytes that meta data buffers may consume in
711the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
713.sp
714Default value: \fB0\fR.
715.RE
716
29714574
TF
717.sp
718.ne 2
719.na
720\fBzfs_arc_meta_prune\fR (int)
721.ad
722.RS 12n
2cbb06b5
BB
723The number of dentries and inodes to be scanned looking for entries
724which can be dropped. This may be required when the ARC reaches the
725\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
727to be pruned more aggressively. Setting this value to 0 will disable
728pruning the inode and dentry caches.
29714574 729.sp
2cbb06b5 730Default value: \fB10,000\fR.
29714574
TF
731.RE
732
bc888666
BB
733.sp
734.ne 2
735.na
ca85d690 736\fBzfs_arc_meta_strategy\fR (int)
bc888666
BB
737.ad
738.RS 12n
ca85d690 739Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
740A value of 0 (META_ONLY) will evict only the ARC meta data buffers.
d4a72f23 741A value of 1 (BALANCED) indicates that additional data buffers may be evicted if
that is required in order to evict the required number of meta data buffers.
bc888666 743.sp
ca85d690 744Default value: \fB1\fR.
bc888666
BB
745.RE
746
29714574
TF
747.sp
748.ne 2
749.na
750\fBzfs_arc_min\fR (ulong)
751.ad
752.RS 12n
Minimum size of the ARC in bytes. If set to 0 then arc_c_min will default to
754consuming the larger of 32M or 1/32 of total system memory.
29714574 755.sp
ca85d690 756Default value: \fB0\fR.
29714574
TF
757.RE
758
759.sp
760.ne 2
761.na
d4a72f23 762\fBzfs_arc_min_prefetch_ms\fR (int)
29714574
TF
763.ad
764.RS 12n
d4a72f23 765Minimum time prefetched blocks are locked in the ARC, specified in ms.
2b84817f 766A value of \fB0\fR will default to 1000 ms.
d4a72f23
TC
767.sp
768Default value: \fB0\fR.
769.RE
770
771.sp
772.ne 2
773.na
774\fBzfs_arc_min_prescient_prefetch_ms\fR (int)
775.ad
776.RS 12n
777Minimum time "prescient prefetched" blocks are locked in the ARC, specified
in ms. These blocks are meant to be prefetched fairly aggressively ahead of
2b84817f 779the code that may use them. A value of \fB0\fR will default to 6000 ms.
29714574 780.sp
83426735 781Default value: \fB0\fR.
29714574
TF
782.RE
783
6cb8e530
PZ
784.sp
785.ne 2
786.na
787\fBzfs_max_missing_tvds\fR (int)
788.ad
789.RS 12n
790Number of missing top-level vdevs which will be allowed during
791pool import (only in read-only mode).
792.sp
793Default value: \fB0\fR
794.RE
795
ca0bf58d
PS
796.sp
797.ne 2
798.na
c30e58c4 799\fBzfs_multilist_num_sublists\fR (int)
ca0bf58d
PS
800.ad
801.RS 12n
802To allow more fine-grained locking, each ARC state contains a series
803of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
805sub-lists per ARC state, and also applies to other uses of the
806multilist data structure.
ca0bf58d 807.sp
c30e58c4 808Default value: \fB4\fR or the number of online CPUs, whichever is greater
ca0bf58d
PS
809.RE
810
811.sp
812.ne 2
813.na
814\fBzfs_arc_overflow_shift\fR (int)
815.ad
816.RS 12n
817The ARC size is considered to be overflowing if it exceeds the current
818ARC target size (arc_c) by a threshold determined by this parameter.
819The threshold is calculated as a fraction of arc_c using the formula
820"arc_c >> \fBzfs_arc_overflow_shift\fR".
821
822The default value of 8 causes the ARC to be considered to be overflowing
823if it exceeds the target size by 1/256th (0.3%) of the target size.
824
825When the ARC is overflowing, new buffer allocations are stalled until
826the reclaim thread catches up and the overflow condition no longer exists.
827.sp
828Default value: \fB8\fR.
829.RE
830
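.sp
.LP
As a worked example for \fBzfs_arc_overflow_shift\fR (the ARC target size is
hypothetical), an arc_c of 4 GiB with the default shift of 8 gives an overflow
threshold of 16 MiB:
.sp
.nf
# arc_c >> zfs_arc_overflow_shift:  4 GiB / 2^8 = 16 MiB (example numbers)
echo $(( (4 * 1024 * 1024 * 1024) >> 8 ))      # prints 16777216
.fi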
831.sp
832.ne 2
833.na
834
835\fBzfs_arc_p_min_shift\fR (int)
836.ad
837.RS 12n
If set to a non-zero value, this will update arc_p_min_shift (default 4)
with the new value.
arc_p_min_shift is used as a shift of arc_c for calculating both the minimum
and maximum arc_p.
728d6ae9 842.sp
ca85d690 843Default value: \fB0\fR.
728d6ae9
BB
844.RE
845
62422785
PS
846.sp
847.ne 2
848.na
849\fBzfs_arc_p_dampener_disable\fR (int)
850.ad
851.RS 12n
852Disable arc_p adapt dampener
853.sp
854Use \fB1\fR for yes (default) and \fB0\fR to disable.
855.RE
856
29714574
TF
857.sp
858.ne 2
859.na
860\fBzfs_arc_shrink_shift\fR (int)
861.ad
862.RS 12n
If set to a non-zero value, this will update arc_shrink_shift (default 7)
864with the new value.
29714574 865.sp
ca85d690 866Default value: \fB0\fR.
29714574
TF
867.RE
868
03b60eee
DB
869.sp
870.ne 2
871.na
872\fBzfs_arc_pc_percent\fR (uint)
873.ad
874.RS 12n
875Percent of pagecache to reclaim arc to
876
877This tunable allows ZFS arc to play more nicely with the kernel's LRU
878pagecache. It can guarantee that the arc size won't collapse under scanning
879pressure on the pagecache, yet still allows arc to be reclaimed down to
880zfs_arc_min if necessary. This value is specified as percent of pagecache
881size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
882only operates during memory pressure/reclaim.
883.sp
be54a13c 884Default value: \fB0\fR% (disabled).
03b60eee
DB
885.RE
886
11f552fa
BB
887.sp
888.ne 2
889.na
890\fBzfs_arc_sys_free\fR (ulong)
891.ad
892.RS 12n
893The target number of bytes the ARC should leave as free memory on the system.
894Defaults to the larger of 1/64 of physical memory or 512K. Setting this
895option to a non-zero value will override the default.
896.sp
897Default value: \fB0\fR.
898.RE
899
29714574
TF
900.sp
901.ne 2
902.na
903\fBzfs_autoimport_disable\fR (int)
904.ad
905.RS 12n
27b293be 906Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
29714574 907.sp
70081096 908Use \fB1\fR for yes (default) and \fB0\fR for no.
29714574
TF
909.RE
910
80d52c39
TH
911.sp
912.ne 2
913.na
914\fBzfs_checksums_per_second\fR (int)
915.ad
916.RS 12n
917Rate limit checksum events to this many per second. Note that this should
918not be set below the zed thresholds (currently 10 checksums over 10 sec)
919or else zed may not trigger any action.
920.sp
921Default value: 20
922.RE
923
2fe61a7e
PS
924.sp
925.ne 2
926.na
927\fBzfs_commit_timeout_pct\fR (int)
928.ad
929.RS 12n
930This controls the amount of time that a ZIL block (lwb) will remain "open"
931when it isn't "full", and it has a thread waiting for it to be committed to
932stable storage. The timeout is scaled based on a percentage of the last lwb
933latency to avoid significantly impacting the latency of each individual
934transaction record (itx).
935.sp
be54a13c 936Default value: \fB5\fR%.
2fe61a7e
PS
937.RE
938
0dc2f70c
MA
939.sp
940.ne 2
941.na
942\fBzfs_condense_indirect_vdevs_enable\fR (int)
943.ad
944.RS 12n
945Enable condensing indirect vdev mappings. When set to a non-zero value,
946attempt to condense indirect vdev mappings if the mapping uses more than
947\fBzfs_condense_min_mapping_bytes\fR bytes of memory and if the obsolete
948space map object uses more than \fBzfs_condense_max_obsolete_bytes\fR
949bytes on-disk. The condensing process is an attempt to save memory by
950removing obsolete mappings.
951.sp
952Default value: \fB1\fR.
953.RE
954
955.sp
956.ne 2
957.na
958\fBzfs_condense_max_obsolete_bytes\fR (ulong)
959.ad
960.RS 12n
961Only attempt to condense indirect vdev mappings if the on-disk size
962of the obsolete space map object is greater than this number of bytes
(see \fBzfs_condense_indirect_vdevs_enable\fR).
964.sp
965Default value: \fB1,073,741,824\fR.
966.RE
967
968.sp
969.ne 2
970.na
971\fBzfs_condense_min_mapping_bytes\fR (ulong)
972.ad
973.RS 12n
974Minimum size vdev mapping to attempt to condense (see
975\fBzfs_condense_indirect_vdevs_enable\fR).
976.sp
977Default value: \fB131,072\fR.
978.RE
979
3b36f831
BB
980.sp
981.ne 2
982.na
983\fBzfs_dbgmsg_enable\fR (int)
984.ad
985.RS 12n
986Internally ZFS keeps a small log to facilitate debugging. By default the log
987is disabled, to enable it set this option to 1. The contents of the log can
988be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
989this proc file clears the log.
990.sp
991Default value: \fB0\fR.
992.RE
993
994.sp
995.ne 2
996.na
997\fBzfs_dbgmsg_maxsize\fR (int)
998.ad
999.RS 12n
1000The maximum size in bytes of the internal ZFS debug log.
1001.sp
1002Default value: \fB4M\fR.
1003.RE
1004
29714574
TF
1005.sp
1006.ne 2
1007.na
1008\fBzfs_dbuf_state_index\fR (int)
1009.ad
1010.RS 12n
83426735
D
1011This feature is currently unused. It is normally used for controlling what
1012reporting is available under /proc/spl/kstat/zfs.
29714574
TF
1013.sp
1014Default value: \fB0\fR.
1015.RE
1016
1017.sp
1018.ne 2
1019.na
1020\fBzfs_deadman_enabled\fR (int)
1021.ad
1022.RS 12n
b81a3ddc 1023When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
8fb1ede1
BB
1024milliseconds, or when an individual I/O takes longer than
1025\fBzfs_deadman_ziotime_ms\fR milliseconds, then the operation is considered to
1026be "hung". If \fBzfs_deadman_enabled\fR is set then the deadman behavior is
1027invoked as described by the \fBzfs_deadman_failmode\fR module option.
1028By default the deadman is enabled and configured to \fBwait\fR which results
1029in "hung" I/Os only being logged. The deadman is automatically disabled
1030when a pool gets suspended.
29714574 1031.sp
8fb1ede1
BB
1032Default value: \fB1\fR.
1033.RE
1034
1035.sp
1036.ne 2
1037.na
1038\fBzfs_deadman_failmode\fR (charp)
1039.ad
1040.RS 12n
1041Controls the failure behavior when the deadman detects a "hung" I/O. Valid
1042values are \fBwait\fR, \fBcontinue\fR, and \fBpanic\fR.
1043.sp
1044\fBwait\fR - Wait for a "hung" I/O to complete. For each "hung" I/O a
1045"deadman" event will be posted describing that I/O.
1046.sp
1047\fBcontinue\fR - Attempt to recover from a "hung" I/O by re-dispatching it
1048to the I/O pipeline if possible.
1049.sp
1050\fBpanic\fR - Panic the system. This can be used to facilitate an automatic
1051fail-over to a properly configured fail-over partner.
1052.sp
1053Default value: \fBwait\fR.
b81a3ddc
TC
1054.RE
1055
1056.sp
1057.ne 2
1058.na
1059\fBzfs_deadman_checktime_ms\fR (int)
1060.ad
1061.RS 12n
8fb1ede1
BB
1062Check time in milliseconds. This defines the frequency at which we check
1063for hung I/O and potentially invoke the \fBzfs_deadman_failmode\fR behavior.
b81a3ddc 1064.sp
8fb1ede1 1065Default value: \fB60,000\fR.
29714574
TF
1066.RE
1067
1068.sp
1069.ne 2
1070.na
e8b96c60 1071\fBzfs_deadman_synctime_ms\fR (ulong)
29714574
TF
1072.ad
1073.RS 12n
b81a3ddc 1074Interval in milliseconds after which the deadman is triggered and also
8fb1ede1
BB
1075the interval after which a pool sync operation is considered to be "hung".
1076Once this limit is exceeded the deadman will be invoked every
1077\fBzfs_deadman_checktime_ms\fR milliseconds until the pool sync completes.
1078.sp
1079Default value: \fB600,000\fR.
1080.RE
b81a3ddc 1081
29714574 1082.sp
8fb1ede1
BB
1083.ne 2
1084.na
1085\fBzfs_deadman_ziotime_ms\fR (ulong)
1086.ad
1087.RS 12n
1088Interval in milliseconds after which the deadman is triggered and an
ad796b8a 1089individual I/O operation is considered to be "hung". As long as the I/O
8fb1ede1
BB
1090remains "hung" the deadman will be invoked every \fBzfs_deadman_checktime_ms\fR
1091milliseconds until the I/O completes.
1092.sp
1093Default value: \fB300,000\fR.
29714574
TF
1094.RE
1095
1096.sp
1097.ne 2
1098.na
1099\fBzfs_dedup_prefetch\fR (int)
1100.ad
1101.RS 12n
Enable prefetching dedup-ed blocks
1103.sp
0dfc7324 1104Use \fB1\fR for yes and \fB0\fR to disable (default).
29714574
TF
1105.RE
1106
e8b96c60
MA
1107.sp
1108.ne 2
1109.na
1110\fBzfs_delay_min_dirty_percent\fR (int)
1111.ad
1112.RS 12n
1113Start to delay each transaction once there is this amount of dirty data,
1114expressed as a percentage of \fBzfs_dirty_data_max\fR.
1115This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
1116See the section "ZFS TRANSACTION DELAY".
1117.sp
be54a13c 1118Default value: \fB60\fR%.
e8b96c60
MA
1119.RE
1120
1121.sp
1122.ne 2
1123.na
1124\fBzfs_delay_scale\fR (int)
1125.ad
1126.RS 12n
1127This controls how quickly the transaction delay approaches infinity.
1128Larger values cause longer delays for a given amount of dirty data.
1129.sp
1130For the smoothest delay, this value should be about 1 billion divided
1131by the maximum number of operations per second. This will smoothly
1132handle between 10x and 1/10th this number.
1133.sp
1134See the section "ZFS TRANSACTION DELAY".
1135.sp
1136Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
1137.sp
1138Default value: \fB500,000\fR.
1139.RE
1140
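.sp
.LP
As a worked example for \fBzfs_delay_scale\fR (the throughput figure is
hypothetical), a pool expected to sustain at most about 20,000 operations per
second would suggest a scale of roughly 50,000:
.sp
.nf
# zfs_delay_scale ~= 1,000,000,000 / max operations per second (example figure)
echo $(( 1000000000 / 20000 ))     # prints 50000
.fi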
1141.sp
1142.ne 2
1143.na
62ee31ad 1144\fBzfs_slow_io_events_per_second\fR (int)
80d52c39
TH
1145.ad
1146.RS 12n
ad796b8a 1147Rate limit delay zevents (which report slow I/Os) to this many per second.
80d52c39
TH
1148.sp
1149Default value: 20
1150.RE
1151
a966c564
K
1152.sp
1153.ne 2
1154.na
1155\fBzfs_delete_blocks\fR (ulong)
1156.ad
1157.RS 12n
This value is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted asynchronously
1160while smaller files are deleted synchronously. Decreasing this value will
1161reduce the time spent in an unlink(2) system call at the expense of a longer
1162delay before the freed space is available.
1163.sp
1164Default value: \fB20,480\fR.
1165.RE
1166
e8b96c60
MA
1167.sp
1168.ne 2
1169.na
1170\fBzfs_dirty_data_max\fR (int)
1171.ad
1172.RS 12n
1173Determines the dirty space limit in bytes. Once this limit is exceeded, new
1174writes are halted until space frees up. This parameter takes precedence
1175over \fBzfs_dirty_data_max_percent\fR.
1176See the section "ZFS TRANSACTION DELAY".
1177.sp
be54a13c 1178Default value: \fB10\fR% of physical RAM, capped at \fBzfs_dirty_data_max_max\fR.
e8b96c60
MA
1179.RE
1180
1181.sp
1182.ne 2
1183.na
1184\fBzfs_dirty_data_max_max\fR (int)
1185.ad
1186.RS 12n
1187Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
1188This limit is only enforced at module load time, and will be ignored if
1189\fBzfs_dirty_data_max\fR is later changed. This parameter takes
1190precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
1191"ZFS TRANSACTION DELAY".
1192.sp
be54a13c 1193Default value: \fB25\fR% of physical RAM.
e8b96c60
MA
1194.RE
1195
1196.sp
1197.ne 2
1198.na
1199\fBzfs_dirty_data_max_max_percent\fR (int)
1200.ad
1201.RS 12n
1202Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
1203percentage of physical RAM. This limit is only enforced at module load
1204time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
1205The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
1206one. See the section "ZFS TRANSACTION DELAY".
1207.sp
be54a13c 1208Default value: \fB25\fR%.
e8b96c60
MA
1209.RE
1210
1211.sp
1212.ne 2
1213.na
1214\fBzfs_dirty_data_max_percent\fR (int)
1215.ad
1216.RS 12n
1217Determines the dirty space limit, expressed as a percentage of all
1218memory. Once this limit is exceeded, new writes are halted until space frees
1219up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
1220one. See the section "ZFS TRANSACTION DELAY".
1221.sp
be54a13c 1222Default value: \fB10\fR%, subject to \fBzfs_dirty_data_max_max\fR.
e8b96c60
MA
1223.RE
1224
1225.sp
1226.ne 2
1227.na
dfbe2675 1228\fBzfs_dirty_data_sync_percent\fR (int)
e8b96c60
MA
1229.ad
1230.RS 12n
dfbe2675
MA
1231Start syncing out a transaction group if there's at least this much dirty data
1232as a percentage of \fBzfs_dirty_data_max\fR. This should be less than
1233\fBzfs_vdev_async_write_active_min_dirty_percent\fR.
e8b96c60 1234.sp
dfbe2675 1235Default value: \fB20\fR% of \fBzfs_dirty_data_max\fR.
e8b96c60
MA
1236.RE
1237
1eeb4562
JX
1238.sp
1239.ne 2
1240.na
1241\fBzfs_fletcher_4_impl\fR (string)
1242.ad
1243.RS 12n
1244Select a fletcher 4 implementation.
1245.sp
35a76a03 1246Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
24cdeaf1 1247\fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
70b258fc
GN
1248All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
1249set extensions to be available and will only appear if ZFS detects that they are
1250present at runtime. If multiple implementations of fletcher 4 are available,
1251the \fBfastest\fR will be chosen using a micro benchmark. Selecting \fBscalar\fR
results in the original, CPU based calculation being used. Selecting any option
1253other than \fBfastest\fR and \fBscalar\fR results in vector instructions from
1254the respective CPU instruction set being used.
1eeb4562
JX
1255.sp
1256Default value: \fBfastest\fR.
1257.RE
1258
ba5ad9a4
GW
1259.sp
1260.ne 2
1261.na
1262\fBzfs_free_bpobj_enabled\fR (int)
1263.ad
1264.RS 12n
1265Enable/disable the processing of the free_bpobj object.
1266.sp
1267Default value: \fB1\fR.
1268.RE
1269
36283ca2
MG
1270.sp
1271.ne 2
1272.na
a1d477c2 1273\fBzfs_async_block_max_blocks\fR (ulong)
36283ca2
MG
1274.ad
1275.RS 12n
1276Maximum number of blocks freed in a single txg.
1277.sp
1278Default value: \fB100,000\fR.
1279.RE
1280
ca0845d5
PD
1281.sp
1282.ne 2
1283.na
1284\fBzfs_override_estimate_recordsize\fR (ulong)
1285.ad
1286.RS 12n
1287Record size calculation override for zfs send estimates.
1288.sp
1289Default value: \fB0\fR.
1290.RE
1291
e8b96c60
MA
1292.sp
1293.ne 2
1294.na
1295\fBzfs_vdev_async_read_max_active\fR (int)
1296.ad
1297.RS 12n
83426735 1298Maximum asynchronous read I/Os active to each device.
e8b96c60
MA
1299See the section "ZFS I/O SCHEDULER".
1300.sp
1301Default value: \fB3\fR.
1302.RE
1303
1304.sp
1305.ne 2
1306.na
1307\fBzfs_vdev_async_read_min_active\fR (int)
1308.ad
1309.RS 12n
1310Minimum asynchronous read I/Os active to each device.
1311See the section "ZFS I/O SCHEDULER".
1312.sp
1313Default value: \fB1\fR.
1314.RE
1315
1316.sp
1317.ne 2
1318.na
1319\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
1320.ad
1321.RS 12n
1322When the pool has more than
1323\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
1324\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
1325the dirty data is between min and max, the active I/O limit is linearly
1326interpolated. See the section "ZFS I/O SCHEDULER".
1327.sp
be54a13c 1328Default value: \fB60\fR%.
e8b96c60
MA
1329.RE
1330
1331.sp
1332.ne 2
1333.na
1334\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
1335.ad
1336.RS 12n
1337When the pool has less than
1338\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
1339\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
1340the dirty data is between min and max, the active I/O limit is linearly
1341interpolated. See the section "ZFS I/O SCHEDULER".
1342.sp
be54a13c 1343Default value: \fB30\fR%.
e8b96c60
MA
1344.RE
1345
1346.sp
1347.ne 2
1348.na
1349\fBzfs_vdev_async_write_max_active\fR (int)
1350.ad
1351.RS 12n
83426735 1352Maximum asynchronous write I/Os active to each device.
e8b96c60
MA
1353See the section "ZFS I/O SCHEDULER".
1354.sp
1355Default value: \fB10\fR.
1356.RE
1357
1358.sp
1359.ne 2
1360.na
1361\fBzfs_vdev_async_write_min_active\fR (int)
1362.ad
1363.RS 12n
1364Minimum asynchronous write I/Os active to each device.
1365See the section "ZFS I/O SCHEDULER".
1366.sp
06226b59
D
1367Lower values are associated with better latency on rotational media but poorer
1368resilver performance. The default value of 2 was chosen as a compromise. A
1369value of 3 has been shown to improve resilver performance further at a cost of
1370further increasing latency.
1371.sp
1372Default value: \fB2\fR.
e8b96c60
MA
1373.RE
1374
619f0976
GW
1375.sp
1376.ne 2
1377.na
1378\fBzfs_vdev_initializing_max_active\fR (int)
1379.ad
1380.RS 12n
1381Maximum initializing I/Os active to each device.
1382See the section "ZFS I/O SCHEDULER".
1383.sp
1384Default value: \fB1\fR.
1385.RE
1386
1387.sp
1388.ne 2
1389.na
1390\fBzfs_vdev_initializing_min_active\fR (int)
1391.ad
1392.RS 12n
1393Minimum initializing I/Os active to each device.
1394See the section "ZFS I/O SCHEDULER".
1395.sp
1396Default value: \fB1\fR.
1397.RE
1398
e8b96c60
MA
1399.sp
1400.ne 2
1401.na
1402\fBzfs_vdev_max_active\fR (int)
1403.ad
1404.RS 12n
1405The maximum number of I/Os active to each device. Ideally, this will be >=
1406the sum of each queue's max_active. It must be at least the sum of each
1407queue's min_active. See the section "ZFS I/O SCHEDULER".
1408.sp
1409Default value: \fB1,000\fR.
1410.RE
1411
619f0976
GW
1412.sp
1413.ne 2
1414.na
1415\fBzfs_vdev_removal_max_active\fR (int)
1416.ad
1417.RS 12n
1418Maximum removal I/Os active to each device.
1419See the section "ZFS I/O SCHEDULER".
1420.sp
1421Default value: \fB2\fR.
1422.RE
1423
1424.sp
1425.ne 2
1426.na
1427\fBzfs_vdev_removal_min_active\fR (int)
1428.ad
1429.RS 12n
1430Minimum removal I/Os active to each device.
1431See the section "ZFS I/O SCHEDULER".
1432.sp
1433Default value: \fB1\fR.
1434.RE
1435
e8b96c60
MA
1436.sp
1437.ne 2
1438.na
1439\fBzfs_vdev_scrub_max_active\fR (int)
1440.ad
1441.RS 12n
83426735 1442Maximum scrub I/Os active to each device.
e8b96c60
MA
1443See the section "ZFS I/O SCHEDULER".
1444.sp
1445Default value: \fB2\fR.
1446.RE
1447
1448.sp
1449.ne 2
1450.na
1451\fBzfs_vdev_scrub_min_active\fR (int)
1452.ad
1453.RS 12n
1454Minimum scrub I/Os active to each device.
1455See the section "ZFS I/O SCHEDULER".
1456.sp
1457Default value: \fB1\fR.
1458.RE
1459
1460.sp
1461.ne 2
1462.na
1463\fBzfs_vdev_sync_read_max_active\fR (int)
1464.ad
1465.RS 12n
83426735 1466Maximum synchronous read I/Os active to each device.
e8b96c60
MA
1467See the section "ZFS I/O SCHEDULER".
1468.sp
1469Default value: \fB10\fR.
1470.RE
1471
1472.sp
1473.ne 2
1474.na
1475\fBzfs_vdev_sync_read_min_active\fR (int)
1476.ad
1477.RS 12n
1478Minimum synchronous read I/Os active to each device.
1479See the section "ZFS I/O SCHEDULER".
1480.sp
1481Default value: \fB10\fR.
1482.RE
1483
1484.sp
1485.ne 2
1486.na
1487\fBzfs_vdev_sync_write_max_active\fR (int)
1488.ad
1489.RS 12n
83426735 1490Maximum synchronous write I/Os active to each device.
e8b96c60
MA
1491See the section "ZFS I/O SCHEDULER".
1492.sp
1493Default value: \fB10\fR.
1494.RE
1495
1496.sp
1497.ne 2
1498.na
1499\fBzfs_vdev_sync_write_min_active\fR (int)
1500.ad
1501.RS 12n
1502Minimum synchronous write I/Os active to each device.
1503See the section "ZFS I/O SCHEDULER".
1504.sp
1505Default value: \fB10\fR.
1506.RE
1507
3dfb57a3
DB
1508.sp
1509.ne 2
1510.na
1511\fBzfs_vdev_queue_depth_pct\fR (int)
1512.ad
1513.RS 12n
e815485f
TC
1514Maximum number of queued allocations per top-level vdev expressed as
1515a percentage of \fBzfs_vdev_async_write_max_active\fR which allows the
1516system to detect devices that are more capable of handling allocations
1517and to allocate more blocks to those devices. It allows for dynamic
1518allocation distribution when devices are imbalanced as fuller devices
1519will tend to be slower than empty devices.
1520
1521See also \fBzio_dva_throttle_enabled\fR.
3dfb57a3 1522.sp
be54a13c 1523Default value: \fB1000\fR%.
3dfb57a3
DB
1524.RE
1525
29714574
TF
1526.sp
1527.ne 2
1528.na
1529\fBzfs_expire_snapshot\fR (int)
1530.ad
1531.RS 12n
1532Seconds to expire .zfs/snapshot
1533.sp
1534Default value: \fB300\fR.
1535.RE
1536
0500e835
BB
1537.sp
1538.ne 2
1539.na
1540\fBzfs_admin_snapshot\fR (int)
1541.ad
1542.RS 12n
1543Allow the creation, removal, or renaming of entries in the .zfs/snapshot
1544directory to cause the creation, destruction, or renaming of snapshots.
1545When enabled this functionality works both locally and over NFS exports
1546which have the 'no_root_squash' option set. This functionality is disabled
1547by default.
1548.sp
1549Use \fB1\fR for yes and \fB0\fR for no (default).
1550.RE
1551
29714574
TF
1552.sp
1553.ne 2
1554.na
1555\fBzfs_flags\fR (int)
1556.ad
1557.RS 12n
33b6dbbc
NB
1558Set additional debugging flags. The following flags may be bitwise-or'd
1559together.
1560.sp
1561.TS
1562box;
1563rB lB
1564lB lB
1565r l.
1566Value Symbolic Name
1567 Description
1568_
15691 ZFS_DEBUG_DPRINTF
1570 Enable dprintf entries in the debug log.
1571_
15722 ZFS_DEBUG_DBUF_VERIFY *
1573 Enable extra dbuf verifications.
1574_
15754 ZFS_DEBUG_DNODE_VERIFY *
1576 Enable extra dnode verifications.
1577_
15788 ZFS_DEBUG_SNAPNAMES
1579 Enable snapshot name verification.
1580_
158116 ZFS_DEBUG_MODIFY
1582 Check for illegally modified ARC buffers.
1583_
33b6dbbc
NB
158464 ZFS_DEBUG_ZIO_FREE
1585 Enable verification of block frees.
1586_
1587128 ZFS_DEBUG_HISTOGRAM_VERIFY
1588 Enable extra spacemap histogram verifications.
8740cf4a
NB
1589_
1590256 ZFS_DEBUG_METASLAB_VERIFY
1591 Verify space accounting on disk matches in-core range_trees.
1592_
1593512 ZFS_DEBUG_SET_ERROR
1594 Enable SET_ERROR and dprintf entries in the debug log.
33b6dbbc
NB
1595.TE
1596.sp
1597* Requires debug build.
29714574 1598.sp
33b6dbbc 1599Default value: \fB0\fR.
29714574
TF
1600.RE
1601
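.sp
.LP
For example, to enable both ZFS_DEBUG_MODIFY (16) and ZFS_DEBUG_ZIO_FREE (64)
from the table above, the two values are bitwise-or'd together (a sketch
assuming the standard Linux module parameter path):
.sp
.nf
# 16 | 64 = 80
echo 80 > /sys/module/zfs/parameters/zfs_flags
.fi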
1602.sp
1603.ne 2
1604.na
1605\fBzfs_free_leak_on_eio\fR (int)
1606.ad
1607.RS 12n
1608If destroy encounters an EIO while reading metadata (e.g. indirect
1609blocks), space referenced by the missing metadata can not be freed.
1610Normally this causes the background destroy to become "stalled", as
1611it is unable to make forward progress. While in this stalled state,
1612all remaining space to free from the error-encountering filesystem is
1613"temporarily leaked". Set this flag to cause it to ignore the EIO,
1614permanently leak the space from indirect blocks that can not be read,
1615and continue to free everything else that it can.
1616
1617The default, "stalling" behavior is useful if the storage partially
1618fails (i.e. some but not all i/os fail), and then later recovers. In
1619this case, we will be able to continue pool operations while it is
1620partially failed, and when it recovers, we can continue to free the
1621space, with no leaks. However, note that this case is actually
1622fairly rare.
1623
1624Typically pools either (a) fail completely (but perhaps temporarily,
1625e.g. a top-level vdev going offline), or (b) have localized,
1626permanent errors (e.g. disk returns the wrong data due to bit flip or
1627firmware bug). In case (a), this setting does not matter because the
1628pool will be suspended and the sync thread will not be able to make
1629forward progress regardless. In case (b), because the error is
1630permanent, the best we can do is leak the minimum amount of space,
1631which is what setting this flag will do. Therefore, it is reasonable
1632for this flag to normally be set, but we chose the more conservative
1633approach of not setting it, so that there is no possibility of
1634leaking space in the "partial temporary" failure case.
1635.sp
1636Default value: \fB0\fR.
1637.RE
1638
29714574
TF
1639.sp
1640.ne 2
1641.na
1642\fBzfs_free_min_time_ms\fR (int)
1643.ad
1644.RS 12n
6146e17e 1645During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum
83426735 1646of this much time will be spent working on freeing blocks per txg.
29714574
TF
1647.sp
1648Default value: \fB1,000\fR.
1649.RE
1650
1651.sp
1652.ne 2
1653.na
1654\fBzfs_immediate_write_sz\fR (long)
1655.ad
1656.RS 12n
83426735 1657Largest data block to write to zil. Larger blocks will be treated as if the
6146e17e 1658dataset being written to had the property setting \fBlogbias=throughput\fR.
29714574
TF
1659.sp
1660Default value: \fB32,768\fR.
1661.RE
1662
619f0976
GW
1663.sp
1664.ne 2
1665.na
1666\fBzfs_initialize_value\fR (ulong)
1667.ad
1668.RS 12n
1669Pattern written to vdev free space by \fBzpool initialize\fR.
1670.sp
1671Default value: \fB16,045,690,984,833,335,022\fR (0xdeadbeefdeadbeee).
1672.RE
1673
917f475f
JG
1674.sp
1675.ne 2
1676.na
1677\fBzfs_lua_max_instrlimit\fR (ulong)
1678.ad
1679.RS 12n
1680The maximum execution time limit that can be set for a ZFS channel program,
1681specified as a number of Lua instructions.
1682.sp
1683Default value: \fB100,000,000\fR.
1684.RE
1685
1686.sp
1687.ne 2
1688.na
1689\fBzfs_lua_max_memlimit\fR (ulong)
1690.ad
1691.RS 12n
1692The maximum memory limit that can be set for a ZFS channel program, specified
1693in bytes.
1694.sp
1695Default value: \fB104,857,600\fR.
1696.RE
1697
a7ed98d8
SD
1698.sp
1699.ne 2
1700.na
1701\fBzfs_max_dataset_nesting\fR (int)
1702.ad
1703.RS 12n
1704The maximum depth of nested datasets. This value can be tuned temporarily to
1705fix existing datasets that exceed the predefined limit.
1706.sp
1707Default value: \fB50\fR.
1708.RE
1709
f1512ee6
MA
1710.sp
1711.ne 2
1712.na
1713\fBzfs_max_recordsize\fR (int)
1714.ad
1715.RS 12n
1716We currently support block sizes from 512 bytes to 16MB. The benefits of
ad796b8a 1717larger blocks, and thus larger I/O, need to be weighed against the cost of
f1512ee6
MA
1718COWing a giant block to modify one byte. Additionally, very large blocks
1719can have an impact on i/o latency, and also potentially on the memory
1720allocator. Therefore, we do not allow the recordsize to be set larger than
1721zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
1722this tunable, and pools with larger blocks can always be imported and used,
1723regardless of this setting.
1724.sp
1725Default value: \fB1,048,576\fR.
1726.RE
1727
f3a7f661
GW
1728.sp
1729.ne 2
1730.na
1731\fBzfs_metaslab_fragmentation_threshold\fR (int)
1732.ad
1733.RS 12n
1734Allow metaslabs to keep their active state as long as their fragmentation
1735percentage is less than or equal to this value. An active metaslab that
1736exceeds this threshold will no longer keep its active status allowing
1737better metaslabs to be selected.
1738.sp
1739Default value: \fB70\fR.
1740.RE
1741
1742.sp
1743.ne 2
1744.na
1745\fBzfs_mg_fragmentation_threshold\fR (int)
1746.ad
1747.RS 12n
1748Metaslab groups are considered eligible for allocations if their
83426735 1749fragmentation metric (measured as a percentage) is less than or equal to
f3a7f661
GW
1750this value. If a metaslab group exceeds this threshold then it will be
1751skipped unless all metaslab groups within the metaslab class have also
1752crossed this threshold.
1753.sp
1754Default value: \fB85\fR.
1755.RE
1756
f4a4046b
TC
1757.sp
1758.ne 2
1759.na
1760\fBzfs_mg_noalloc_threshold\fR (int)
1761.ad
1762.RS 12n
1763Defines a threshold at which metaslab groups should be eligible for
1764allocations. The value is expressed as a percentage of free space
1765beyond which a metaslab group is always eligible for allocations.
1766If a metaslab group's free space is less than or equal to the
6b4e21c6 1767threshold, the allocator will avoid allocating to that group
f4a4046b
TC
1768unless all groups in the pool have reached the threshold. Once all
1769groups have reached the threshold, all groups are allowed to accept
1770allocations. The default value of 0 disables the feature and causes
1771all metaslab groups to be eligible for allocations.
1772
b58237e7 1773This parameter allows one to deal with pools having heavily imbalanced
f4a4046b
TC
1774vdevs such as would be the case when a new vdev has been added.
1775Setting the threshold to a non-zero percentage will stop allocations
1776from being made to vdevs that aren't filled to the specified percentage
1777and allow lesser filled vdevs to acquire more allocations than they
1778otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
1779.sp
1780Default value: \fB0\fR.
1781.RE
1782
cc99f275
DB
1783.sp
1784.ne 2
1785.na
1786\fBzfs_ddt_data_is_special\fR (int)
1787.ad
1788.RS 12n
1789If enabled, ZFS will place DDT data into the special allocation class.
1790.sp
1791Default value: \fB1\fR.
1792.RE
1793
1794.sp
1795.ne 2
1796.na
1797\fBzfs_user_indirect_is_special\fR (int)
1798.ad
1799.RS 12n
1800If enabled, ZFS will place user data (both file and zvol) indirect blocks
1801into the special allocation class.
1802.sp
1803Default value: \fB1\fR.
1804.RE
1805
379ca9cf
OF
1806.sp
1807.ne 2
1808.na
1809\fBzfs_multihost_history\fR (int)
1810.ad
1811.RS 12n
1812Historical statistics for the last N multihost updates will be available in
1813\fB/proc/spl/kstat/zfs/<pool>/multihost\fR
1814.sp
1815Default value: \fB0\fR.
1816.RE
1817
1818.sp
1819.ne 2
1820.na
1821\fBzfs_multihost_interval\fR (ulong)
1822.ad
1823.RS 12n
1824Used to control the frequency of multihost writes which are performed when the
1825\fBmultihost\fR pool property is on. This is one factor used to determine
1826the length of the activity check during import.
1827.sp
1828The multihost write period is \fBzfs_multihost_interval / leaf-vdevs\fR milliseconds.
1829This means that on average a multihost write will be issued for each leaf vdev every
1830\fBzfs_multihost_interval\fR milliseconds. In practice, the observed period can
1831vary with the I/O load and this observed value is the delay which is stored in
1832the uberblock.
1833.sp
1834On import the activity check waits a minimum amount of time determined by
1835\fBzfs_multihost_interval * zfs_multihost_import_intervals\fR. The activity
1836check time may be further extended if the value of mmp delay found in the best
1837uberblock indicates actual multihost updates happened at longer intervals than
1838\fBzfs_multihost_interval\fR. A minimum value of \fB100ms\fR is enforced.
1839.sp
1840Default value: \fB1000\fR.
1841.RE
1842
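.sp
.LP
As a worked example for \fBzfs_multihost_interval\fR (the vdev count is
hypothetical), a pool with 8 leaf vdevs and the default interval of 1000 ms
issues a multihost write to some leaf vdev roughly every 125 ms, and an
importing host waits at least 10 seconds for the activity check with the
default \fBzfs_multihost_import_intervals\fR of 10:
.sp
.nf
# write period = zfs_multihost_interval / leaf vdevs (8 vdevs is an example)
echo $(( 1000 / 8 ))       # prints 125 (milliseconds)

# minimum check = zfs_multihost_interval * zfs_multihost_import_intervals
echo $(( 1000 * 10 ))      # prints 10000 (milliseconds)
.fi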
1843.sp
1844.ne 2
1845.na
1846\fBzfs_multihost_import_intervals\fR (uint)
1847.ad
1848.RS 12n
1849Used to control the duration of the activity test on import. Smaller values of
1850\fBzfs_multihost_import_intervals\fR will reduce the import time but increase
1851the risk of failing to detect an active pool. The total activity check time is
1852never allowed to drop below one second. A value of 0 is ignored and treated as
if it was set to 1.
1854.sp
1855Default value: \fB10\fR.
1856.RE
1857
1858.sp
1859.ne 2
1860.na
1861\fBzfs_multihost_fail_intervals\fR (uint)
1862.ad
1863.RS 12n
1864Controls the behavior of the pool when multihost write failures are detected.
1865.sp
1866When \fBzfs_multihost_fail_intervals = 0\fR then multihost write failures are ignored.
1867The failures will still be reported to the ZED which depending on its
1868configuration may take action such as suspending the pool or offlining a device.
1869.sp
1870When \fBzfs_multihost_fail_intervals > 0\fR then sequential multihost write failures
1871will cause the pool to be suspended. This occurs when
1872\fBzfs_multihost_fail_intervals * zfs_multihost_interval\fR milliseconds have
1873passed since the last successful multihost write. This guarantees the activity test
1874will see multihost writes if the pool is imported.
1875.sp
1876Default value: \fB5\fR.
1877.RE
1878
29714574
TF
1879.sp
1880.ne 2
1881.na
1882\fBzfs_no_scrub_io\fR (int)
1883.ad
1884.RS 12n
83426735
D
1885Set for no scrub I/O. This results in scrubs not actually scrubbing data and
1886simply doing a metadata crawl of the pool instead.
29714574
TF
1887.sp
1888Use \fB1\fR for yes and \fB0\fR for no (default).
1889.RE
1890
1891.sp
1892.ne 2
1893.na
1894\fBzfs_no_scrub_prefetch\fR (int)
1895.ad
1896.RS 12n
83426735 1897Set to disable block prefetching for scrubs.
29714574
TF
1898.sp
1899Use \fB1\fR for yes and \fB0\fR for no (default).
1900.RE
1901
29714574
TF
1902.sp
1903.ne 2
1904.na
1905\fBzfs_nocacheflush\fR (int)
1906.ad
1907.RS 12n
53b1f5ea
PS
1908Disable cache flush operations on disks when writing. Setting this will
1909cause pool corruption on power loss if a volatile out-of-order write cache
1910is enabled.
29714574
TF
1911.sp
1912Use \fB1\fR for yes and \fB0\fR for no (default).
1913.RE
1914
1915.sp
1916.ne 2
1917.na
1918\fBzfs_nopwrite_enabled\fR (int)
1919.ad
1920.RS 12n
1921Enable NOP writes
1922.sp
1923Use \fB1\fR for yes (default) and \fB0\fR to disable.
1924.RE
1925
66aca247
DB
1926.sp
1927.ne 2
1928.na
1929\fBzfs_dmu_offset_next_sync\fR (int)
1930.ad
1931.RS 12n
Enable forcing txg sync to find holes. When enabled this forces ZFS to act
like prior versions when SEEK_HOLE or SEEK_DATA flags are used; when a dnode
is dirty, txgs are synced so that the data can be found.
1936.sp
1937Use \fB1\fR for yes and \fB0\fR to disable (default).
1938.RE
1939
29714574
TF
1940.sp
1941.ne 2
1942.na
b738bc5a 1943\fBzfs_pd_bytes_max\fR (int)
29714574
TF
1944.ad
1945.RS 12n
83426735 1946The number of bytes which should be prefetched during a pool traversal
6146e17e 1947(eg: \fBzfs send\fR or other data crawling operations)
29714574 1948.sp
74aa2ba2 1949Default value: \fB52,428,800\fR.
29714574
TF
1950.RE
1951
bef78122
DQ
1952.sp
1953.ne 2
1954.na
1955\fBzfs_per_txg_dirty_frees_percent \fR (ulong)
1956.ad
1957.RS 12n
1958Tunable to control percentage of dirtied blocks from frees in one TXG.
1959After this threshold is crossed, additional dirty blocks from frees
1960wait until the next TXG.
1961A value of zero will disable this throttle.
1962.sp
Default value: \fB30\fR%.
1964.RE
1965
29714574
TF
1966.sp
1967.ne 2
1968.na
1969\fBzfs_prefetch_disable\fR (int)
1970.ad
1971.RS 12n
7f60329a
MA
1972This tunable disables predictive prefetch. Note that it leaves "prescient"
1973prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch,
1974prescient prefetch never issues i/os that end up not being needed, so it
1975can't hurt performance.
29714574
TF
1976.sp
1977Use \fB1\fR for yes and \fB0\fR for no (default).
1978.RE
1979
1980.sp
1981.ne 2
1982.na
1983\fBzfs_read_chunk_size\fR (long)
1984.ad
1985.RS 12n
1986Bytes to read per chunk
1987.sp
1988Default value: \fB1,048,576\fR.
1989.RE
1990
1991.sp
1992.ne 2
1993.na
1994\fBzfs_read_history\fR (int)
1995.ad
1996.RS 12n
1997Historical statistics for the last N reads will be available in
1998\fB/proc/spl/kstat/zfs/<pool>/reads\fR
29714574 1999.sp
83426735 2000Default value: \fB0\fR (no data is kept).
2001.RE
2002
2003.sp
2004.ne 2
2005.na
2006\fBzfs_read_history_hits\fR (int)
2007.ad
2008.RS 12n
2009Include cache hits in read history
2010.sp
2011Use \fB1\fR for yes and \fB0\fR for no (default).
2012.RE
2013
2014.sp
2015.ne 2
2016.na
2017\fBzfs_reconstruct_indirect_combinations_max\fR (int)
2018.ad
.RS 12n
2020If an indirect split block contains more than this many possible unique
2021combinations when being reconstructed, consider it too computationally
2022expensive to check them all. Instead, try at most
2023\fBzfs_reconstruct_indirect_combinations_max\fR randomly-selected
2024combinations each time the block is accessed. This allows all segment
2025copies to participate fairly in the reconstruction when all combinations
2026cannot be checked and prevents repeated use of one bad copy.
2027.sp
64bdf63f 2028Default value: \fB4096\fR.
2029.RE
2030
2031.sp
2032.ne 2
2033.na
2034\fBzfs_recover\fR (int)
2035.ad
2036.RS 12n
2037Set to attempt to recover from fatal errors. This should only be used as a
2038last resort, as it typically results in leaked space, or worse.
2039.sp
2040Use \fB1\fR for yes and \fB0\fR for no (default).
2041.RE
2042
2043.sp
2044.ne 2
2045.na
2046\fBzfs_removal_ignore_errors\fR (int)
2047.ad
2048.RS 12n
2049.sp
2050Ignore hard IO errors during device removal. When set, if a device encounters
2051a hard IO error during the removal process the removal will not be cancelled.
2052This can result in a normally recoverable block becoming permanently damaged
2053and is not recommended. This should only be used as a last resort when the
2054pool cannot be returned to a healthy state prior to removing the device.
2055.sp
2056Default value: \fB0\fR.
2057.RE
2058
2059.sp
2060.ne 2
2061.na
d4a72f23 2062\fBzfs_resilver_min_time_ms\fR (int)
2063.ad
2064.RS 12n
2065Resilvers are processed by the sync thread. While resilvering it will spend
2066at least this much time working on a resilver between txg flushes.
29714574 2067.sp
d4a72f23 2068Default value: \fB3,000\fR.
2069.RE
2070
2071.sp
2072.ne 2
2073.na
2074\fBzfs_scan_ignore_errors\fR (int)
2075.ad
2076.RS 12n
2077If set to a nonzero value, remove the DTL (dirty time list) upon
2078completion of a pool scan (scrub) even if there were unrepairable
2079errors. It is intended to be used during pool repair or recovery to
2080stop resilvering when the pool is next imported.
2081.sp
2082Default value: \fB0\fR.
2083.RE
2084
2085.sp
2086.ne 2
2087.na
d4a72f23 2088\fBzfs_scrub_min_time_ms\fR (int)
2089.ad
2090.RS 12n
2091Scrubs are processed by the sync thread. While scrubbing it will spend
2092at least this much time working on a scrub between txg flushes.
29714574 2093.sp
d4a72f23 2094Default value: \fB1,000\fR.
2095.RE
2096
2097.sp
2098.ne 2
2099.na
d4a72f23 2100\fBzfs_scan_checkpoint_intval\fR (int)
2101.ad
2102.RS 12n
To preserve progress across reboots, the sequential scan algorithm periodically
needs to stop metadata scanning and issue all the verification I/Os to disk.
The frequency of this flushing is determined by the
\fBzfs_scan_checkpoint_intval\fR tunable.
29714574 2107.sp
d4a72f23 2108Default value: \fB7200\fR seconds (every 2 hours).
2109.RE
2110
2111.sp
2112.ne 2
2113.na
d4a72f23 2114\fBzfs_scan_fill_weight\fR (int)
2115.ad
2116.RS 12n
2117This tunable affects how scrub and resilver I/O segments are ordered. A higher
2118number indicates that we care more about how filled in a segment is, while a
2119lower number indicates we care more about the size of the extent without
2120considering the gaps within a segment. This value is only tunable upon module
insertion. Changing the value afterwards will have no effect on scrub or
2122resilver performance.
29714574 2123.sp
d4a72f23 2124Default value: \fB3\fR.
2125.RE
2126
2127.sp
2128.ne 2
2129.na
d4a72f23 2130\fBzfs_scan_issue_strategy\fR (int)
2131.ad
2132.RS 12n
2133Determines the order that data will be verified while scrubbing or resilvering.
2134If set to \fB1\fR, data will be verified as sequentially as possible, given the
2135amount of memory reserved for scrubbing (see \fBzfs_scan_mem_lim_fact\fR). This
2136may improve scrub performance if the pool's data is very fragmented. If set to
2137\fB2\fR, the largest mostly-contiguous chunk of found data will be verified
2138first. By deferring scrubbing of small segments, we may later find adjacent data
2139to coalesce and increase the segment size. If set to \fB0\fR, zfs will use
2140strategy \fB1\fR during normal verification and strategy \fB2\fR while taking a
2141checkpoint.
29714574 2142.sp
2143Default value: \fB0\fR.
2144.RE
2145
2146.sp
2147.ne 2
2148.na
2149\fBzfs_scan_legacy\fR (int)
2150.ad
2151.RS 12n
2152A value of 0 indicates that scrubs and resilvers will gather metadata in
2153memory before issuing sequential I/O. A value of 1 indicates that the legacy
2154algorithm will be used where I/O is initiated as soon as it is discovered.
2155Changing this value to 0 will not affect scrubs or resilvers that are already
2156in progress.
2157.sp
2158Default value: \fB0\fR.
2159.RE
2160
2161.sp
2162.ne 2
2163.na
2164\fBzfs_scan_max_ext_gap\fR (int)
2165.ad
2166.RS 12n
2167Indicates the largest gap in bytes between scrub / resilver I/Os that will still
2168be considered sequential for sorting purposes. Changing this value will not
2169affect scrubs or resilvers that are already in progress.
2170.sp
2171Default value: \fB2097152 (2 MB)\fR.
2172.RE
2173
2174.sp
2175.ne 2
2176.na
2177\fBzfs_scan_mem_lim_fact\fR (int)
2178.ad
2179.RS 12n
2180Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
2181This tunable determines the hard limit for I/O sorting memory usage.
2182When the hard limit is reached we stop scanning metadata and start issuing
2183data verification I/O. This is done until we get below the soft limit.
2184.sp
2185Default value: \fB20\fR which is 5% of RAM (1/20).
2186.RE
2187
2188.sp
2189.ne 2
2190.na
2191\fBzfs_scan_mem_lim_soft_fact\fR (int)
2192.ad
2193.RS 12n
The fraction of the hard limit used to determine the soft limit for I/O sorting
by the sequential scan algorithm. When we cross this limit from below no action
is taken. When we cross this limit from above it is because we are issuing
verification I/O. In this case (unless the metadata scan is done) we stop
issuing verification I/O and start scanning metadata again until we get to the
hard limit.
2200.sp
2201Default value: \fB20\fR which is 5% of the hard limit (1/20).
2202.RE
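.sp
As a minimal sketch of how the hard and soft limits relate, both are simple
divisions; the RAM size below is an assumed example:
.sp
.nf
    # Hypothetical example: sequential-scan sorting memory limits.
    ram_bytes = 32 * 1024**3             # assumed 32 GiB of RAM
    zfs_scan_mem_lim_fact = 20           # hard limit is 1/20 of RAM
    zfs_scan_mem_lim_soft_fact = 20      # soft limit is 1/20 of the hard limit

    hard_limit = ram_bytes // zfs_scan_mem_lim_fact
    soft_limit = hard_limit // zfs_scan_mem_lim_soft_fact
    print(f"hard: {hard_limit} bytes, soft: {soft_limit} bytes")
.fi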
2203
2204.sp
2205.ne 2
2206.na
2207\fBzfs_scan_vdev_limit\fR (int)
2208.ad
2209.RS 12n
2210Maximum amount of data that can be concurrently issued at once for scrubs and
2211resilvers per leaf device, given in bytes.
2212.sp
2213Default value: \fB41943040\fR.
2214.RE
2215
2216.sp
2217.ne 2
2218.na
2219\fBzfs_send_corrupt_data\fR (int)
2220.ad
2221.RS 12n
83426735 2222Allow sending of corrupt data (ignore read/checksum errors when sending data)
2223.sp
2224Use \fB1\fR for yes and \fB0\fR for no (default).
2225.RE
2226
2227.sp
2228.ne 2
2229.na
2230\fBzfs_send_queue_length\fR (int)
2231.ad
2232.RS 12n
2233The maximum number of bytes allowed in the \fBzfs send\fR queue. This value
2234must be at least twice the maximum block size in use.
2235.sp
2236Default value: \fB16,777,216\fR.
2237.RE
2238
2239.sp
2240.ne 2
2241.na
2242\fBzfs_recv_queue_length\fR (int)
2243.ad
2244.RS 12n
2245.sp
2246The maximum number of bytes allowed in the \fBzfs receive\fR queue. This value
2247must be at least twice the maximum block size in use.
2248.sp
2249Default value: \fB16,777,216\fR.
2250.RE
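.sp
A minimal sketch of the "at least twice the maximum block size" constraint
shared by \fBzfs_send_queue_length\fR and \fBzfs_recv_queue_length\fR; the
block size below is an assumed example:
.sp
.nf
    # Hypothetical example: check a queue length against the constraint.
    max_block_size = 1024 * 1024         # assumed 1M maximum block size
    zfs_send_queue_length = 16777216     # default from this entry

    assert zfs_send_queue_length >= 2 * max_block_size, \
        "queue length must be at least twice the maximum block size"
.fi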
2251
2252.sp
2253.ne 2
2254.na
2255\fBzfs_sync_pass_deferred_free\fR (int)
2256.ad
2257.RS 12n
83426735 2258Flushing of data to disk is done in passes. Defer frees starting in this pass
2259.sp
2260Default value: \fB2\fR.
2261.RE
2262
2263.sp
2264.ne 2
2265.na
2266\fBzfs_spa_discard_memory_limit\fR (int)
2267.ad
2268.RS 12n
2269Maximum memory used for prefetching a checkpoint's space map on each
2270vdev while discarding the checkpoint.
2271.sp
2272Default value: \fB16,777,216\fR.
2273.RE
2274
2275.sp
2276.ne 2
2277.na
2278\fBzfs_sync_pass_dont_compress\fR (int)
2279.ad
2280.RS 12n
2281Don't compress starting in this pass
2282.sp
2283Default value: \fB5\fR.
2284.RE
2285
2286.sp
2287.ne 2
2288.na
2289\fBzfs_sync_pass_rewrite\fR (int)
2290.ad
2291.RS 12n
83426735 2292Rewrite new block pointers starting in this pass
2293.sp
2294Default value: \fB2\fR.
2295.RE
2296
2297.sp
2298.ne 2
2299.na
2300\fBzfs_sync_taskq_batch_pct\fR (int)
2301.ad
2302.RS 12n
2303This controls the number of threads used by the dp_sync_taskq. The default
2304value of 75% will create a maximum of one thread per cpu.
2305.sp
be54a13c 2306Default value: \fB75\fR%.
2307.RE
2308
2309.sp
2310.ne 2
2311.na
2312\fBzfs_txg_history\fR (int)
2313.ad
2314.RS 12n
2315Historical statistics for the last N txgs will be available in
2316\fB/proc/spl/kstat/zfs/<pool>/txgs\fR
29714574 2317.sp
ca85d690 2318Default value: \fB0\fR.
2319.RE
2320
2321.sp
2322.ne 2
2323.na
2324\fBzfs_txg_timeout\fR (int)
2325.ad
2326.RS 12n
83426735 2327Flush dirty data to disk at least every N seconds (maximum txg duration)
2328.sp
2329Default value: \fB5\fR.
2330.RE
2331
2332.sp
2333.ne 2
2334.na
2335\fBzfs_vdev_aggregation_limit\fR (int)
2336.ad
2337.RS 12n
2338Max vdev I/O aggregation size
2339.sp
2340Default value: \fB131,072\fR.
2341.RE
2342
2343.sp
2344.ne 2
2345.na
2346\fBzfs_vdev_cache_bshift\fR (int)
2347.ad
2348.RS 12n
Shift size to inflate reads to
2350.sp
83426735 2351Default value: \fB16\fR (effectively 65536).
2352.RE
2353
2354.sp
2355.ne 2
2356.na
2357\fBzfs_vdev_cache_max\fR (int)
2358.ad
2359.RS 12n
ca85d690 2360Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
2361size (default 64k).
2362.sp
2363Default value: \fB16384\fR.
2364.RE
2365
2366.sp
2367.ne 2
2368.na
2369\fBzfs_vdev_cache_size\fR (int)
2370.ad
2371.RS 12n
2372Total size of the per-disk cache in bytes.
2373.sp
2374Currently this feature is disabled as it has been found to not be helpful
2375for performance and in some cases harmful.
2376.sp
2377Default value: \fB0\fR.
2378.RE
2379
2380.sp
2381.ne 2
2382.na
9f500936 2383\fBzfs_vdev_mirror_rotating_inc\fR (int)
2384.ad
2385.RS 12n
A number by which the balancing algorithm increments the load calculation when
an I/O immediately follows its predecessor on rotational vdevs, for the purpose
of selecting the least busy mirror member.
29714574 2390.sp
9f500936 2391Default value: \fB0\fR.
2392.RE
2393
2394.sp
2395.ne 2
2396.na
2397\fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
2398.ad
2399.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this window that do not immediately follow the previous I/O are incremented by
half of this value.
2405.sp
2406Default value: \fB5\fR.
2407.RE
2408
2409.sp
2410.ne 2
2411.na
2412\fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
2413.ad
2414.RS 12n
The maximum distance from the last queued I/O within which the balancing
algorithm considers an I/O to have locality.
See the section "ZFS I/O SCHEDULER".
2418.sp
2419Default value: \fB1048576\fR.
2420.RE
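.sp
As an illustrative sketch only (the real selection logic lives in the mirror
vdev code), an I/O can be treated as having locality when its offset falls
within this distance of the last queued I/O:
.sp
.nf
    # Hypothetical sketch: locality test based on the distance threshold.
    zfs_vdev_mirror_rotating_seek_offset = 1048576   # default, in bytes

    def has_locality(last_queued_offset, new_offset):
        return abs(new_offset - last_queued_offset) <= \
            zfs_vdev_mirror_rotating_seek_offset

    print(has_locality(10 * 1024**2, 10 * 1024**2 + 4096))   # True
.fi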
2421
2422.sp
2423.ne 2
2424.na
2425\fBzfs_vdev_mirror_non_rotating_inc\fR (int)
2426.ad
2427.RS 12n
2428A number by which the balancing algorithm increments the load calculation for
2429the purpose of selecting the least busy mirror member on non-rotational vdevs
2430when I/Os do not immediately follow one another.
2431.sp
2432Default value: \fB0\fR.
2433.RE
2434
2435.sp
2436.ne 2
2437.na
2438\fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
2439.ad
2440.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
this window that do not immediately follow the previous I/O are incremented by
half of this value.
2446.sp
2447Default value: \fB1\fR.
2448.RE
2449
2450.sp
2451.ne 2
2452.na
2453\fBzfs_vdev_read_gap_limit\fR (int)
2454.ad
2455.RS 12n
2456Aggregate read I/O operations if the gap on-disk between them is within this
2457threshold.
2458.sp
2459Default value: \fB32,768\fR.
2460.RE
2461
2462.sp
2463.ne 2
2464.na
2465\fBzfs_vdev_scheduler\fR (charp)
2466.ad
2467.RS 12n
Set the Linux I/O scheduler on whole disk vdevs to this scheduler. Valid options
are \fBnoop\fR, \fBcfq\fR, \fBbfq\fR and \fBdeadline\fR.
2470.sp
2471Default value: \fBnoop\fR.
2472.RE
2473
2474.sp
2475.ne 2
2476.na
2477\fBzfs_vdev_write_gap_limit\fR (int)
2478.ad
2479.RS 12n
2480Aggregate write I/O over gap
2481.sp
2482Default value: \fB4,096\fR.
2483.RE
2484
2485.sp
2486.ne 2
2487.na
2488\fBzfs_vdev_raidz_impl\fR (string)
2489.ad
2490.RS 12n
Parameter for selecting the raidz parity implementation to use.
2492
2493Options marked (always) below may be selected on module load as they are
2494supported on all systems.
2495The remaining options may only be set after the module is loaded, as they
2496are available only if the implementations are compiled in and supported
2497on the running system.
2498
2499Once the module is loaded, the content of
2500/sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
2501with the currently selected one enclosed in [].
2502Possible options are:
2503 fastest - (always) implementation selected using built-in benchmark
2504 original - (always) original raidz implementation
2505 scalar - (always) scalar raidz implementation
2506 sse2 - implementation using SSE2 instruction set (64bit x86 only)
2507 ssse3 - implementation using SSSE3 instruction set (64bit x86 only)
ab9f4b0b 2508 avx2 - implementation using AVX2 instruction set (64bit x86 only)
2509 avx512f - implementation using AVX512F instruction set (64bit x86 only)
2510 avx512bw - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only)
2511 aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only)
2512 aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only)
2513.sp
2514Default value: \fBfastest\fR.
2515.RE
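.sp
Because the selected implementation is shown enclosed in [] in the parameter
file mentioned above, it can be inspected at runtime. A small Python sketch,
assuming the module is loaded:
.sp
.nf
    # Sketch: list raidz implementations and report the selected one.
    path = "/sys/module/zfs/parameters/zfs_vdev_raidz_impl"
    with open(path) as f:
        options = f.read().split()
    selected = next((o.strip("[]") for o in options if o.startswith("[")), None)
    print("available:", [o.strip("[]") for o in options])
    print("selected: ", selected)
.fi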
2516
2517.sp
2518.ne 2
2519.na
2520\fBzfs_zevent_cols\fR (int)
2521.ad
2522.RS 12n
When zevents are logged to the console, use this value as the word wrap width.
2524.sp
2525Default value: \fB80\fR.
2526.RE
2527
2528.sp
2529.ne 2
2530.na
2531\fBzfs_zevent_console\fR (int)
2532.ad
2533.RS 12n
2534Log events to the console
2535.sp
2536Use \fB1\fR for yes and \fB0\fR for no (default).
2537.RE
2538
2539.sp
2540.ne 2
2541.na
2542\fBzfs_zevent_len_max\fR (int)
2543.ad
2544.RS 12n
2545Max event queue length. A value of 0 will result in a calculated value which
2546increases with the number of CPUs in the system (minimum 64 events). Events
2547in the queue can be viewed with the \fBzpool events\fR command.
2548.sp
2549Default value: \fB0\fR.
2550.RE
2551
2552.sp
2553.ne 2
2554.na
2555\fBzfs_zil_clean_taskq_maxalloc\fR (int)
2556.ad
2557.RS 12n
2558The maximum number of taskq entries that are allowed to be cached. When this
limit is exceeded, transaction records (itxs) will be cleaned synchronously.
2560.sp
2561Default value: \fB1048576\fR.
2562.RE
2563
2564.sp
2565.ne 2
2566.na
2567\fBzfs_zil_clean_taskq_minalloc\fR (int)
2568.ad
2569.RS 12n
2570The number of taskq entries that are pre-populated when the taskq is first
2571created and are immediately available for use.
2572.sp
2573Default value: \fB1024\fR.
2574.RE
2575
2576.sp
2577.ne 2
2578.na
2579\fBzfs_zil_clean_taskq_nthr_pct\fR (int)
2580.ad
2581.RS 12n
2582This controls the number of threads used by the dp_zil_clean_taskq. The default
2583value of 100% will create a maximum of one thread per cpu.
2584.sp
be54a13c 2585Default value: \fB100\fR%.
2586.RE
2587
2588.sp
2589.ne 2
2590.na
2591\fBzil_nocacheflush\fR (int)
2592.ad
2593.RS 12n
2594Disable the cache flush commands that are normally sent to the disk(s) by
2595the ZIL after an LWB write has completed. Setting this will cause ZIL
2596corruption on power loss if a volatile out-of-order write cache is enabled.
2597.sp
2598Use \fB1\fR for yes and \fB0\fR for no (default).
2599.RE
2600
2601.sp
2602.ne 2
2603.na
2604\fBzil_replay_disable\fR (int)
2605.ad
2606.RS 12n
Disable intent logging replay. Replay can be disabled to recover from a
corrupted ZIL.
2609.sp
2610Use \fB1\fR for yes and \fB0\fR for no (default).
2611.RE
2612
2613.sp
2614.ne 2
2615.na
1b7c1e5c 2616\fBzil_slog_bulk\fR (ulong)
2617.ad
2618.RS 12n
Limit SLOG write size per commit executed with synchronous priority.
Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
29714574 2622.sp
1b7c1e5c 2623Default value: \fB786,432\fR.
2624.RE
2625
2626.sp
2627.ne 2
2628.na
2629\fBzio_decompress_fail_fraction\fR (int)
2630.ad
2631.RS 12n
2632If non-zero, this value represents the denominator of the probability that zfs
2633should induce a decompression failure. For instance, for a 5% decompression
2634failure rate, this value should be set to 20.
2635.sp
2636Default value: \fB0\fR.
2637.RE
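.sp
A minimal sketch of how the denominator maps to a failure probability,
repeating the 5% example above:
.sp
.nf
    # Sketch: convert the denominator into an induced failure rate.
    zio_decompress_fail_fraction = 20    # from the example above
    rate = 1 / zio_decompress_fail_fraction if zio_decompress_fail_fraction else 0
    print(f"induced decompression failure rate: {rate:.0%}")   # 5%
.fi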
2638
2639.sp
2640.ne 2
2641.na
ad796b8a 2642\fBzio_slow_io_ms\fR (int)
2643.ad
2644.RS 12n
When an I/O operation takes more than \fBzio_slow_io_ms\fR milliseconds to
complete, it is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow
I/O counters can be seen with "zpool status -s".
2648
2649.sp
2650Default value: \fB30,000\fR.
2651.RE
2652
2653.sp
2654.ne 2
2655.na
2656\fBzio_dva_throttle_enabled\fR (int)
2657.ad
2658.RS 12n
ad796b8a 2659Throttle block allocations in the I/O pipeline. This allows for
3dfb57a3 2660dynamic allocation distribution when devices are imbalanced.
2661When enabled, the maximum number of pending allocations per top-level vdev
2662is limited by \fBzfs_vdev_queue_depth_pct\fR.
3dfb57a3 2663.sp
27f2b90d 2664Default value: \fB1\fR.
2665.RE
2666
2667.sp
2668.ne 2
2669.na
2670\fBzio_requeue_io_start_cut_in_line\fR (int)
2671.ad
2672.RS 12n
2673Prioritize requeued I/O
2674.sp
2675Default value: \fB0\fR.
2676.RE
2677
2678.sp
2679.ne 2
2680.na
2681\fBzio_taskq_batch_pct\fR (uint)
2682.ad
2683.RS 12n
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
for I/O. These workers are responsible for I/O work such as compression and
checksum calculations. A fractional number of CPUs will be rounded down.
2687.sp
2688The default value of 75 was chosen to avoid using all CPUs which can result in
2689latency issues and inconsistent application performance, especially when high
2690compression is enabled.
2691.sp
2692Default value: \fB75\fR.
2693.RE
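.sp
A small sketch of the rounding behavior described above; the CPU count is an
assumed example:
.sp
.nf
    # Sketch: worker threads implied by the percentage, rounded down.
    online_cpus = 6                      # assumed example
    zio_taskq_batch_pct = 75             # default from this entry

    workers = (online_cpus * zio_taskq_batch_pct) // 100
    print(workers)                       # 4 (4.5 rounded down)
.fi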
2694
2695.sp
2696.ne 2
2697.na
2698\fBzvol_inhibit_dev\fR (uint)
2699.ad
2700.RS 12n
2701Do not create zvol device nodes. This may slightly improve startup time on
2702systems with a very large number of zvols.
2703.sp
2704Use \fB1\fR for yes and \fB0\fR for no (default).
2705.RE
2706
2707.sp
2708.ne 2
2709.na
2710\fBzvol_major\fR (uint)
2711.ad
2712.RS 12n
83426735 2713Major number for zvol block devices
2714.sp
2715Default value: \fB230\fR.
2716.RE
2717
2718.sp
2719.ne 2
2720.na
2721\fBzvol_max_discard_blocks\fR (ulong)
2722.ad
2723.RS 12n
2724Discard (aka TRIM) operations done on zvols will be done in batches of this
2725many blocks, where block size is determined by the \fBvolblocksize\fR property
2726of a zvol.
2727.sp
2728Default value: \fB16,384\fR.
2729.RE
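.sp
A sketch of the batch-size arithmetic; the \fBvolblocksize\fR below is an
assumed example:
.sp
.nf
    # Sketch: bytes discarded per batch = block count * volblocksize.
    zvol_max_discard_blocks = 16384      # default from this entry
    volblocksize = 8 * 1024              # assumed 8K volblocksize

    batch_bytes = zvol_max_discard_blocks * volblocksize
    print(f"{batch_bytes} bytes ({batch_bytes // 1024**2} MiB) per batch")
.fi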
2730
2731.sp
2732.ne 2
2733.na
2734\fBzvol_prefetch_bytes\fR (uint)
2735.ad
2736.RS 12n
2737When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
2738from the start and end of the volume. Prefetching these regions
2739of the volume is desirable because they are likely to be accessed
2740immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
2741table.
2742.sp
2743Default value: \fB131,072\fR.
2744.RE
2745
2746.sp
2747.ne 2
2748.na
2749\fBzvol_request_sync\fR (uint)
2750.ad
2751.RS 12n
When processing I/O requests for a zvol, submit them synchronously. This
effectively limits the queue depth to 1 for each I/O submitter. When set
to 0, requests are handled asynchronously by a thread pool. The number of
requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
2756.sp
8fa5250f 2757Default value: \fB0\fR.
2758.RE
2759
2760.sp
2761.ne 2
2762.na
2763\fBzvol_threads\fR (uint)
2764.ad
2765.RS 12n
2766Max number of threads which can handle zvol I/O requests concurrently.
2767.sp
2768Default value: \fB32\fR.
2769.RE
2770
cf8738d8 2771.sp
2772.ne 2
2773.na
2774\fBzvol_volmode\fR (uint)
2775.ad
2776.RS 12n
Defines the behaviour of zvol block devices when \fBvolmode\fR is set to \fBdefault\fR.
Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none).
2779.sp
2780Default value: \fB1\fR.
2781.RE
2782
39ccc909 2783.sp
2784.ne 2
2785.na
2786\fBzfs_qat_disable\fR (int)
2787.ad
2788.RS 12n
This tunable disables qat hardware acceleration for gzip compression and
AES-GCM encryption. It is available only if qat acceleration is compiled in
and the qat driver is present.
39ccc909 2792.sp
2793Use \fB1\fR for yes and \fB0\fR for no (default).
2794.RE
2795
2796.SH ZFS I/O SCHEDULER
2797ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
2798The I/O scheduler determines when and in what order those operations are
2799issued. The I/O scheduler divides operations into five I/O classes
2800prioritized in the following order: sync read, sync write, async read,
2801async write, and scrub/resilver. Each queue defines the minimum and
2802maximum number of concurrent operations that may be issued to the
2803device. In addition, the device has an aggregate maximum,
2804\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
2805must not exceed the aggregate maximum. If the sum of the per-queue
2806maximums exceeds the aggregate maximum, then the number of active I/Os
2807may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
2808be issued regardless of whether all per-queue minimums have been met.
2809.sp
2810For many physical devices, throughput increases with the number of
2811concurrent operations, but latency typically suffers. Further, physical
2812devices typically have a limit at which more concurrent operations have no
2813effect on throughput or can actually cause it to decrease.
2814.sp
2815The scheduler selects the next operation to issue by first looking for an
2816I/O class whose minimum has not been satisfied. Once all are satisfied and
2817the aggregate maximum has not been hit, the scheduler looks for classes
2818whose maximum has not been satisfied. Iteration through the I/O classes is
2819done in the order specified above. No further operations are issued if the
2820aggregate maximum number of concurrent operations has been hit or if there
2821are no operations queued for an I/O class that has not hit its maximum.
2822Every time an I/O is queued or an operation completes, the I/O scheduler
2823looks for new operations to issue.
2824.sp
2825In general, smaller max_active's will lead to lower latency of synchronous
2826operations. Larger max_active's may lead to higher overall throughput,
2827depending on underlying storage.
2828.sp
2829The ratio of the queues' max_actives determines the balance of performance
2830between reads, writes, and scrubs. E.g., increasing
2831\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
2832more quickly, but reads and writes to have higher latency and lower throughput.
2833.sp
2834All I/O classes have a fixed maximum number of outstanding operations
2835except for the async write class. Asynchronous writes represent the data
2836that is committed to stable storage during the syncing stage for
2837transaction groups. Transaction groups enter the syncing state
2838periodically so the number of queued async writes will quickly burst up
2839and then bleed down to zero. Rather than servicing them as quickly as
2840possible, the I/O scheduler changes the maximum number of active async
2841write I/Os according to the amount of dirty data in the pool. Since
2842both throughput and latency typically increase with the number of
2843concurrent operations issued to physical devices, reducing the
2844burstiness in the number of concurrent operations also stabilizes the
2845response time of operations from other -- and in particular synchronous
2846-- queues. In broad strokes, the I/O scheduler will issue more
2847concurrent operations from the async write queue as there's more dirty
2848data in the pool.
2849.sp
2850Async Writes
2851.sp
2852The number of concurrent operations issued for the async write I/O class
2853follows a piece-wise linear function defined by a few adjustable points.
2854.nf
2855
2856 | o---------| <-- zfs_vdev_async_write_max_active
2857 ^ | /^ |
2858 | | / | |
2859active | / | |
2860 I/O | / | |
2861count | / | |
2862 | / | |
2863 |-------o | | <-- zfs_vdev_async_write_min_active
2864 0|_______^______|_________|
2865 0% | | 100% of zfs_dirty_data_max
2866 | |
2867 | `-- zfs_vdev_async_write_active_max_dirty_percent
2868 `--------- zfs_vdev_async_write_active_min_dirty_percent
2869
2870.fi
2871Until the amount of dirty data exceeds a minimum percentage of the dirty
2872data allowed in the pool, the I/O scheduler will limit the number of
2873concurrent operations to the minimum. As that threshold is crossed, the
2874number of concurrent operations issued increases linearly to the maximum at
2875the specified maximum percentage of the dirty data allowed in the pool.
2876.sp
2877Ideally, the amount of dirty data on a busy pool will stay in the sloped
2878part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
2879and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
2880maximum percentage, this indicates that the rate of incoming data is
2881greater than the rate that the backend storage can handle. In this case, we
2882must further throttle incoming writes, as described in the next section.
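.sp
The piece-wise linear function sketched above can be written out directly.
This is an illustrative model only, not the kernel implementation, and the
numeric defaults below are assumptions rather than values documented here:
.sp
.nf
    # Illustrative model of the async write active-I/O curve.
    def async_write_active(dirty_pct,
                           min_active=2,        # zfs_vdev_async_write_min_active (assumed)
                           max_active=10,       # zfs_vdev_async_write_max_active (assumed)
                           min_dirty_pct=30,    # ..._active_min_dirty_percent (assumed)
                           max_dirty_pct=60):   # ..._active_max_dirty_percent (assumed)
        if dirty_pct <= min_dirty_pct:
            return min_active
        if dirty_pct >= max_dirty_pct:
            return max_active
        slope = (max_active - min_active) / (max_dirty_pct - min_dirty_pct)
        return round(min_active + slope * (dirty_pct - min_dirty_pct))

    for pct in (10, 30, 45, 60, 90):
        print(pct, async_write_active(pct))
.fi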
2883
2884.SH ZFS TRANSACTION DELAY
2885We delay transactions when we've determined that the backend storage
2886isn't able to accommodate the rate of incoming writes.
2887.sp
2888If there is already a transaction waiting, we delay relative to when
2889that transaction will finish waiting. This way the calculated delay time
2890is independent of the number of threads concurrently executing
2891transactions.
2892.sp
2893If we are the only waiter, wait relative to when the transaction
2894started, rather than the current time. This credits the transaction for
2895"time already served", e.g. reading indirect blocks.
2896.sp
2897The minimum time for a transaction to take is calculated as:
2898.nf
2899 min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
2900 min_time is then capped at 100 milliseconds.
2901.fi
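.sp
The formula can be evaluated directly. In this sketch "min" is the amount of
dirty data at which delay begins and "max" is \fBzfs_dirty_data_max\fR; the
starting percentage and the interpretation of \fBzfs_delay_scale\fR in
nanoseconds are assumptions for illustration:
.sp
.nf
    # Sketch: evaluate min_time for a given amount of dirty data.
    def tx_delay_ns(dirty, dirty_min, dirty_max, zfs_delay_scale=500000):
        if dirty <= dirty_min:
            return 0.0
        min_time = zfs_delay_scale * (dirty - dirty_min) / (dirty_max - dirty)
        return min(min_time, 100 * 1000 * 1000)    # cap at 100 milliseconds

    dirty_max = 100                                # work in percent for clarity
    dirty_min = 60                                 # assumed: delay starts at 60% dirty
    print(tx_delay_ns(80, dirty_min, dirty_max))   # 500000.0 ns, i.e. 500us
.fi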
2902.sp
2903The delay has two degrees of freedom that can be adjusted via tunables. The
2904percentage of dirty data at which we start to delay is defined by
2905\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
2906\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
2907delay after writing at full speed has failed to keep up with the incoming write
2908rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
2909this variable determines the amount of delay at the midpoint of the curve.
2910.sp
2911.nf
2912delay
2913 10ms +-------------------------------------------------------------*+
2914 | *|
2915 9ms + *+
2916 | *|
2917 8ms + *+
2918 | * |
2919 7ms + * +
2920 | * |
2921 6ms + * +
2922 | * |
2923 5ms + * +
2924 | * |
2925 4ms + * +
2926 | * |
2927 3ms + * +
2928 | * |
2929 2ms + (midpoint) * +
2930 | | ** |
2931 1ms + v *** +
2932 | zfs_delay_scale ----------> ******** |
2933 0 +-------------------------------------*********----------------+
2934 0% <- zfs_dirty_data_max -> 100%
2935.fi
2936.sp
2937Note that since the delay is added to the outstanding time remaining on the
2938most recent transaction, the delay is effectively the inverse of IOPS.
2939Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
2940was chosen such that small changes in the amount of accumulated dirty data
2941in the first 3/4 of the curve yield relatively small differences in the
2942amount of delay.
2943.sp
2944The effects can be easier to understand when the amount of delay is
2945represented on a log scale:
2946.sp
2947.nf
2948delay
2949100ms +-------------------------------------------------------------++
2950 + +
2951 | |
2952 + *+
2953 10ms + *+
2954 + ** +
2955 | (midpoint) ** |
2956 + | ** +
2957 1ms + v **** +
2958 + zfs_delay_scale ----------> ***** +
2959 | **** |
2960 + **** +
2961100us + ** +
2962 + * +
2963 | * |
2964 + * +
2965 10us + * +
2966 + +
2967 | |
2968 + +
2969 +--------------------------------------------------------------+
2970 0% <- zfs_dirty_data_max -> 100%
2971.fi
2972.sp
2973Note here that only as the amount of dirty data approaches its limit does
2974the delay start to increase rapidly. The goal of a properly tuned system
2975should be to keep the amount of dirty data out of that range by first
2976ensuring that the appropriate limits are set for the I/O scheduler to reach
2977optimal throughput on the backend storage, and then by changing the value
2978of \fBzfs_delay_scale\fR to increase the steepness of the curve.