.\"
.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
.\" Copyright (c) 2019 Datto Inc.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.\"
.Dd July 21, 2023
.Dt ZFS 4
.Os
.
.Sh NAME
.Nm zfs
.Nd tuning of the ZFS kernel module
.
.Sh DESCRIPTION
The ZFS module supports these parameters:
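.Pp
On Linux, these parameters are exposed under
.Pa /sys/module/zfs/parameters ,
where the current value of a parameter can be read and, for parameters that
are changeable at runtime, a new value written.
Persistent settings are typically applied at module load time from a
.Xr modprobe.d 5
file.
The snippet below is only an illustration of that mechanism,
using an arbitrary example value:
.Bd -literal -compact
# read a parameter at runtime
cat /sys/module/zfs/parameters/zfs_arc_max

# change a runtime-changeable parameter (example value: 8 GiB)
echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

# /etc/modprobe.d/zfs.conf: apply the setting at every module load
options zfs zfs_arc_max=8589934592
.Ed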
.Bl -tag -width Ds
.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the dbuf cache.
The target size is the smaller of this value and
.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
of the target ARC size.
The behavior of the dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the metadata dbuf cache.
The target size is the smaller of this value and
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
The behavior of the metadata dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage over
.Sy dbuf_cache_max_bytes
when dbufs must be evicted directly.
.
.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage below
.Sy dbuf_cache_max_bytes
when the evict thread stops evicting dbufs.
.
.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
Set the size of the dbuf cache
.Pq Sy dbuf_cache_max_bytes
to a log2 fraction of the target ARC size.
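.Pp
For example (an illustrative calculation only): with the default shift of
.Sy 5
and a target ARC size of 4 GiB, the dbuf cache targets
4 GiB / 2^5 = 128 MiB.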
.
.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
Set the size of the dbuf metadata cache
.Pq Sy dbuf_metadata_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
Set the size of the mutex array for the dbuf cache.
When set to
.Sy 0
the array is dynamically sized based on total system memory.
.
.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
Dnode slots allocated in a single operation, as a power of 2.
The default value minimizes lock contention for the bulk operation performed.
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limit the amount of data that can be prefetched with one call to this amount,
in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy ignore_hole_birth Pq int
Alias for
.Sy send_holes_without_birth_time .
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applicable in related situations.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Max write bytes per interval.
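.Pp
For example, on Linux the per-interval write limit could be raised at runtime
(illustrative value of 32 MiB) with:
.Bd -literal -compact
echo 33554432 > /sys/module/zfs/parameters/l2arc_write_max
.Ed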
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy metaslab_force_ganging_pct Ns = Ns Sy 3 Ns % Pq uint
For blocks that could be forced to be a gang block (due to
.Sy metaslab_force_ganging ) ,
force this percentage of them to be gang blocks.
.
.It Sy zfs_ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
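For example, an ashift of 12 corresponds to 2^12 = 4096-byte sectors,
while an ashift of 9 corresponds to 512-byte sectors
(an illustrative reading of the value, not an additional tunable).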
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_preload_limit Ns = Ns Sy 10 Pq uint
Maximum number of metaslabs per group to preload.
.
.It Sy metaslab_preload_pct Ns = Ns Sy 50 Pq uint
Percentage of CPUs to use for the metaslab preload taskq.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
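.Pp
As an illustrative calculation only: with the default shift of
.Sy 5 ,
a 1 TiB pool reserves about 1 TiB / 2^5 = 32 GiB as slop space.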
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A micro ZAP is upgraded to a fat ZAP once it grows beyond the specified size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since last time haven't completed in time to satisfy the demand request, i.e.
the prefetch depth didn't cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Controls whether the ARC may use scatter/gather lists;
when unset, all allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers, based on
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC meta buffers that can be consumed by dnodes.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another
sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a nonzero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
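.Pp
For example (an illustrative calculation only): on a 16 GiB system left at the
default of
.Sy 0 ,
the limit would be 8 GiB under Linux, and under
.Fx
the larger of 15 GiB and 10 GiB, i.e. 15 GiB.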
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing effect
of ghost data hits on target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
A started reclamation process continues until the ARC size returns below the
target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
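.Pp
As an illustrative calculation only: with an ARC target size of 4 GiB and the
default shift of
.Sy 8 ,
reclamation starts once the ARC exceeds the target by
(4 GiB >> 8) / 2 = 8 MiB (about 0.2%),
and new allocations block once it exceeds the target by
(4 GiB >> 8) \(mu 1.5 = 24 MiB (about 0.6%).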
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This
only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 10 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
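.Pp
For example, the log can be inspected and then cleared with:
.Bd -literal -compact
cat /proc/spl/kstat/zfs/dbgmsg
echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed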
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait ,
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
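.Pp
For example, on Linux the failure mode could be switched at runtime
(illustrative only) with:
.Bd -literal -compact
echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
.Ed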
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
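.Pp
As an illustrative calculation only: for a pool expected to sustain about
2000 write operations per second, this suggests
1 billion / 2000 = 500000,
which is the default.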
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations) to this
many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active, which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log
blocks quicker as they become obsolete faster, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation in which we are flushing all our metaslabs every
TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits the maximum number of unflushed per-TXG spacemap logs
that need to be read after unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
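.Pp
As an illustrative calculation only: on a 64-bit system with 64 GiB of RAM,
.Sy physical_ram/10
is 6.4 GiB, but the default
.Sy zfs_dirty_data_max_max
cap of
.Sy min(physical_ram/4, 4GiB)
works out to 4 GiB, so the effective default is 4 GiB.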
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy min(physical_ram/4, 4GiB) ,
or
.Sy min(physical_ram/4, 1GiB)
for 32-bit systems.
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_wrlog_data_max Ns = Pq int
The upper limit of write-transaction ZIL log data size in bytes.
Write operations are throttled when approaching the limit until log data is
cleared out after transaction group sync.
Because of some overhead, it should be set at least 2 times the size of
.Sy zfs_dirty_data_max
.No to prevent harming normal write throughput .
It also should be smaller than the size of the slog device if slog is present.
.Pp
Defaults to
.Sy zfs_dirty_data_max*2 .
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string
Select a BLAKE3 implementation.
.Pp
Supported selectors are:
.Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 .
All except
.Sy cycle , fastest No and Sy generic
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of BLAKE3 are available, the
.Sy fastest
will be chosen using a micro benchmark.
You can see the benchmark results by reading this kstat file:
.Pa /proc/spl/kstat/zfs/chksum_bench .
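.Pp
For example, the per-implementation benchmark results can be listed with:
.Bd -literal -compact
cat /proc/spl/kstat/zfs/chksum_bench
.Ed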
1153.
2d815d95 1154.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
ba5ad9a4 1155Enable/disable the processing of the free_bpobj object.
2d815d95 1156.
ab8d9c17 1157.It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64
2d815d95
AZ
1158Maximum number of blocks freed in a single TXG.
1159.
ab8d9c17 1160.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64
2d815d95
AZ
1161Maximum number of dedup blocks freed in a single TXG.
1162.
fdc2d303 1163.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint
2d815d95
AZ
1164Maximum asynchronous read I/O operations active to each device.
1165.No See Sx ZFS I/O SCHEDULER .
1166.
fdc2d303 1167.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1168Minimum asynchronous read I/O operation active to each device.
1169.No See Sx ZFS I/O SCHEDULER .
1170.
fdc2d303 1171.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
2d815d95
AZ
1172When the pool has more than this much dirty data, use
1173.Sy zfs_vdev_async_write_max_active
1174to limit active async writes.
1175If the dirty data is between the minimum and maximum,
1176the active I/O limit is linearly interpolated.
1177.No See Sx ZFS I/O SCHEDULER .
1178.
fdc2d303 1179.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint
2d815d95
AZ
1180When the pool has less than this much dirty data, use
1181.Sy zfs_vdev_async_write_min_active
1182to limit active async writes.
1183If the dirty data is between the minimum and maximum,
1184the active I/O limit is linearly
1185interpolated.
1186.No See Sx ZFS I/O SCHEDULER .
1187.
077fd55e 1188.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint
2d815d95
AZ
1189Maximum asynchronous write I/O operations active to each device.
1190.No See Sx ZFS I/O SCHEDULER .
1191.
fdc2d303 1192.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1193Minimum asynchronous write I/O operations active to each device.
1194.No See Sx ZFS I/O SCHEDULER .
1195.Pp
06226b59 1196Lower values are associated with better latency on rotational media but poorer
2d815d95
AZ
1197resilver performance.
1198The default value of
1199.Sy 2
1200was chosen as a compromise.
1201A value of
1202.Sy 3
1203has been shown to improve resilver performance further at a cost of
06226b59 1204further increasing latency.
2d815d95 1205.
fdc2d303 1206.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1207Maximum initializing I/O operations active to each device.
1208.No See Sx ZFS I/O SCHEDULER .
1209.
fdc2d303 1210.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1211Minimum initializing I/O operations active to each device.
1212.No See Sx ZFS I/O SCHEDULER .
1213.
fdc2d303 1214.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint
2d815d95
AZ
1215The maximum number of I/O operations active to each device.
1216Ideally, this will be at least the sum of each queue's
1217.Sy max_active .
1218.No See Sx ZFS I/O SCHEDULER .
1219.
f66ffe68
SD
1220.It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint
1221Timeout value to wait before determining a device is missing
1222during import.
1223This is helpful for transient missing paths due
1224to links being briefly removed and recreated in response to
1225udev events.
1226.
fdc2d303 1227.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint
2d815d95
AZ
1228Maximum sequential resilver I/O operations active to each device.
1229.No See Sx ZFS I/O SCHEDULER .
1230.
fdc2d303 1231.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1232Minimum sequential resilver I/O operations active to each device.
1233.No See Sx ZFS I/O SCHEDULER .
1234.
fdc2d303 1235.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1236Maximum removal I/O operations active to each device.
1237.No See Sx ZFS I/O SCHEDULER .
1238.
fdc2d303 1239.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1240Minimum removal I/O operations active to each device.
1241.No See Sx ZFS I/O SCHEDULER .
1242.
fdc2d303 1243.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1244Maximum scrub I/O operations active to each device.
1245.No See Sx ZFS I/O SCHEDULER .
1246.
fdc2d303 1247.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1248Minimum scrub I/O operations active to each device.
1249.No See Sx ZFS I/O SCHEDULER .
1250.
fdc2d303 1251.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint
2d815d95
AZ
1252Maximum synchronous read I/O operations active to each device.
1253.No See Sx ZFS I/O SCHEDULER .
1254.
fdc2d303 1255.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint
2d815d95
AZ
1256Minimum synchronous read I/O operations active to each device.
1257.No See Sx ZFS I/O SCHEDULER .
1258.
fdc2d303 1259.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint
2d815d95
AZ
1260Maximum synchronous write I/O operations active to each device.
1261.No See Sx ZFS I/O SCHEDULER .
1262.
fdc2d303 1263.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint
2d815d95
AZ
1264Minimum synchronous write I/O operations active to each device.
1265.No See Sx ZFS I/O SCHEDULER .
1266.
fdc2d303 1267.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1268Maximum trim/discard I/O operations active to each device.
1269.No See Sx ZFS I/O SCHEDULER .
1270.
fdc2d303 1271.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint
2d815d95
AZ
1272Minimum trim/discard I/O operations active to each device.
1273.No See Sx ZFS I/O SCHEDULER .
1274.
fdc2d303 1275.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint
6f5aac3c 1276For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
2d815d95
AZ
1277the number of concurrently-active I/O operations is limited to
1278.Sy zfs_*_min_active ,
1279unless the vdev is "idle".
0175272f 1280When there are no interactive I/O operations active (synchronous or otherwise),
2d815d95
AZ
1281and
1282.Sy zfs_vdev_nia_delay
1283operations have completed since the last interactive operation,
1284then the vdev is considered to be "idle",
1285and the number of concurrently-active non-interactive operations is increased to
1286.Sy zfs_*_max_active .
1287.No See Sx ZFS I/O SCHEDULER .
1288.
fdc2d303 1289.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint
2d815d95
AZ
1290Some HDDs tend to prioritize sequential I/O so strongly that concurrent
1291random I/O latency reaches several seconds.
1292On some HDDs this happens even if sequential I/O operations
1293are submitted one at a time, and so setting
1294.Sy zfs_*_max_active Ns = Sy 1
1295does not help.
1296To prevent non-interactive I/O, like scrub,
1297from monopolizing the device, no more than
1298.Sy zfs_vdev_nia_credit operations can be sent
1299while there are outstanding incomplete interactive operations.
1300This enforced wait ensures the HDD services the interactive I/O
6f5aac3c 1301within a reasonable amount of time.
2d815d95
AZ
1302.No See Sx ZFS I/O SCHEDULER .
1303.
fdc2d303 1304.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint
e815485f 1305Maximum number of queued allocations per top-level vdev expressed as
2d815d95
AZ
1306a percentage of
1307.Sy zfs_vdev_async_write_max_active ,
1308which allows the system to detect devices that are more capable
1309of handling allocations and to allocate more blocks to those devices.
1310This allows for dynamic allocation distribution when devices are imbalanced,
1311as fuller devices will tend to be slower than empty devices.
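.Pp
For example, with the defaults shown here, a value of 1000% of
.Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10
allows up to 100 queued allocations per top-level vdev.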
1312.Pp
1313Also see
1314.Sy zio_dva_throttle_enabled .
1315.
ece7ab7e
RN
1316.It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint
1317Default queue depth for each vdev IO allocator.
1318Higher values allow for better coalescing of sequential writes before sending
1319them to the disk, but can increase transaction commit times.
1320.
16f0fdad
MZ
1321.It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint
1322Defines if the driver should retire on a given error type.
1323The following options may be bitwise-ored together:
1324.TS
1325box;
1326lbz r l l .
1327 Value Name Description
1328_
1329 1 Device No driver retries on device errors
1330 2 Transport No driver retries on transport errors.
1331 4 Driver No driver retries on driver errors.
1332.TE
1333.
2d815d95
AZ
1334.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
1335Time before expiring
1336.Pa .zfs/snapshot .
1337.
1338.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
1339Allow the creation, removal, or renaming of entries in the
1340.Sy .zfs/snapshot
0500e835 1341directory to cause the creation, destruction, or renaming of snapshots.
2d815d95
AZ
1342When enabled, this functionality works both locally and over NFS exports
1343which have the
1344.Em no_root_squash
1345option set.
1346.
1347.It Sy zfs_flags Ns = Ns Sy 0 Pq int
1348Set additional debugging flags.
1349The following flags may be bitwise-ored together:
33b6dbbc
NB
1350.TS
1351box;
2d815d95 1352lbz r l l .
16f0fdad 1353 Value Name Description
33b6dbbc 1354_
2d815d95
AZ
1355 1 ZFS_DEBUG_DPRINTF Enable dprintf entries in the debug log.
1356* 2 ZFS_DEBUG_DBUF_VERIFY Enable extra dbuf verifications.
1357* 4 ZFS_DEBUG_DNODE_VERIFY Enable extra dnode verifications.
1358 8 ZFS_DEBUG_SNAPNAMES Enable snapshot name verification.
bacf366f 1359* 16 ZFS_DEBUG_MODIFY Check for illegally modified ARC buffers.
2d815d95
AZ
1360 64 ZFS_DEBUG_ZIO_FREE Enable verification of block frees.
1361 128 ZFS_DEBUG_HISTOGRAM_VERIFY Enable extra spacemap histogram verifications.
1362 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1363 512 ZFS_DEBUG_SET_ERROR Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
1364 1024 ZFS_DEBUG_INDIRECT_REMAP Verify split blocks created by device removal.
1365 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree.
1366 4096 ZFS_DEBUG_LOG_SPACEMAP Verify that the log summary is consistent with the spacemap log
1367 and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
33b6dbbc 1368.TE
b46be903 1369.Sy \& * No Requires debug build .
2d815d95 1370.
b24d1c77
RY
1371.It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint
1372Enables btree verification.
1373The following settings are cumulative:
1374.TS
1375box;
1376lbz r l l .
1377 Value Description
1378_
1379 1 Verify height.
1380 2 Verify pointers from children to parent.
1381 3 Verify element counts.
1382 4 Verify element order. (expensive)
1383* 5 Verify unused memory is poisoned. (expensive)
1384.TE
b46be903 1385.Sy \& * No Requires debug build .
b24d1c77 1386.
2d815d95
AZ
1387.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
1388If destroy encounters an
1389.Sy EIO
1390while reading metadata (e.g. indirect blocks),
1391space referenced by the missing metadata can not be freed.
1392Normally this causes the background destroy to become "stalled",
1393as it is unable to make forward progress.
1394While in this stalled state, all remaining space to free
1395from the error-encountering filesystem is "temporarily leaked".
1396Set this flag to cause it to ignore the
1397.Sy EIO ,
fbeddd60
MA
1398permanently leak the space from indirect blocks that can not be read,
1399and continue to free everything else that it can.
2d815d95
AZ
1400.Pp
1401The default "stalling" behavior is useful if the storage partially
1402fails (i.e. some but not all I/O operations fail), and then later recovers.
1403In this case, we will be able to continue pool operations while it is
fbeddd60 1404partially failed, and when it recovers, we can continue to free the
2d815d95
AZ
1405space, with no leaks.
1406Note, however, that this case is actually fairly rare.
1407.Pp
1408Typically pools either
1409.Bl -enum -compact -offset 4n -width "1."
1410.It
1411fail completely (but perhaps temporarily,
1412e.g. due to a top-level vdev going offline), or
1413.It
1414have localized, permanent errors (e.g. disk returns the wrong data
1415due to bit flip or firmware bug).
1416.El
1417In the former case, this setting does not matter because the
fbeddd60 1418pool will be suspended and the sync thread will not be able to make
2d815d95
AZ
1419forward progress regardless.
1420In the latter, because the error is permanent, the best we can do
1421is leak the minimum amount of space,
1422which is what setting this flag will do.
1423It is therefore reasonable for this flag to normally be set,
1424but we chose the more conservative approach of not setting it,
1425so that there is no possibility of
fbeddd60 1426leaking space in the "partial temporary" failure case.
2d815d95 1427.
fdc2d303 1428.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
2d815d95
AZ
1429During a
1430.Nm zfs Cm destroy
1431operation using the
1432.Sy async_destroy
1433feature,
1434a minimum of this much time will be spent working on freeing blocks per TXG.
1435.
fdc2d303 1436.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
2d815d95
AZ
1437Similar to
1438.Sy zfs_free_min_time_ms ,
1439but for cleanup of old indirection records for removed vdevs.
1440.
ab8d9c17 1441.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64
2d815d95
AZ
1442Largest data block to write to the ZIL.
1443Larger blocks will be treated as if the dataset being written to had the
1444.Sy logbias Ns = Ns Sy throughput
1445property set.
1446.
ab8d9c17 1447.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64
2d815d95
AZ
1448Pattern written to vdev free space by
1449.Xr zpool-initialize 8 .
1450.
ab8d9c17 1451.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
2d815d95
AZ
1452Size of writes used by
1453.Xr zpool-initialize 8 .
1454This option is used by the test suite.
1455.
ab8d9c17 1456.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64
37f03da8
SH
1457The threshold size (in block pointers) at which we create a new sub-livelist.
1458Larger sublists are more costly from a memory perspective but the fewer
1459sublists there are, the lower the cost of insertion.
2d815d95
AZ
1460.
1461.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
37f03da8 1462If the amount of shared space between a snapshot and its clone drops below
2d815d95
AZ
1463this threshold, the clone turns off the livelist and reverts to the old
1464deletion method.
1465This is in place because livelists no longer give us a benefit
1466once a clone has been overwritten enough.
1467.
1468.It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
37f03da8
SH
1469Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1470it is being condensed.
1471This option is used by the test suite to track race conditions.
2d815d95
AZ
1472.
1473.It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
37f03da8 1474Incremented each time livelist condensing is canceled while in
2d815d95 1475.Fn spa_livelist_condense_sync .
37f03da8 1476This option is used by the test suite to track race conditions.
2d815d95
AZ
1477.
1478.It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
37f03da8 1479When set, the livelist condense process pauses indefinitely before
12bd322d 1480executing the synctask \(em
2d815d95 1481.Fn spa_livelist_condense_sync .
37f03da8 1482This option is used by the test suite to trigger race conditions.
2d815d95
AZ
1483.
1484.It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
37f03da8 1485Incremented each time livelist condensing is canceled while in
2d815d95 1486.Fn spa_livelist_condense_cb .
37f03da8 1487This option is used by the test suite to track race conditions.
2d815d95
AZ
1488.
1489.It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
37f03da8 1490When set, the livelist condense process pauses indefinitely before
2d815d95
AZ
1491executing the open context condensing work in
1492.Fn spa_livelist_condense_cb .
37f03da8 1493This option is used by the test suite to trigger race conditions.
2d815d95 1494.
ab8d9c17 1495.It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64
917f475f
JG
1496The maximum execution time limit that can be set for a ZFS channel program,
1497specified as a number of Lua instructions.
2d815d95 1498.
ab8d9c17 1499.It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64
917f475f
JG
1500The maximum memory limit that can be set for a ZFS channel program, specified
1501in bytes.
2d815d95
AZ
1502.
1503.It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1504The maximum depth of nested datasets.
1505This value can be tuned temporarily to
a7ed98d8 1506fix existing datasets that exceed the predefined limit.
2d815d95 1507.
ab8d9c17 1508.It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64
93e28d66
SD
1509The number of past TXGs that the flushing algorithm of the log spacemap
1510feature uses to estimate incoming log blocks.
2d815d95 1511.
ab8d9c17 1512.It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64
93e28d66 1513Maximum number of rows allowed in the summary of the spacemap log.
2d815d95 1514.
fdc2d303 1515.It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint
2d815d95 1516We currently support block sizes from
a894ae75 1517.Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc .
2d815d95
AZ
1518The benefits of larger blocks, and thus larger I/O,
1519need to be weighed against the cost of COWing a giant block to modify one byte.
1520Additionally, very large blocks can have an impact on I/O latency,
1521and also potentially on the memory allocator.
f2330bd1
RE
1522Therefore, we formerly forbade creating blocks larger than 1 MiB.
1523Larger blocks could be created by changing this tunable,
2d815d95 1524and pools with larger blocks can always be imported and used,
f1512ee6 1525regardless of this setting.
2d815d95
AZ
1526.
1527.It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1528Allow datasets received with redacted send/receive to be mounted.
1529Normally disabled because these datasets may be missing key data.
1530.
ab8d9c17 1531.It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64
2d815d95
AZ
1532Minimum number of metaslabs to flush per dirty TXG.
1533.
fdc2d303 1534.It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint
f3a7f661 1535Allow metaslabs to keep their active state as long as their fragmentation
2d815d95
AZ
1536percentage is no more than this value.
1537An active metaslab that exceeds this threshold
1538will no longer keep its active status allowing better metaslabs to be selected.
1539.
fdc2d303 1540.It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint
f3a7f661 1541Metaslab groups are considered eligible for allocations if their
83426735 1542fragmentation metric (measured as a percentage) is less than or equal to
2d815d95
AZ
1543this value.
1544If a metaslab group exceeds this threshold then it will be
f3a7f661
GW
1545skipped unless all metaslab groups within the metaslab class have also
1546crossed this threshold.
2d815d95 1547.
fdc2d303 1548.It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint
2d815d95
AZ
1549Defines a threshold at which metaslab groups should be eligible for allocations.
1550The value is expressed as a percentage of free space
f4a4046b
TC
1551beyond which a metaslab group is always eligible for allocations.
1552If a metaslab group's free space is less than or equal to the
6b4e21c6 1553threshold, the allocator will avoid allocating to that group
2d815d95
AZ
1554unless all groups in the pool have reached the threshold.
1555Once all groups have reached the threshold, all groups are allowed to accept
1556allocations.
1557The default value of
1558.Sy 0
b46be903
DS
1559disables the feature and causes all metaslab groups to be eligible for
1560allocations.
2d815d95 1561.Pp
b58237e7 1562This parameter allows one to deal with pools having heavily imbalanced
f4a4046b
TC
1563vdevs such as would be the case when a new vdev has been added.
1564Setting the threshold to a non-zero percentage will stop allocations
1565from being made to vdevs that aren't filled to the specified percentage
1566and allow lesser filled vdevs to acquire more allocations than they
2d815d95
AZ
1567otherwise would under the old
1568.Sy zfs_mg_alloc_failures
1569facility.
1570.
1571.It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
cc99f275 1572If enabled, ZFS will place DDT data into the special allocation class.
2d815d95
AZ
1573.
1574.It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1575If enabled, ZFS will place user data indirect blocks
cc99f275 1576into the special allocation class.
2d815d95 1577.
fdc2d303 1578.It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint
b46be903
DS
1579Historical statistics for this many latest multihost updates will be available
1580in
2d815d95
AZ
1581.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1582.
ab8d9c17 1583.It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
379ca9cf 1584Used to control the frequency of multihost writes which are performed when the
2d815d95
AZ
1585.Sy multihost
1586pool property is on.
1587This is one of the factors used to determine the
060f0226 1588length of the activity check during import.
2d815d95
AZ
1589.Pp
1590The multihost write period is
12bd322d 1591.Sy zfs_multihost_interval No / Sy leaf-vdevs .
2d815d95
AZ
1592On average a multihost write will be issued for each leaf vdev
1593every
1594.Sy zfs_multihost_interval
1595milliseconds.
1596In practice, the observed period can vary with the I/O load
1597and this observed value is the delay which is stored in the uberblock.
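.Pp
For example, with the default
.Sy zfs_multihost_interval
of 1000 ms and a pool with 8 leaf vdevs, a multihost write is issued to
some leaf vdev roughly every 125 ms, while each individual leaf vdev is
still written about once per second.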
1598.
1599.It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1600Used to control the duration of the activity test on import.
1601Smaller values of
1602.Sy zfs_multihost_import_intervals
1603will reduce the import time but increase
1604the risk of failing to detect an active pool.
1605The total activity check time is never allowed to drop below one second.
1606.Pp
060f0226 1607On import the activity check waits a minimum amount of time determined by
12bd322d 1608.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
2d815d95
AZ
1609or the same product computed on the host which last had the pool imported,
1610whichever is greater.
1611The activity check time may be further extended if the value of MMP
060f0226 1612delay found in the best uberblock indicates actual multihost updates happened
2d815d95
AZ
1613at longer intervals than
1614.Sy zfs_multihost_interval .
1615A minimum of
a894ae75 1616.Em 100 ms
2d815d95
AZ
1617is enforced.
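.Pp
For example, with the default values of
.Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms
and
.Sy zfs_multihost_import_intervals Ns = Ns Sy 20 ,
the activity check on import waits at least 20 seconds.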
1618.Pp
1619.Sy 0 No is equivalent to Sy 1 .
1620.
1621.It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
060f0226
OF
1622Controls the behavior of the pool when multihost write failures or delays are
1623detected.
2d815d95
AZ
1624.Pp
1625When
1626.Sy 0 ,
1627multihost write failures or delays are ignored.
1628The failures will still be reported to the ZED which, depending on
060f0226
OF
1629its configuration, may take action such as suspending the pool or offlining a
1630device.
2d815d95
AZ
1631.Pp
1632Otherwise, the pool will be suspended if
12bd322d 1633.Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval
2d815d95
AZ
1634milliseconds pass without a successful MMP write.
1635This guarantees the activity test will see MMP writes if the pool is imported.
1636.Sy 1 No is equivalent to Sy 2 ;
1637this is necessary to prevent the pool from being suspended
060f0226 1638due to normal, small I/O latency variations.
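.Pp
For example, with the default values of
.Sy zfs_multihost_fail_intervals Ns = Ns Sy 10
and
.Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms ,
the pool will be suspended if no MMP write succeeds for 10 seconds.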
2d815d95
AZ
1639.
1640.It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1641Set to disable scrub I/O.
1642This results in scrubs not actually scrubbing data and
83426735 1643simply doing a metadata crawl of the pool instead.
2d815d95
AZ
1644.
1645.It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
83426735 1646Set to disable block prefetching for scrubs.
2d815d95
AZ
1647.
1648.It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1649Disable cache flush operations on disks when writing.
1650Setting this will cause pool corruption on power loss
1651if a volatile out-of-order write cache is enabled.
1652.
1653.It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1654Allow no-operation writes.
1655The occurrence of nopwrites will further depend on other pool properties
1656.Pq i.a. the checksumming and compression algorithms .
1657.
05b3eb6d 1658.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int
2d815d95 1659Enable forcing TXG sync to find holes.
05b3eb6d 1660When enabled, this forces ZFS to sync data when
2d815d95 1661.Sy SEEK_HOLE No or Sy SEEK_DATA
05b3eb6d
BB
1662flags are used, allowing holes in a file to be accurately reported.
1663When disabled, holes will not be reported in recently dirtied files.
2d815d95 1664.
a894ae75 1665.It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int
2d815d95
AZ
1666The number of bytes which should be prefetched during a pool traversal, like
1667.Nm zfs Cm send
1668or other data crawling operations.
1669.
fdc2d303 1670.It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint
2d815d95
AZ
1671The number of blocks pointed to by an indirect (non-L0) block which should be
1672prefetched during a pool traversal, like
1673.Nm zfs Cm send
1674or other data crawling operations.
1675.
ab8d9c17 1676.It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64
2d815d95
AZ
1677Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1678After this threshold is crossed, additional frees will wait until the next TXG.
b46be903 1679.Sy 0 No disables this throttle .
2d815d95
AZ
1680.
1681.It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1682Disable predictive prefetch.
2d232ca8
AZ
1683Note that it leaves "prescient" prefetch
1684.Pq for, e.g., Nm zfs Cm send
2d815d95
AZ
1685intact.
1686Unlike predictive prefetch, prescient prefetch never issues I/O
1687that ends up not being needed, so it can't hurt performance.
1688.
1689.It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1690Disable QAT hardware acceleration for SHA256 checksums.
1691May be unset after the ZFS modules have been loaded to initialize the QAT
1692hardware as long as support is compiled in and the QAT driver is present.
1693.
1694.It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1695Disable QAT hardware acceleration for gzip compression.
1696May be unset after the ZFS modules have been loaded to initialize the QAT
1697hardware as long as support is compiled in and the QAT driver is present.
1698.
1699.It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1700Disable QAT hardware acceleration for AES-GCM encryption.
1701May be unset after the ZFS modules have been loaded to initialize the QAT
1702hardware as long as support is compiled in and the QAT driver is present.
1703.
ab8d9c17 1704.It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
2d815d95
AZ
1705Bytes to read per chunk.
1706.
fdc2d303 1707.It Sy zfs_read_history Ns = Ns Sy 0 Pq uint
2d815d95
AZ
1708Historical statistics for this many latest reads will be available in
1709.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
1710.
1711.It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
29714574 1712Include cache hits in read history
2d815d95 1713.
ab8d9c17 1714.It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
9a49d3f3
BB
1715Maximum read segment size to issue when sequentially resilvering a
1716top-level vdev.
2d815d95
AZ
1717.
1718.It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
b2255edc
BB
1719Automatically start a pool scrub when the last active sequential resilver
1720completes in order to verify the checksums of all blocks which have been
2d815d95
AZ
1721resilvered.
1722This is enabled by default and strongly recommended.
1723.
973934b9 1724.It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
2d815d95 1725Maximum amount of I/O that can be concurrently issued for a sequential
b2255edc 1726resilver per leaf device, given in bytes.
2d815d95
AZ
1727.
1728.It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
4589f3ae
BB
1729If an indirect split block contains more than this many possible unique
1730combinations when being reconstructed, consider it too computationally
2d815d95
AZ
1731expensive to check them all.
1732Instead, try at most this many randomly selected
1733combinations each time the block is accessed.
1734This allows all segment copies to participate fairly
1735in the reconstruction when all combinations
4589f3ae 1736cannot be checked and prevents repeated use of one bad copy.
2d815d95
AZ
1737.
1738.It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
1739Set to attempt to recover from fatal errors.
1740This should only be used as a last resort,
1741as it typically results in leaked space, or worse.
1742.
1743.It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
a737b415
AZ
1744Ignore hard I/O errors during device removal.
1745When set, if a device encounters a hard I/O error during the removal process
2d815d95 1746the removal will not be cancelled.
7c9a4292 1747This can result in a normally recoverable block becoming permanently damaged
2d815d95
AZ
1748and is hence not recommended.
1749This should only be used as a last resort when the
7c9a4292 1750pool cannot be returned to a healthy state prior to removing the device.
2d815d95 1751.
fdc2d303 1752.It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
53dce5ac
MA
1753This is used by the test suite so that it can ensure that certain actions
1754happen while in the middle of a removal.
2d815d95 1755.
fdc2d303 1756.It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
53dce5ac 1757The largest contiguous segment that we will attempt to allocate when removing
2d815d95
AZ
1758a device.
1759If there is a performance problem with attempting to allocate large blocks,
1760consider decreasing this.
1761The default value is also the maximum.
1762.
1763.It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
1764Ignore the
1765.Sy resilver_defer
1766feature, causing an operation that would start a resilver to
1767immediately restart the one in progress.
1768.
fdc2d303 1769.It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
2d815d95
AZ
1770Resilvers are processed by the sync thread.
1771While resilvering, it will spend at least this much time
1772working on a resilver between TXG flushes.
1773.
1774.It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1775If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
1776even if there were unrepairable errors.
1777Intended to be used during pool repair or recovery to
02638a30 1778stop resilvering when the pool is next imported.
2d815d95 1779.
fdc2d303 1780.It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
2d815d95
AZ
1781Scrubs are processed by the sync thread.
1782While scrubbing, it will spend at least this much time
1783working on a scrub between TXG flushes.
1784.
482eeef8
GA
1785.It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint
1786Error blocks to be scrubbed in one txg.
1787.
fdc2d303 1788.It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint
2d815d95
AZ
1789To preserve progress across reboots, the sequential scan algorithm periodically
1790needs to stop metadata scanning and issue all the verification I/O to disk.
1791The frequency of this flushing is determined by this tunable.
1792.
fdc2d303 1793.It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint
2d815d95
AZ
1794This tunable affects how scrub and resilver I/O segments are ordered.
1795A higher number indicates that we care more about how filled in a segment is,
1796while a lower number indicates we care more about the size of the extent without
1797considering the gaps within a segment.
1798This value is only tunable upon module insertion.
b46be903
DS
1799Changing the value afterwards will have no effect on scrub or resilver
1800performance.
2d815d95 1801.
fdc2d303 1802.It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint
2d815d95
AZ
1803Determines the order that data will be verified while scrubbing or resilvering:
1804.Bl -tag -compact -offset 4n -width "a"
1805.It Sy 1
1806Data will be verified as sequentially as possible, given the
1807amount of memory reserved for scrubbing
1808.Pq see Sy zfs_scan_mem_lim_fact .
1809This may improve scrub performance if the pool's data is very fragmented.
1810.It Sy 2
1811The largest mostly-contiguous chunk of found data will be verified first.
1812By deferring scrubbing of small segments, we may later find adjacent data
1813to coalesce and increase the segment size.
1814.It Sy 0
1815.No Use strategy Sy 1 No during normal verification
b46be903 1816.No and strategy Sy 2 No while taking a checkpoint .
2d815d95
AZ
1817.El
1818.
1819.It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
1820If unset, indicates that scrubs and resilvers will gather metadata in
1821memory before issuing sequential I/O.
1822Otherwise indicates that the legacy algorithm will be used,
1823where I/O is initiated as soon as it is discovered.
1824Unsetting will not affect scrubs or resilvers that are already in progress.
1825.
a894ae75 1826.It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int
2d815d95
AZ
1827Sets the largest gap in bytes between scrub/resilver I/O operations
1828that will still be considered sequential for sorting purposes.
1829Changing this value will not
d4a72f23 1830affect scrubs or resilvers that are already in progress.
2d815d95 1831.
fdc2d303 1832.It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
d4a72f23
TC
1833Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
1834This tunable determines the hard limit for I/O sorting memory usage.
1835When the hard limit is reached we stop scanning metadata and start issuing
2d815d95
AZ
1836data verification I/O.
1837This is done until we get below the soft limit.
1838.
fdc2d303 1839.It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
d4a72f23 1840The fraction of the hard limit used to determine the soft limit for I/O sorting
2d815d95
AZ
1841by the sequential scan algorithm.
1842When we cross this limit from below no action is taken.
b46be903
DS
1843When we cross this limit from above it is because we are issuing verification
1844I/O.
2d815d95
AZ
1845In this case (unless the metadata scan is done) we stop issuing verification I/O
1846and start scanning metadata again until we get to the hard limit.
1847.
c85ac731
BB
1848.It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1849When reporting resilver throughput and estimated completion time use the
1850performance observed over roughly the last
1851.Sy zfs_scan_report_txgs
1852TXGs.
1853When set to zero performance is calculated over the time between checkpoints.
1854.
2d815d95
AZ
1855.It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1856Enforce tight memory limits on pool scans when a sequential scan is in progress.
1857When disabled, the memory limit may be exceeded by fast disks.
1858.
1859.It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1860Freezes a scrub/resilver in progress without actually pausing it.
1861Intended for testing/debugging.
1862.
c0aea7cf 1863.It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
d4a72f23
TC
1864Maximum amount of data that can be concurrently issued at once for scrubs and
1865resilvers per leaf device, given in bytes.
2d815d95
AZ
1866.
1867.It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
1868Allow sending of corrupt data (ignore read/checksum errors when sending).
1869.
1870.It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
1871Include unmodified spill blocks in the send stream.
1872Under certain circumstances, previous versions of ZFS could incorrectly
1873remove the spill block from an existing object.
1874Including unmodified copies of the spill blocks creates a backwards-compatible
1875stream which will recreate a spill block if it was incorrectly removed.
1876.
fdc2d303 1877.It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2d815d95
AZ
1878The fill fraction of the
1879.Nm zfs Cm send
1880internal queues.
1881The fill fraction controls the timing with which internal threads are woken up.
1882.
fdc2d303 1883.It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2d815d95
AZ
1884The maximum number of bytes allowed in
1885.Nm zfs Cm send Ns 's
1886internal queues.
1887.
fdc2d303 1888.It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2d815d95
AZ
1889The fill fraction of the
1890.Nm zfs Cm send
1891prefetch queue.
1892The fill fraction controls the timing with which internal threads are woken up.
1893.
fdc2d303 1894.It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2d815d95
AZ
1895The maximum number of bytes allowed that will be prefetched by
1896.Nm zfs Cm send .
1897This value must be at least twice the maximum block size in use.
1898.
fdc2d303 1899.It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2d815d95
AZ
1900The fill fraction of the
1901.Nm zfs Cm receive
1902queue.
1903The fill fraction controls the timing with which internal threads are woken up.
1904.
fdc2d303 1905.It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
2d815d95
AZ
1906The maximum number of bytes allowed in the
1907.Nm zfs Cm receive
1908queue.
30af21b0 1909This value must be at least twice the maximum block size in use.
2d815d95 1910.
fdc2d303 1911.It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2d815d95
AZ
1912The maximum amount of data, in bytes, that
1913.Nm zfs Cm receive
1914will write in one DMU transaction.
1915This is the uncompressed size, even when receiving a compressed send stream.
1916This setting will not reduce the write size below a single block.
1917Capped at a maximum of
a894ae75 1918.Sy 32 MiB .
2d815d95 1919.
e8cf3a4f
AP
1920.It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int
1921When this variable is set to non-zero a corrective receive:
1922.Bl -enum -compact -offset 4n -width "1."
1923.It
1924Does not enforce the restriction of source & destination snapshot GUIDs
1925matching.
1926.It
1927If there is an error during healing, the healing receive is not
1928terminated; instead it moves on to the next record.
1929.El
1930.
fdc2d303 1931.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint
30af21b0 1932Setting this variable overrides the default logic for estimating block
2d815d95
AZ
1933sizes when doing a
1934.Nm zfs Cm send .
1935The default heuristic is that the average block size
1936will be the current recordsize.
1937Override this value if most data in your dataset is not of that size
1938and you require accurate zfs send size estimates.
1939.
fdc2d303 1940.It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1941Flushing of data to disk is done in passes.
1942Defer frees starting in this pass.
1943.
a894ae75 1944.It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int
d2734cce
SD
1945Maximum memory used for prefetching a checkpoint's space map on each
1946vdev while discarding the checkpoint.
2d815d95 1947.
fdc2d303 1948.It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint
1f02ecc5 1949Only allow small data blocks to be allocated on the special and dedup vdev
b46be903
DS
1950types when the available free space percentage on these vdevs exceeds this
1951value.
2d815d95 1952This ensures reserved space is available for pool metadata as the
1f02ecc5 1953special vdevs approach capacity.
2d815d95 1954.
fdc2d303 1955.It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint
2d815d95 1956Starting in this sync pass, disable compression (including of metadata).
be89734a
MA
1957With the default setting, in practice, we don't have this many sync passes,
1958so this has no effect.
2d815d95 1959.Pp
be89734a 1960The original intent was that disabling compression would help the sync passes
2d815d95
AZ
1961to converge.
1962However, in practice, disabling compression increases
1963the average number of sync passes; because when we turn compression off,
1964many blocks' size will change, and thus we have to re-allocate
1965(not overwrite) them.
1966It also increases the number of
a894ae75 1967.Em 128 KiB
2d815d95
AZ
1968allocations (e.g. for indirect blocks and spacemaps)
1969because these will not be compressed.
1970The
a894ae75 1971.Em 128 KiB
2d815d95 1972allocations are especially detrimental to performance
b46be903
DS
1973on highly fragmented systems, which may have very few free segments of this
1974size,
2d815d95
AZ
1975and may need to load new metaslabs to satisfy these allocations.
1976.
fdc2d303 1977.It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint
2d815d95
AZ
1978Rewrite new block pointers starting in this pass.
1979.
1980.It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int
1981This controls the number of threads used by
1982.Sy dp_sync_taskq .
1983The default value of
1984.Sy 75%
1985will create a maximum of one thread per CPU.
1986.
a894ae75 1987.It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
2d815d95 1988Maximum size of TRIM command.
b46be903
DS
1989Larger ranges will be split into chunks no larger than this value before
1990issuing.
2d815d95 1991.
a894ae75 1992.It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2d815d95
AZ
1993Minimum size of TRIM commands.
1994TRIM ranges smaller than this will be skipped,
1995unless they're part of a larger range which was chunked.
1996This is done because it's common for these small TRIMs
1997to negatively impact overall performance.
1998.
1999.It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2000Skip uninitialized metaslabs during the TRIM process.
b46be903
DS
2001This option is useful for pools constructed from large thinly-provisioned
2002devices
2d815d95
AZ
2003where TRIM operations are slow.
2004As a pool ages, an increasing fraction of the pool's metaslabs
2005will be initialized, progressively degrading the usefulness of this option.
2006This setting is stored when starting a manual TRIM and will
1b939560 2007persist for the duration of the requested TRIM.
2d815d95
AZ
2008.
2009.It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
2010Maximum number of queued TRIMs outstanding per leaf vdev.
2011The number of concurrent TRIM commands issued to the device is controlled by
2012.Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
2013.
2014.It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
2015The number of transaction groups' worth of frees which should be aggregated
2016before TRIM operations are issued to the device.
2017This setting represents a trade-off between issuing larger,
2018more efficient TRIM operations and the delay
2019before the recently trimmed space is available for use by the device.
2020.Pp
1b939560 2021Increasing this value will allow frees to be aggregated for a longer time.
b46be903
DS
2022This will result in larger TRIM operations and potentially increased memory
2023usage.
2d815d95
AZ
2024Decreasing this value will have the opposite effect.
2025The default of
2026.Sy 32
2027was determined to be a reasonable compromise.
2028.
fdc2d303 2029.It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint
2d815d95
AZ
2030Historical statistics for this many latest TXGs will be available in
2031.Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
2032.
fdc2d303 2033.It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint
b46be903
DS
2034Flush dirty data to disk at least every this many seconds (maximum TXG
2035duration).
2d815d95 2036.
fdc2d303 2037.It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint
2d815d95
AZ
2038Max vdev I/O aggregation size.
2039.
fdc2d303 2040.It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2d815d95
AZ
2041Max vdev I/O aggregation size for non-rotating media.
2042.
2d815d95 2043.It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
9f500936 2044A number by which the balancing algorithm increments the load calculation for
2d815d95
AZ
2045the purpose of selecting the least busy mirror member when an I/O operation
2046immediately follows its predecessor on rotational vdevs
2047for the purpose of making decisions based on load.
2048.
2049.It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
9f500936 2050A number by which the balancing algorithm increments the load calculation for
2d815d95
AZ
2051the purpose of selecting the least busy mirror member when an I/O operation
2052lacks locality as defined by
2053.Sy zfs_vdev_mirror_rotating_seek_offset .
2054Operations within this distance that do not immediately follow the previous operation
2055are incremented by half.
2056.
a894ae75 2057.It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int
2d815d95
AZ
2058The maximum distance for the last queued I/O operation in which
2059the balancing algorithm considers an operation to have locality.
2060.No See Sx ZFS I/O SCHEDULER .
2061.
2062.It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
9f500936 2063A number by which the balancing algorithm increments the load calculation for
2064the purpose of selecting the least busy mirror member on non-rotational vdevs
2d815d95
AZ
2065when I/O operations do not immediately follow one another.
2066.
2067.It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
9f500936 2068A number by which the balancing algorithm increments the load calculation for
b46be903
DS
2069the purpose of selecting the least busy mirror member when an I/O operation
2070lacks
2d815d95
AZ
2071locality as defined by the
2072.Sy zfs_vdev_mirror_rotating_seek_offset .
2073Operations within this distance that do not immediately follow the previous operation
2074are incremented by half.
2075.
fdc2d303 2076.It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
2d815d95 2077Aggregate read I/O operations if the on-disk gap between them is within this
83426735 2078threshold.
2d815d95 2079.
fdc2d303 2080.It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint
2d815d95
AZ
2081Aggregate write I/O operations if the on-disk gap between them is within this
2082threshold.
2083.
2084.It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
2085Select the raidz parity implementation to use.
2086.Pp
2087Variants that don't depend on CPU-specific features
2088may be selected on module load, as they are supported on all systems.
2089The remaining options may only be set after the module is loaded,
2090as they are available only if the implementations are compiled in
2091and supported on the running system.
2092.Pp
2093Once the module is loaded,
2094.Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2095will show the available options,
2096with the currently selected one enclosed in square brackets.
2097.Pp
2098.TS
2099lb l l .
2100fastest selected by built-in benchmark
2101original original implementation
2102scalar scalar implementation
2103sse2 SSE2 instruction set 64-bit x86
2104ssse3 SSSE3 instruction set 64-bit x86
2105avx2 AVX2 instruction set 64-bit x86
2106avx512f AVX512F instruction set 64-bit x86
2107avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2108aarch64_neon NEON Aarch64/64-bit ARMv8
2109aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2110powerpc_altivec Altivec PowerPC
2111.TE
2112.
2113.It Sy zfs_vdev_scheduler Pq charp
2114.Sy DEPRECATED .
0f402668 2115Prints warning to kernel log for compatibility.
2d815d95 2116.
fdc2d303 2117.It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint
032a213e 2118Max event queue length.
2d815d95
AZ
2119Events in the queue can be viewed with
2120.Xr zpool-events 8 .
2121.
2122.It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2123Maximum recent zevent records to retain for duplicate checking.
2124Setting this to
2125.Sy 0
2126disables duplicate detection.
2127.
a894ae75 2128.It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int
4f072827 2129Lifespan for a recent ereport that was retained for duplicate checking.
2d815d95
AZ
2130.
2131.It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2132The maximum number of taskq entries that are allowed to be cached.
2133When this limit is exceeded transaction records (itxs)
2134will be cleaned synchronously.
2135.
2136.It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
a032ac4b
BB
2137The number of taskq entries that are pre-populated when the taskq is first
2138created and are immediately available for use.
2d815d95
AZ
2139.
2140.It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2141This controls the number of threads used by
2142.Sy dp_zil_clean_taskq .
2143The default value of
2144.Sy 100%
2145will create a maximum of one thread per cpu.
2146.
fdc2d303 2147.It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2d815d95
AZ
2148This sets the maximum block size used by the ZIL.
2149On very fragmented pools, lowering this
a894ae75 2150.Pq typically to Sy 36 KiB
2d815d95
AZ
2151can improve performance.
2152.
66b81b34
AM
2153.It Sy zil_maxcopied Ns = Ns Sy 7680 Ns B Po 7.5 KiB Pc Pq uint
2154This sets the maximum number of write bytes logged via WR_COPIED.
2155It tunes a trade-off between an additional memory copy and possibly worse log
2156space efficiency, versus additional range lock/unlock overhead.
2157.
2d815d95
AZ
2158.It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2159Disable the cache flush commands that are normally sent to disk by
2160the ZIL after an LWB write has completed.
2161Setting this will cause ZIL corruption on power loss
2162if a volatile out-of-order write cache is enabled.
2163.
2164.It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2165Disable intent logging replay.
2166Can be disabled for recovery from corrupted ZIL.
2167.
c0e58995 2168.It Sy zil_slog_bulk Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64
1b7c1e5c
GDN
2169Limit SLOG write size per commit executed with synchronous priority.
2170Any writes above that will be executed with lower (asynchronous) priority
2171to limit potential SLOG device abuse by single active ZIL writer.
2d815d95 2172.
361a7e82
JP
2173.It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int
2174Setting this tunable to zero disables ZIL logging of new
2175.Sy xattr Ns = Ns Sy sa
2176records if the
2177.Sy org.openzfs:zilsaxattr
2178feature is enabled on the pool.
2179This would only be necessary to work around bugs in the ZIL logging or replay
2180code for this record type.
2181The tunable has no effect if the feature is disabled.
2182.
fdc2d303 2183.It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint
2d815d95
AZ
2184Usually, one metaslab from each normal-class vdev is dedicated for use by
2185the ZIL to log synchronous writes.
2186However, if there are fewer than
2187.Sy zfs_embedded_slog_min_ms
2188metaslabs in the vdev, this functionality is disabled.
b46be903
DS
2189This ensures that we don't set aside an unreasonable amount of space for the
2190ZIL.
2d815d95 2191.
fdc2d303 2192.It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint
f375b23c
RE
2193Whether heuristic for detection of incompressible data with zstd levels >= 3
2194using LZ4 and zstd-1 passes is enabled.
2195.
fdc2d303 2196.It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint
f375b23c
RE
2197Minimal uncompressed size (inclusive) of a record before the early abort
2198heuristic will be attempted.
2199.
2d815d95
AZ
2200.It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2201If non-zero, the zio deadman will produce debugging messages
2202.Pq see Sy zfs_dbgmsg_enable
2203for all zios, rather than only for leaf zios possessing a vdev.
2204This is meant to be used by developers to gain
638dd5f4 2205diagnostic information for hang conditions which don't involve a mutex
2d815d95 2206or other locking primitive: typically conditions in which a thread in
638dd5f4 2207the zio pipeline is looping indefinitely.
2d815d95 2208.
a894ae75 2209.It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2d815d95
AZ
2210When an I/O operation takes more than this much time to complete,
2211it's marked as slow.
2212Each slow operation causes a delay zevent.
2213Slow I/O counters can be seen with
2214.Nm zpool Cm status Fl s .
2215.
2216.It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2217Throttle block allocations in the I/O pipeline.
2218This allows for dynamic allocation distribution when devices are imbalanced.
e815485f 2219When enabled, the maximum number of pending allocations per top-level vdev
2d815d95
AZ
2220is limited by
2221.Sy zfs_vdev_queue_depth_pct .
2222.
5c006134
RM
2223.It Sy zfs_xattr_compat Ns = Ns 0 Ns | Ns 1 Pq int
2224Control the naming scheme used when setting new xattrs in the user namespace.
2225If
2226.Sy 0
2227.Pq the default on Linux ,
2228user namespace xattr names are prefixed with the namespace, to be backwards
2229compatible with previous versions of ZFS on Linux.
2230If
2231.Sy 1
2232.Pq the default on Fx ,
2233user namespace xattr names are not prefixed, to be backwards compatible with
2234previous versions of ZFS on illumos and
2235.Fx .
2236.Pp
2237Either naming scheme can be read on this and future versions of ZFS, regardless
2238of this tunable, but legacy ZFS on illumos or
2239.Fx
2240are unable to read user namespace xattrs written in the Linux format, and
2241legacy versions of ZFS on Linux are unable to read user namespace xattrs written
2242in the legacy ZFS format.
2243.Pp
2244An existing xattr with the alternate naming scheme is removed when overwriting
2245the xattr so as to not accumulate duplicates.
2246.
2d815d95
AZ
2247.It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2248Prioritize requeued I/O.
2249.
2250.It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2251Percentage of online CPUs which will run a worker thread for I/O.
2252These workers are responsible for I/O work such as compression and
2253checksum calculations.
2254Fractional number of CPUs will be rounded down.
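.Pp
For example, on a system with 6 online CPUs the default of 80% uses
4 of them for I/O worker threads (6 \(mu 0.8 = 4.8, rounded down).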
2255.Pp
2256The default value of
2257.Sy 80%
2258was chosen to avoid using all CPUs which can result in
2259latency issues and inconsistent application performance,
2260especially when slower compression and/or checksumming is enabled.
2261.
2262.It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2263Number of worker threads per taskq.
2264Lower values improve I/O ordering and CPU utilization,
2265while higher values reduce lock contention.
2266.Pp
2267If
2268.Sy 0 ,
2269generate a system-dependent value close to 6 threads per taskq.
2270.
2271.It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2272Do not create zvol device nodes.
2273This may slightly improve startup time on
83426735 2274systems with a very large number of zvols.
2d815d95
AZ
2275.
2276.It Sy zvol_major Ns = Ns Sy 230 Pq uint
2277Major number for zvol block devices.
2278.
ab8d9c17 2279.It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long
2d815d95
AZ
2280Discard (TRIM) operations done on zvols will be done in batches of this
2281many blocks, where block size is determined by the
2282.Sy volblocksize
2283property of a zvol.
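.Pp
For example, with a
.Sy volblocksize
of 16 KiB, the default of 16384 blocks corresponds to discard batches of
256 MiB.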
2284.
a894ae75 2285.It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint
2d815d95
AZ
2286When adding a zvol to the system, prefetch this many bytes
2287from the start and end of the volume.
2288Prefetching these regions of the volume is desirable,
2289because they are likely to be accessed immediately by
2290.Xr blkid 8
2291or the kernel partitioner.
2292.
2293.It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2294When processing I/O requests for a zvol, submit them synchronously.
2295This effectively limits the queue depth to
2296.Em 1
2297for each I/O submitter.
2298When unset, requests are handled asynchronously by a thread pool.
2299The number of requests which can be handled concurrently is controlled by
2300.Sy zvol_threads .
6f73d021
TH
2301.Sy zvol_request_sync
2302is ignored when running on a kernel that supports block multiqueue
2303.Pq Li blk-mq .
2d815d95 2304.
6f73d021
TH
2305.It Sy zvol_threads Ns = Ns Sy 0 Pq uint
2306The number of system wide threads to use for processing zvol block IOs.
2307If
2308.Sy 0
2309(the default) then internally set
2310.Sy zvol_threads
2311to the number of CPUs present or 32 (whichever is greater).
2312.
05c4710e
TH
2313.It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint
2314The number of threads per zvol to use for queuing IO requests.
2315This parameter will only appear if your kernel supports
2316.Li blk-mq
2317and is only read and assigned to a zvol at zvol load time.
2318If
2319.Sy 0
2320(the default) then internally set
2321.Sy zvol_blk_mq_threads
2322to the number of CPUs present.
2323.
2324.It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2325Set to
2326.Sy 1
2327to use the
2328.Li blk-mq
2329API for zvols.
2330Set to
2331.Sy 0
2332(the default) to use the legacy zvol APIs.
2333This setting can give better or worse zvol performance depending on
2334the workload.
2335This parameter will only appear if your kernel supports
2336.Li blk-mq
2337and is only read and assigned to a zvol at zvol load time.
2338.
2339.It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint
2340If
2341.Sy zvol_use_blk_mq
2342is enabled, then process this number of
2343.Sy volblocksize Ns -sized blocks per zvol thread.
2344This tunable can be used to favor better performance for zvol reads (lower
2345values) or writes (higher values).
2346If set to
2347.Sy 0 ,
2348then the zvol layer will process the maximum number of blocks
2349per thread that it can.
2350This parameter will only appear if your kernel supports
2351.Li blk-mq
2352and is only applied at each zvol's load time.
2353.
2354.It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint
2355The queue_depth value for the zvol
2356.Li blk-mq
2357interface.
2358This parameter will only appear if your kernel supports
2359.Li blk-mq
2360and is only applied at each zvol's load time.
2361If
2362.Sy 0
2363(the default) then use the kernel's default queue depth.
2364Values are clamped to the kernel's
2365.Dv BLKDEV_MIN_RQ
2366and
2367.Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ
2368limits.
2369.
2d815d95
AZ
2370.It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
2371Defines zvol block device behaviour when
2372.Sy volmode Ns = Ns Sy default :
2373.Bl -tag -compact -offset 4n -width "a"
2374.It Sy 1
2375.No equivalent to Sy full
2376.It Sy 2
2377.No equivalent to Sy dev
2378.It Sy 3
2379.No equivalent to Sy none
2380.El
945b4074
MZ
2381.
2382.It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2383Enable strict ZVOL quota enforcement.
2384The strict quota enforcement may have a performance impact.
2d815d95
AZ
2385.El
2386.
2387.Sh ZFS I/O SCHEDULER
2388ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O operations.
2389The scheduler determines when and in what order those operations are issued.
2390The scheduler divides operations into five I/O classes,
e8b96c60 2391prioritized in the following order: sync read, sync write, async read,
2d815d95
AZ
2392async write, and scrub/resilver.
2393Each queue defines the minimum and maximum number of concurrent operations
2394that may be issued to the device.
2395In addition, the device has an aggregate maximum,
2396.Sy zfs_vdev_max_active .
2397Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2398If the sum of the per-queue maxima exceeds the aggregate maximum,
2399then the number of active operations may reach
2400.Sy zfs_vdev_max_active ,
2401in which case no further operations will be issued,
2402regardless of whether all per-queue minima have been met.
2403.Pp
e8b96c60 2404For many physical devices, throughput increases with the number of
2d815d95
AZ
2405concurrent operations, but latency typically suffers.
2406Furthermore, physical devices typically have a limit
2407at which more concurrent operations have no
e8b96c60 2408effect on throughput or can actually cause it to decrease.
2d815d95 2409.Pp
e8b96c60 2410The scheduler selects the next operation to issue by first looking for an
2d815d95
AZ
2411I/O class whose minimum has not been satisfied.
2412Once all are satisfied and the aggregate maximum has not been hit,
2413the scheduler looks for classes whose maximum has not been satisfied.
2414Iteration through the I/O classes is done in the order specified above.
2415No further operations are issued
2416if the aggregate maximum number of concurrent operations has been hit,
b46be903
DS
2417or if there are no operations queued for an I/O class that has not hit its
2418maximum.
2d815d95
AZ
2419Every time an I/O operation is queued or an operation completes,
2420the scheduler looks for new operations to issue.
.Pp
In general, smaller
.Sy max_active Ns s
will lead to lower latency of synchronous operations.
Larger
.Sy max_active Ns s
may lead to higher overall throughput, depending on underlying storage.
.Pp
The ratio of the queues'
.Sy max_active Ns s
determines the balance of performance between reads, writes, and scrubs.
For example, increasing
.Sy zfs_vdev_scrub_max_active
will cause the scrub or resilver to complete more quickly,
but reads and writes to have higher latency and lower throughput.
.Pp
All I/O classes have a fixed maximum number of outstanding operations,
except for the async write class.
Asynchronous writes represent the data that is committed to stable storage
during the syncing stage for transaction groups.
Transaction groups enter the syncing state periodically,
so the number of queued async writes will quickly burst up
and then bleed down to zero.
Rather than servicing them as quickly as possible,
the I/O scheduler changes the maximum number of active async write operations
according to the amount of dirty data in the pool.
Since both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of simultaneous operations also stabilizes the
response time of operations from other queues, in particular synchronous ones.
In broad strokes, the I/O scheduler will issue more concurrent operations
from the async write queue as there is more dirty data in the pool.
.
.Ss Async Writes
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points:
.Bd -literal
       |              o---------| <-- \fBzfs_vdev_async_write_max_active\fP
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- \fBzfs_vdev_async_write_min_active\fP
      0|_______^______|_________|
       0%      |      |           100% of \fBzfs_dirty_data_max\fP
               |      |
               |      `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
               `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
.Ed
.Pp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum.
As that threshold is crossed, the number of concurrent operations issued
increases linearly to the maximum at the specified maximum percentage
of the dirty data allowed in the pool.
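.Pp
A minimal C sketch of that linear ramp follows; the function and its
parameters are purely illustrative stand-ins for the tunables named above,
not the in-kernel implementation:
.Bd -literal
/*
 * Illustrative only: derive the async write limit from the percentage of
 * zfs_dirty_data_max that is currently dirty.  The parameters stand in for
 * zfs_vdev_async_write_active_min_dirty_percent,
 * zfs_vdev_async_write_active_max_dirty_percent,
 * zfs_vdev_async_write_min_active, and zfs_vdev_async_write_max_active.
 */
static unsigned
async_write_limit(unsigned dirty_pct, unsigned min_dirty_pct,
    unsigned max_dirty_pct, unsigned min_active, unsigned max_active)
{
    if (dirty_pct <= min_dirty_pct)
        return (min_active);
    if (dirty_pct >= max_dirty_pct)
        return (max_active);

    /* Linear interpolation between the two corner points. */
    return (min_active + (max_active - min_active) *
        (dirty_pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct));
}
.Ed
For example, with hypothetical values of 1 and 10 for the two active limits
and thresholds of 30% and 60%, a pool that is 45% dirty would be allowed
roughly five concurrent async writes.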
.Pp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
.Sy zfs_vdev_async_write_active_min_dirty_percent
and
.Sy zfs_vdev_async_write_active_max_dirty_percent .
If it exceeds the maximum percentage,
this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle.
In this case, we must further throttle incoming writes,
as described in the next section.
.
.Sh ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.Pp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting.
This way the calculated delay time
is independent of the number of threads concurrently executing transactions.
.Pp
If we are the only waiter, wait relative to when the transaction started,
rather than the current time.
This credits the transaction for "time already served",
e.g. reading indirect blocks.
.Pp
The minimum time for a transaction to take is calculated as
.D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
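.Pp
A standalone C sketch of this calculation, assuming nanosecond units and
using invented names rather than the in-kernel code:
.Bd -literal
#include <stdint.h>

/*
 * Illustrative only.  "min" is the amount of dirty data at which delaying
 * begins (zfs_delay_min_dirty_percent of zfs_dirty_data_max), "max" is
 * zfs_dirty_data_max itself, and "scale" plays the role of zfs_delay_scale.
 */
static uint64_t
tx_delay_ns(uint64_t dirty, uint64_t min, uint64_t max, uint64_t scale)
{
    const uint64_t cap = 100000000ULL;      /* 100 ms ceiling */
    uint64_t delay;

    if (dirty <= min)
        return (0);                 /* below the threshold: no delay */
    if (dirty >= max)
        return (cap);               /* at the limit: maximum delay */

    /* min_time = scale * (dirty - min) / (max - dirty), capped at 100 ms */
    delay = scale * (dirty - min) / (max - dirty);
    return (delay < cap ? delay : cap);
}
.Ed
At the midpoint of the curve, where (dirty \- min) equals (max \- dirty),
the computed delay is exactly
.Sy zfs_delay_scale .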
.Pp
The delay has two degrees of freedom that can be adjusted via tunables.
The percentage of dirty data at which we start to delay is defined by
.Sy zfs_delay_min_dirty_percent .
This should typically be at or above
.Sy zfs_vdev_async_write_active_max_dirty_percent ,
so that we only start to delay after writing at full speed
has failed to keep up with the incoming write rate.
The scale of the curve is defined by
.Sy zfs_delay_scale .
Roughly speaking, this variable determines the amount of delay at the midpoint
of the curve.
.Bd -literal
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                           *  +
      |                                                           *  |
  4ms +                                                           *  +
      |                                                           *  |
  3ms +                                                          *   +
      |                                                          *   |
  2ms +                                              (midpoint)  *   +
      |                                              |         **    |
  1ms +                                              v      ***      +
      |             \fBzfs_delay_scale\fP ---------->      ********        |
    0 +-------------------------------------*********----------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->              100%
.Ed
.Pp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, it's effectively the inverse of IOPS.
Here, the midpoint of
.Em 500 us
translates to
.Em 2000 IOPS .
The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first three quarters of the curve yield relatively small differences
in the amount of delay.
.Pp
The effects can be easier to understand when the amount of delay is
represented on a logarithmic scale:
.Bd -literal
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           ** +
      |                                             (midpoint)  **   |
      +                                             |         **     +
  1ms +                                             v     ****       +
      +             \fBzfs_delay_scale\fP ---------->      *****           +
      |                                          ****                |
      +                                      ****                    +
100us +                                    **                        +
      +                                   *                          +
      |                                  *                           |
      +                                 *                            +
 10us +                                *                             +
      +                                                              +
      |                                                              |
      +                                                              +
      +--------------------------------------------------------------+
      0%                    <- \fBzfs_dirty_data_max\fP ->              100%
.Ed
.Pp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly.
The goal of a properly tuned system should be to keep the amount of dirty data
out of that range by first ensuring that the appropriate limits are set
for the I/O scheduler to reach optimal throughput on the back-end storage,
and then by changing the value of
.Sy zfs_delay_scale
to increase the steepness of the curve.