1 .\"
2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
4 .\" Copyright (c) 2019 Datto Inc.
5 .\" The contents of this file are subject to the terms of the Common Development
6 .\" and Distribution License (the "License"). You may not use this file except
7 .\" in compliance with the License. You can obtain a copy of the license at
8 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
9 .\"
10 .\" See the License for the specific language governing permissions and
11 .\" limitations under the License. When distributing Covered Code, include this
12 .\" CDDL HEADER in each file and include the License file at
13 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
14 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
15 .\" own identifying information:
16 .\" Portions Copyright [yyyy] [name of copyright owner]
17 .\"
18 .Dd June 1, 2021
19 .Dt ZFS 4
20 .Os
21 .
22 .Sh NAME
23 .Nm zfs
24 .Nd tuning of the ZFS kernel module
25 .
26 .Sh DESCRIPTION
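Module parameters can be set persistently at module load time, for example on
Linux via
.Xr modprobe.d 5 ,
and many of them may also be inspected and, where writable, changed at runtime
under
.Pa /sys/module/zfs/parameters .
As an illustration (the value shown is arbitrary):
.Bd -literal -compact
options zfs zfs_arc_max=4294967296
.Ed
.Pp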
The ZFS module supports these parameters:
.Bl -tag -width Ds
.It Sy dbuf_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
Maximum size in bytes of the dbuf cache.
The target size is the lesser of this value and
.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
of the target ARC size.
The behavior of the dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy ULONG_MAX Ns B Pq ulong
Maximum size in bytes of the metadata dbuf cache.
The target size is the lesser of this value and
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
The behavior of the metadata dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage over
.Sy dbuf_cache_max_bytes
when dbufs must be evicted directly.
.
.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage below
.Sy dbuf_cache_max_bytes
when the evict thread stops evicting dbufs.
.
.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq int
Set the size of the dbuf cache
.Pq Sy dbuf_cache_max_bytes
to a log2 fraction of the target ARC size.
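.Pp
For example, with the default shift of
.Sy 5
and a 16GB target ARC size, the dbuf cache is targeted at
.Em 16GB/2^5 No = Em 512MB .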
.
.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq int
Set the size of the dbuf metadata cache
.Pq Sy dbuf_metadata_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq int
Number of dnode slots allocated in a single operation, as a power of 2.
The default value minimizes lock contention for the bulk operation performed.
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128MB Pc Pq int
Limit the amount of data that can be prefetched with one call, in bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy ignore_hole_birth Pq int
Alias for
.Sy send_holes_without_birth_time .
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold the fill interval will be set as fast as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq ulong
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applies while the L2ARC is warming up.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq ulong
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq ulong
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
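.Pp
For example, with the defaults of
.Sy 2
and
.Sy l2arc_write_max Ns = Ns Sy 8MB ,
up to roughly 16MB worth of ARC buffers are scanned per feed interval.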
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq ulong
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq int
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq ulong
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64MB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8MB Pc Pq ulong
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8MB Pc Pq ulong
Max write bytes per interval.
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1GB Pc Pq ulong
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1GB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 524288 Ns B Po 512kB Pc Pq ulong
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data
to a top-level vdev before moving on to the next one.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16MB + 1B Pc Pq ulong
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq int
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64MB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this option's
worth of buckets have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16MB Pc Pq int
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16MB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16MB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1h Pc Pq ulong
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq int
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq int
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq int
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512MB Pc Pq int
Default limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy ASHIFT_MAX Po 16 Pc Pq ulong
Maximum ashift used when optimizing for logical -> physical sector size on new
top-level vdevs.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq ulong
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq int
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq int
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq int
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10min Pc Pq int
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq int
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq int
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq int
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
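.Pp
For example, at the default of
.Sy 5 ,
about 3.2% of a 10TB pool, roughly 320GB, is kept in reserve.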
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32kB Pc Pq int
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512B Pc Pq ulong
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512B Pc Pq ulong
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zfetch_array_rd_sz Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq ulong
If prefetching is enabled, disable prefetching for reads larger than this size.
.
.It Sy zfetch_max_distance Ns = Ns Sy 8388608 Ns B Po 8MB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64MB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 2 Pq uint
Min time before an active prefetch stream can be reclaimed.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the use of scatter/gather lists for ARC data buffers.
When disabled, all allocations are made linearly in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5kB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq ulong
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage based on
.Sy zfs_arc_dnode_limit_percent
of the ARC meta buffers may be used for dnodes.
.Pp
Also see
.Sy zfs_arc_meta_prune
which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds
.Sy zfs_arc_meta_limit
rather than in response to overall demand for non-metadata.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq ulong
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq ulong
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8kB Pc Pq int
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1MB of hash table per 1GB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq int
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2kB
that's evicted,
.Em 1kB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq int
Number of ARC headers to evict per sub-list before proceeding to another sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq int
If set to a nonzero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq ulong
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory - 1GB No and Sy 5/8 * all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64MB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
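.Pp
For example, on Linux the limit may be set to 4GB at runtime with:
.Bd -literal -compact
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
.Ed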
.
.It Sy zfs_arc_meta_adjust_restarts Ns = Ns Sy 4096 Pq ulong
The number of restart passes to make while scanning the ARC,
attempting to free buffers in order to stay below the
.Sy zfs_arc_meta_limit .
This value should not need to be tuned but is available to facilitate
performance analysis.
.
.It Sy zfs_arc_meta_limit Ns = Ns Sy 0 Ns B Pq ulong
The maximum allowed size in bytes that metadata buffers are allowed to
consume in the ARC.
When this limit is reached, metadata buffers will be reclaimed,
even if the overall
.Sy arc_c_max
has not been reached.
It defaults to
.Sy 0 ,
which indicates that a percentage based on
.Sy zfs_arc_meta_limit_percent
of the ARC may be used for metadata.
.Pp
This value may be changed dynamically, except that it must be set to an explicit value
.Pq cannot be set back to Sy 0 .
.
.It Sy zfs_arc_meta_limit_percent Ns = Ns Sy 75 Ns % Pq ulong
Percentage of ARC buffers that can be used for metadata.
.Pp
See also
.Sy zfs_arc_meta_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_meta_min Ns = Ns Sy 0 Ns B Pq ulong
The minimum allowed size in bytes that metadata buffers may consume in
the ARC.
.
.It Sy zfs_arc_meta_prune Ns = Ns Sy 10000 Pq int
The number of dentries and inodes to be scanned looking for entries
which can be dropped.
This may be required when the ARC reaches the
.Sy zfs_arc_meta_limit
because dentries and inodes can pin buffers in the ARC.
Increasing this value will cause the dentry and inode caches
to be pruned more aggressively.
Setting this value to
.Sy 0
will disable pruning the inode and dentry caches.
.
.It Sy zfs_arc_meta_strategy Ns = Ns Sy 1 Ns | Ns 0 Pq int
Define the strategy for ARC metadata buffer eviction (meta reclaim strategy):
.Bl -tag -compact -offset 4n -width "0 (META_ONLY)"
.It Sy 0 Pq META_ONLY
evict only the ARC metadata buffers
.It Sy 1 Pq BALANCED
additional data buffers may be evicted if required
to evict the required number of metadata buffers.
.El
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq ulong
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32MB No or Sy all_system_memory/32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq int
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq int
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq ulong
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128MB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq int
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by a threshold determined by this parameter.
The threshold is calculated as a fraction of
.Sy arc_c
using the formula
.Sy arc_c >> zfs_arc_overflow_shift .
.Pp
The default value of
.Sy 8
causes the ARC to be considered overflowing if it exceeds the target size by
.Em 1/256th Pq Em 0.3%
of the target size.
.Pp
When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
.
.It Sy zfs_arc_p_min_shift Ns = Ns Sy 0 Pq int
If nonzero, this will update
.Sy arc_p_min_shift Pq default Sy 4
with the new value.
.Sy arc_p_min_shift No is used as a shift of Sy arc_c
when calculating the minimum
.Sy arc_p No size.
.
.It Sy zfs_arc_p_dampener_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable
.Sy arc_p
adapt dampener, which reduces the maximum single adjustment to
.Sy arc_p .
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq int
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160MB No per allocation attempt with 4kB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100ms per allocation attempt,
even with a small average compressed block size of ~8kB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq ulong
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512kB No and Sy all_system_memory/64 .
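.Pp
For example, on a system with 16GB of memory,
the default target is
.Em 16GB/64 No = Em 256MB
of free memory.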
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq int
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq int
Minimum percent of obsolete bytes in vdev mapping required to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1GB Pc Pq ulong
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128kB Pc Pq ulong
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4MB Pc Pq int
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/Os only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1min Pc Pq int
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10min Pc Pq ulong
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5min Pc Pq ulong
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq int
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale * zfs_dirty_data_max Em must be smaller than Sy 2^64 .
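.Pp
For example, the default of
.Sy 500000
corresponds to a pool capable of roughly 2000 operations per second
.Pq 10^9/2000 .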
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/Os) to this many per
second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1GB Pc Pq ulong
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq ulong
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 262144 Po 256k Pc Pq ulong
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 262144
blocks (which means
.Em 32GB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128kB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/Os for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more often metaslabs are flushed,
destroying log blocks sooner as they become obsolete,
which leaves fewer blocks to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq ulong
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation that we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq ulong
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
metaslabs in the pool.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
blocks will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is available.
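.Pp
For example, at the default of
.Sy 20480
blocks and a 128kB
.Sy recordsize ,
files larger than roughly 2.5GB are deleted asynchronously.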
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_max_max Ns = Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
This parameter takes precedence over
.Sy zfs_dirty_data_max_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/4 .
.
.It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq int
Maximum allowable value of
.Sy zfs_dirty_data_max ,
expressed as a percentage of physical RAM.
This limit is only enforced at module load time, and will be ignored if
.Sy zfs_dirty_data_max
is later changed.
The parameter
.Sy zfs_dirty_data_max_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq int
Determines the dirty space limit, expressed as a percentage of all memory.
Once this limit is exceeded, new writes are halted until space frees up.
The parameter
.Sy zfs_dirty_data_max
takes precedence over this one.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Subject to
.Sy zfs_dirty_data_max_max .
.
.It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq int
Start syncing out a transaction group if there's at least this much dirty data
.Pq as a percentage of Sy zfs_dirty_data_max .
This should be less than
.Sy zfs_vdev_async_write_active_min_dirty_percent .
.
.It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint
Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
preallocated for a file in order to guarantee that later writes will not
run out of space.
Instead,
.Xr fallocate 2
space preallocation only checks that sufficient space is currently available
in the pool or the user's project quota allocation,
and then creates a sparse file of the requested size.
The requested space is multiplied by
.Sy zfs_fallocate_reserve_percent
to allow additional space for indirect blocks and other internal metadata.
Setting this to
.Sy 0
disables support for
.Xr fallocate 2
and causes it to return
.Sy EOPNOTSUPP .
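.Pp
For example, at the default of
.Sy 110 Ns % ,
an
.Xr fallocate 2
request for 1GB checks for roughly 1.1GB of available space.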
.
.It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string
Select a fletcher 4 implementation.
.Pp
Supported selectors are:
.Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw ,
.No and Sy aarch64_neon .
All except
.Sy fastest No and Sy scalar
require instruction set extensions to be available,
and will only appear if ZFS detects that they are present at runtime.
If multiple implementations of fletcher 4 are available, the
.Sy fastest
will be chosen using a micro benchmark.
Selecting
.Sy scalar
results in the original CPU-based calculation being used.
Selecting any option other than
.Sy fastest No or Sy scalar
results in vector instructions
from the respective CPU instruction set being used.
.
.It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable the processing of the free_bpobj object.
.
.It Sy zfs_async_block_max_blocks Ns = Ns Sy ULONG_MAX Po unlimited Pc Pq ulong
Maximum number of blocks freed in a single TXG.
.
.It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq ulong
Maximum number of dedup blocks freed in a single TXG.
.
.It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Pq ulong
If nonzero, override record size calculation for
.Nm zfs Cm send
estimates.
.
.It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq int
Maximum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq int
Minimum asynchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq int
When the pool has more than this much dirty data, use
.Sy zfs_vdev_async_write_max_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq int
When the pool has less than this much dirty data, use
.Sy zfs_vdev_async_write_min_active
to limit active async writes.
If the dirty data is between the minimum and maximum,
the active I/O limit is linearly interpolated.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 30 Pq int
Maximum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq int
Minimum asynchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.Pp
Lower values are associated with better latency on rotational media but poorer
resilver performance.
The default value of
.Sy 2
was chosen as a compromise.
A value of
.Sy 3
has been shown to improve resilver performance further at a cost of
further increasing latency.
.
.It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq int
Maximum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq int
Minimum initializing I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq int
The maximum number of I/O operations active to each device.
Ideally, this will be at least the sum of each queue's
.Sy max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq int
Maximum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq int
Minimum sequential resilver I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq int
Maximum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq int
Minimum removal I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq int
Maximum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq int
Minimum scrub I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq int
Maximum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq int
Minimum synchronous read I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq int
Maximum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq int
Minimum synchronous write I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq int
Maximum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq int
Minimum trim/discard I/O operations active to each device.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq int
For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
the number of concurrently-active I/O operations is limited to
.Sy zfs_*_min_active ,
unless the vdev is "idle".
When there are no interactive I/O operations active (synchronous or otherwise),
and
.Sy zfs_vdev_nia_delay
operations have completed since the last interactive operation,
then the vdev is considered to be "idle",
and the number of concurrently-active non-interactive operations is increased to
.Sy zfs_*_max_active .
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq int
Some HDDs tend to prioritize sequential I/O so strongly that concurrent
random I/O latency reaches several seconds.
On some HDDs this happens even if sequential I/O operations
are submitted one at a time, and so setting
.Sy zfs_*_max_active Ns = Sy 1
does not help.
To prevent non-interactive I/O, like scrub,
from monopolizing the device, no more than
.Sy zfs_vdev_nia_credit No operations can be sent
while there are outstanding incomplete interactive operations.
This enforced wait ensures the HDD services the interactive I/O
within a reasonable amount of time.
.No See Sx ZFS I/O SCHEDULER .
.
.It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq int
Maximum number of queued allocations per top-level vdev expressed as
a percentage of
.Sy zfs_vdev_async_write_max_active ,
which allows the system to detect devices that are more capable
of handling allocations and to allocate more blocks to those devices.
This allows for dynamic allocation distribution when devices are imbalanced,
as fuller devices will tend to be slower than empty devices.
.Pp
Also see
.Sy zio_dva_throttle_enabled .
.
.It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int
Time before expiring
.Pa .zfs/snapshot .
.
.It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int
Allow the creation, removal, or renaming of entries in the
.Sy .zfs/snapshot
directory to cause the creation, destruction, or renaming of snapshots.
When enabled, this functionality works both locally and over NFS exports
which have the
.Em no_root_squash
option set.
.
.It Sy zfs_flags Ns = Ns Sy 0 Pq int
Set additional debugging flags.
The following flags may be bitwise-ored together:
.TS
box;
lbz r l l .
	Value	Symbolic Name	Description
_
	1	ZFS_DEBUG_DPRINTF	Enable dprintf entries in the debug log.
*	2	ZFS_DEBUG_DBUF_VERIFY	Enable extra dbuf verifications.
*	4	ZFS_DEBUG_DNODE_VERIFY	Enable extra dnode verifications.
	8	ZFS_DEBUG_SNAPNAMES	Enable snapshot name verification.
	16	ZFS_DEBUG_MODIFY	Check for illegally modified ARC buffers.
	64	ZFS_DEBUG_ZIO_FREE	Enable verification of block frees.
	128	ZFS_DEBUG_HISTOGRAM_VERIFY	Enable extra spacemap histogram verifications.
	256	ZFS_DEBUG_METASLAB_VERIFY	Verify space accounting on disk matches in-memory \fBrange_trees\fP.
	512	ZFS_DEBUG_SET_ERROR	Enable \fBSET_ERROR\fP and dprintf entries in the debug log.
	1024	ZFS_DEBUG_INDIRECT_REMAP	Verify split blocks created by device removal.
	2048	ZFS_DEBUG_TRIM	Verify TRIM ranges are always within the allocatable range tree.
	4096	ZFS_DEBUG_LOG_SPACEMAP	Verify that the log summary is consistent with the spacemap log
			and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing.
.TE
.Sy \& * No Requires debug build.
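.Pp
For example, setting
.Sy zfs_flags Ns = Ns Sy 513 Pq 1 + 512
enables both
.Sy ZFS_DEBUG_DPRINTF No and Sy ZFS_DEBUG_SET_ERROR
entries in the debug log.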
.
.It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int
If destroy encounters an
.Sy EIO
while reading metadata (e.g. indirect blocks),
space referenced by the missing metadata can not be freed.
Normally this causes the background destroy to become "stalled",
as it is unable to make forward progress.
While in this stalled state, all remaining space to free
from the error-encountering filesystem is "temporarily leaked".
Set this flag to cause it to ignore the
.Sy EIO ,
permanently leak the space from indirect blocks that can not be read,
and continue to free everything else that it can.
.Pp
The default "stalling" behavior is useful if the storage partially
fails (i.e. some but not all I/O operations fail), and then later recovers.
In this case, we will be able to continue pool operations while it is
partially failed, and when it recovers, we can continue to free the
space, with no leaks.
Note, however, that this case is actually fairly rare.
.Pp
Typically pools either
.Bl -enum -compact -offset 4n -width "1."
.It
fail completely (but perhaps temporarily,
e.g. due to a top-level vdev going offline), or
.It
have localized, permanent errors (e.g. disk returns the wrong data
due to bit flip or firmware bug).
.El
In the former case, this setting does not matter because the
pool will be suspended and the sync thread will not be able to make
forward progress regardless.
In the latter, because the error is permanent, the best we can do
is leak the minimum amount of space,
which is what setting this flag will do.
It is therefore reasonable for this flag to normally be set,
but we chose the more conservative approach of not setting it,
so that there is no possibility of
leaking space in the "partial temporary" failure case.
.
.It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq int
During a
.Nm zfs Cm destroy
operation using the
.Sy async_destroy
feature,
a minimum of this much time will be spent working on freeing blocks per TXG.
.
.It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq int
Similar to
.Sy zfs_free_min_time_ms ,
but for cleanup of old indirection records for removed vdevs.
.
.It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32kB Pc Pq long
Largest data block to write to the ZIL.
Larger blocks will be treated as if the dataset being written to had the
.Sy logbias Ns = Ns Sy throughput
property set.
.
.It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq ulong
Pattern written to vdev free space by
.Xr zpool-initialize 8 .
.
.It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq ulong
Size of writes used by
.Xr zpool-initialize 8 .
This option is used by the test suite.
.
.It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq ulong
The threshold size (in block pointers) at which we create a new sub-livelist.
Larger sublists are more costly from a memory perspective but the fewer
sublists there are, the lower the cost of insertion.
.
.It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int
If the amount of shared space between a snapshot and its clone drops below
this threshold, the clone turns off the livelist and reverts to the old
deletion method.
This is in place because livelists no longer give us a benefit
1385 once a clone has been overwritten enough.
1386 .
1387 .It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int
1388 Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1389 it is being condensed.
1390 This option is used by the test suite to track race conditions.
1391 .
1392 .It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int
1393 Incremented each time livelist condensing is canceled while in
1394 .Fn spa_livelist_condense_sync .
1395 This option is used by the test suite to track race conditions.
1396 .
1397 .It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1398 When set, the livelist condense process pauses indefinitely before
1399 executing the synctask -
1400 .Fn spa_livelist_condense_sync .
1401 This option is used by the test suite to trigger race conditions.
1402 .
1403 .It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int
1404 Incremented each time livelist condensing is canceled while in
1405 .Fn spa_livelist_condense_cb .
1406 This option is used by the test suite to track race conditions.
1407 .
1408 .It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int
1409 When set, the livelist condense process pauses indefinitely before
1410 executing the open context condensing work in
1411 .Fn spa_livelist_condense_cb .
1412 This option is used by the test suite to trigger race conditions.
1413 .
1414 .It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq ulong
1415 The maximum execution time limit that can be set for a ZFS channel program,
1416 specified as a number of Lua instructions.
1417 .
1418 .It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100MB Pc Pq ulong
1419 The maximum memory limit that can be set for a ZFS channel program, specified
1420 in bytes.
1421 .
1422 .It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int
1423 The maximum depth of nested datasets.
1424 This value can be tuned temporarily to
1425 fix existing datasets that exceed the predefined limit.
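.Pp
For example, to import a pool whose datasets are nested more deeply than the
default allows, the limit might be raised temporarily at module load time
(a sketch; choose a value that covers the existing hierarchy):
.Bd -literal -compact
# modprobe zfs zfs_max_dataset_nesting=64
.Ed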
1426 .
1427 .It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq ulong
1428 The number of past TXGs that the flushing algorithm of the log spacemap
1429 feature uses to estimate incoming log blocks.
1430 .
1431 .It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq ulong
1432 Maximum number of rows allowed in the summary of the spacemap log.
1433 .
1434 .It Sy zfs_max_recordsize Ns = Ns Sy 1048576 Po 1MB Pc Pq int
1435 We currently support block sizes from
1436 .Em 512B No to Em 16MB .
1437 The benefits of larger blocks, and thus larger I/O,
1438 need to be weighed against the cost of COWing a giant block to modify one byte.
1439 Additionally, very large blocks can have an impact on I/O latency,
1440 and also potentially on the memory allocator.
1441 Therefore, we do not allow the recordsize to be set larger than this tunable.
1442 Larger blocks can be created by changing it,
1443 and pools with larger blocks can always be imported and used,
1444 regardless of this setting.
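.Pp
For example, to experiment with
.Em 2MB
records, one might raise this tunable and then set the property on a dataset
(a sketch;
.Ar pool/dataset
is a placeholder, and block sizes above
.Em 128kB
also require the
.Sy large_blocks
pool feature):
.Bd -literal -compact
# echo 2097152 > /sys/module/zfs/parameters/zfs_max_recordsize
# zfs set recordsize=2M pool/dataset
.Ed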
1445 .
1446 .It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int
1447 Allow datasets received with redacted send/receive to be mounted.
1448 Normally disabled because these datasets may be missing key data.
1449 .
1450 .It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq ulong
1451 Minimum number of metaslabs to flush per dirty TXG.
1452 .
1453 .It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq int
1454 Allow metaslabs to keep their active state as long as their fragmentation
1455 percentage is no more than this value.
1456 An active metaslab that exceeds this threshold
will no longer keep its active status, allowing better metaslabs to be selected.
1458 .
1459 .It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq int
1460 Metaslab groups are considered eligible for allocations if their
1461 fragmentation metric (measured as a percentage) is less than or equal to
1462 this value.
1463 If a metaslab group exceeds this threshold then it will be
1464 skipped unless all metaslab groups within the metaslab class have also
1465 crossed this threshold.
1466 .
1467 .It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq int
1468 Defines a threshold at which metaslab groups should be eligible for allocations.
1469 The value is expressed as a percentage of free space
1470 beyond which a metaslab group is always eligible for allocations.
1471 If a metaslab group's free space is less than or equal to the
1472 threshold, the allocator will avoid allocating to that group
1473 unless all groups in the pool have reached the threshold.
1474 Once all groups have reached the threshold, all groups are allowed to accept
1475 allocations.
1476 The default value of
1477 .Sy 0
1478 disables the feature and causes all metaslab groups to be eligible for allocations.
1479 .Pp
1480 This parameter allows one to deal with pools having heavily imbalanced
1481 vdevs such as would be the case when a new vdev has been added.
1482 Setting the threshold to a non-zero percentage will stop allocations
1483 from being made to vdevs that aren't filled to the specified percentage
1484 and allow lesser filled vdevs to acquire more allocations than they
1485 otherwise would under the old
1486 .Sy zfs_mg_alloc_failures
1487 facility.
1488 .
1489 .It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1490 If enabled, ZFS will place DDT data into the special allocation class.
1491 .
1492 .It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int
1493 If enabled, ZFS will place user data indirect blocks
1494 into the special allocation class.
1495 .
1496 .It Sy zfs_multihost_history Ns = Ns Sy 0 Pq int
1497 Historical statistics for this many latest multihost updates will be available in
1498 .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost .
1499 .
1500 .It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq ulong
1501 Used to control the frequency of multihost writes which are performed when the
1502 .Sy multihost
1503 pool property is on.
1504 This is one of the factors used to determine the
1505 length of the activity check during import.
1506 .Pp
1507 The multihost write period is
1508 .Sy zfs_multihost_interval / leaf-vdevs .
1509 On average a multihost write will be issued for each leaf vdev
1510 every
1511 .Sy zfs_multihost_interval
1512 milliseconds.
1513 In practice, the observed period can vary with the I/O load
1514 and this observed value is the delay which is stored in the uberblock.
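.Pp
For example, with the default
.Sy zfs_multihost_interval
of
.Em 1s
and a pool with four leaf vdevs,
a multihost write is issued on average every
.Em 250ms ,
and each individual leaf vdev is written once per second.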
1515 .
1516 .It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint
1517 Used to control the duration of the activity test on import.
1518 Smaller values of
1519 .Sy zfs_multihost_import_intervals
1520 will reduce the import time but increase
1521 the risk of failing to detect an active pool.
1522 The total activity check time is never allowed to drop below one second.
1523 .Pp
1524 On import the activity check waits a minimum amount of time determined by
1525 .Sy zfs_multihost_interval * zfs_multihost_import_intervals ,
1526 or the same product computed on the host which last had the pool imported,
1527 whichever is greater.
1528 The activity check time may be further extended if the value of MMP
1529 delay found in the best uberblock indicates actual multihost updates happened
1530 at longer intervals than
1531 .Sy zfs_multihost_interval .
1532 A minimum of
1533 .Em 100ms
1534 is enforced.
1535 .Pp
1536 .Sy 0 No is equivalent to Sy 1 .
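.Pp
For example, with the default values the activity check on import waits at least
.Em 20
intervals of
.Em 1s ,
i.e.
.Em 20s .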
1537 .
1538 .It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint
1539 Controls the behavior of the pool when multihost write failures or delays are
1540 detected.
1541 .Pp
1542 When
1543 .Sy 0 ,
1544 multihost write failures or delays are ignored.
The failures will still be reported to the ZED which, depending on
its configuration, may take action such as suspending the pool or offlining a
1547 device.
1548 .Pp
1549 Otherwise, the pool will be suspended if
1550 .Sy zfs_multihost_fail_intervals * zfs_multihost_interval
1551 milliseconds pass without a successful MMP write.
1552 This guarantees the activity test will see MMP writes if the pool is imported.
1553 .Sy 1 No is equivalent to Sy 2 ;
1554 this is necessary to prevent the pool from being suspended
1555 due to normal, small I/O latency variations.
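.Pp
With the default values the pool will thus be suspended if
.Em 10
intervals of
.Em 1s ,
i.e.
.Em 10s ,
pass without a successful MMP write.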
1556 .
1557 .It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int
1558 Set to disable scrub I/O.
1559 This results in scrubs not actually scrubbing data and
1560 simply doing a metadata crawl of the pool instead.
1561 .
1562 .It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
1563 Set to disable block prefetching for scrubs.
1564 .
1565 .It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
1566 Disable cache flush operations on disks when writing.
1567 Setting this will cause pool corruption on power loss
1568 if a volatile out-of-order write cache is enabled.
1569 .
1570 .It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1571 Allow no-operation writes.
1572 The occurrence of nopwrites will further depend on other pool properties
1573 .Pq i.a. the checksumming and compression algorithms .
1574 .
.It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 0 Ns | Ns 1 Pq int
1576 Enable forcing TXG sync to find holes.
When enabled, this forces ZFS to act like prior versions when
1578 .Sy SEEK_HOLE No or Sy SEEK_DATA
1579 flags are used, which, when a dnode is dirty,
1580 causes TXGs to be synced so that this data can be found.
1581 .
1582 .It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50MB Pc Pq int
1583 The number of bytes which should be prefetched during a pool traversal, like
1584 .Nm zfs Cm send
1585 or other data crawling operations.
1586 .
1587 .It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq int
The number of blocks pointed to by an indirect (non-L0) block which should be
1589 prefetched during a pool traversal, like
1590 .Nm zfs Cm send
1591 or other data crawling operations.
1592 .
1593 .It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 5 Ns % Pq ulong
1594 Control percentage of dirtied indirect blocks from frees allowed into one TXG.
1595 After this threshold is crossed, additional frees will wait until the next TXG.
1596 .Sy 0 No disables this throttle.
1597 .
1598 .It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1599 Disable predictive prefetch.
Note that it leaves "prescient" prefetch (e.g.\& for
1601 .Nm zfs Cm send )
1602 intact.
1603 Unlike predictive prefetch, prescient prefetch never issues I/O
1604 that ends up not being needed, so it can't hurt performance.
1605 .
1606 .It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1607 Disable QAT hardware acceleration for SHA256 checksums.
1608 May be unset after the ZFS modules have been loaded to initialize the QAT
1609 hardware as long as support is compiled in and the QAT driver is present.
1610 .
1611 .It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1612 Disable QAT hardware acceleration for gzip compression.
1613 May be unset after the ZFS modules have been loaded to initialize the QAT
1614 hardware as long as support is compiled in and the QAT driver is present.
1615 .
1616 .It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
1617 Disable QAT hardware acceleration for AES-GCM encryption.
1618 May be unset after the ZFS modules have been loaded to initialize the QAT
1619 hardware as long as support is compiled in and the QAT driver is present.
1620 .
1621 .It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq long
1622 Bytes to read per chunk.
1623 .
1624 .It Sy zfs_read_history Ns = Ns Sy 0 Pq int
1625 Historical statistics for this many latest reads will be available in
1626 .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads .
1627 .
1628 .It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int
Include cache hits in read history.
1630 .
1631 .It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq ulong
1632 Maximum read segment size to issue when sequentially resilvering a
1633 top-level vdev.
1634 .
1635 .It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
1636 Automatically start a pool scrub when the last active sequential resilver
1637 completes in order to verify the checksums of all blocks which have been
1638 resilvered.
1639 This is enabled by default and strongly recommended.
1640 .
1641 .It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 33554432 Ns B Po 32MB Pc Pq ulong
1642 Maximum amount of I/O that can be concurrently issued for a sequential
1643 resilver per leaf device, given in bytes.
1644 .
1645 .It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int
1646 If an indirect split block contains more than this many possible unique
1647 combinations when being reconstructed, consider it too computationally
1648 expensive to check them all.
1649 Instead, try at most this many randomly selected
1650 combinations each time the block is accessed.
1651 This allows all segment copies to participate fairly
1652 in the reconstruction when all combinations
1653 cannot be checked and prevents repeated use of one bad copy.
1654 .
1655 .It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int
1656 Set to attempt to recover from fatal errors.
1657 This should only be used as a last resort,
1658 as it typically results in leaked space, or worse.
1659 .
1660 .It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1661 Ignore hard IO errors during device removal.
1662 When set, if a device encounters a hard IO error during the removal process
1663 the removal will not be cancelled.
1664 This can result in a normally recoverable block becoming permanently damaged
1665 and is hence not recommended.
1666 This should only be used as a last resort when the
1667 pool cannot be returned to a healthy state prior to removing the device.
1668 .
1669 .It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1670 This is used by the test suite so that it can ensure that certain actions
1671 happen while in the middle of a removal.
1672 .
1673 .It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16MB Pc Pq int
1674 The largest contiguous segment that we will attempt to allocate when removing
1675 a device.
1676 If there is a performance problem with attempting to allocate large blocks,
1677 consider decreasing this.
1678 The default value is also the maximum.
1679 .
1680 .It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int
1681 Ignore the
1682 .Sy resilver_defer
1683 feature, causing an operation that would start a resilver to
1684 immediately restart the one in progress.
1685 .
1686 .It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3s Pc Pq int
1687 Resilvers are processed by the sync thread.
1688 While resilvering, it will spend at least this much time
1689 working on a resilver between TXG flushes.
1690 .
1691 .It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int
1692 If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
1693 even if there were unrepairable errors.
1694 Intended to be used during pool repair or recovery to
1695 stop resilvering when the pool is next imported.
1696 .
1697 .It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq int
1698 Scrubs are processed by the sync thread.
1699 While scrubbing, it will spend at least this much time
1700 working on a scrub between TXG flushes.
1701 .
1702 .It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2h Pc Pq int
1703 To preserve progress across reboots, the sequential scan algorithm periodically
1704 needs to stop metadata scanning and issue all the verification I/O to disk.
1705 The frequency of this flushing is determined by this tunable.
1706 .
1707 .It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq int
1708 This tunable affects how scrub and resilver I/O segments are ordered.
1709 A higher number indicates that we care more about how filled in a segment is,
1710 while a lower number indicates we care more about the size of the extent without
1711 considering the gaps within a segment.
1712 This value is only tunable upon module insertion.
Changing the value afterwards will have no effect on scrub or resilver performance.
1714 .
1715 .It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq int
1716 Determines the order that data will be verified while scrubbing or resilvering:
1717 .Bl -tag -compact -offset 4n -width "a"
1718 .It Sy 1
1719 Data will be verified as sequentially as possible, given the
1720 amount of memory reserved for scrubbing
1721 .Pq see Sy zfs_scan_mem_lim_fact .
1722 This may improve scrub performance if the pool's data is very fragmented.
1723 .It Sy 2
1724 The largest mostly-contiguous chunk of found data will be verified first.
1725 By deferring scrubbing of small segments, we may later find adjacent data
1726 to coalesce and increase the segment size.
1727 .It Sy 0
1728 .No Use strategy Sy 1 No during normal verification
1729 .No and strategy Sy 2 No while taking a checkpoint.
1730 .El
1731 .
1732 .It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int
1733 If unset, indicates that scrubs and resilvers will gather metadata in
1734 memory before issuing sequential I/O.
1735 Otherwise indicates that the legacy algorithm will be used,
1736 where I/O is initiated as soon as it is discovered.
1737 Unsetting will not affect scrubs or resilvers that are already in progress.
1738 .
1739 .It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2MB Pc Pq int
1740 Sets the largest gap in bytes between scrub/resilver I/O operations
1741 that will still be considered sequential for sorting purposes.
1742 Changing this value will not
1743 affect scrubs or resilvers that are already in progress.
1744 .
1745 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq int
1746 Maximum fraction of RAM used for I/O sorting by sequential scan algorithm.
1747 This tunable determines the hard limit for I/O sorting memory usage.
1748 When the hard limit is reached we stop scanning metadata and start issuing
1749 data verification I/O.
1750 This is done until we get below the soft limit.
1751 .
1752 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq int
The fraction of the hard limit used to determine the soft limit for I/O sorting
1754 by the sequential scan algorithm.
1755 When we cross this limit from below no action is taken.
1756 When we cross this limit from above it is because we are issuing verification I/O.
1757 In this case (unless the metadata scan is done) we stop issuing verification I/O
1758 and start scanning metadata again until we get to the hard limit.
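.Pp
For example, on a system with
.Em 32GB
of RAM and the default factors of
.Sy 20 ,
the hard limit for sorting memory is one twentieth of RAM, about
.Em 1.6GB ,
and the soft limit is one twentieth of the hard limit, about
.Em 82MB .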
1759 .
1760 .It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1761 Enforce tight memory limits on pool scans when a sequential scan is in progress.
1762 When disabled, the memory limit may be exceeded by fast disks.
1763 .
1764 .It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int
1765 Freezes a scrub/resilver in progress without actually pausing it.
1766 Intended for testing/debugging.
1767 .
1768 .It Sy zfs_scan_vdev_limit Ns = Ns Sy 4194304 Ns B Po 4MB Pc Pq int
Maximum amount of data that can be concurrently issued for scrubs and
1770 resilvers per leaf device, given in bytes.
1771 .
1772 .It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int
1773 Allow sending of corrupt data (ignore read/checksum errors when sending).
1774 .
1775 .It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int
1776 Include unmodified spill blocks in the send stream.
1777 Under certain circumstances, previous versions of ZFS could incorrectly
1778 remove the spill block from an existing object.
1779 Including unmodified copies of the spill blocks creates a backwards-compatible
1780 stream which will recreate a spill block if it was incorrectly removed.
1781 .
1782 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^-1 Pq int
1783 The fill fraction of the
1784 .Nm zfs Cm send
1785 internal queues.
1786 The fill fraction controls the timing with which internal threads are woken up.
1787 .
1788 .It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq int
1789 The maximum number of bytes allowed in
1790 .Nm zfs Cm send Ns 's
1791 internal queues.
1792 .
1793 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^-1 Pq int
1794 The fill fraction of the
1795 .Nm zfs Cm send
1796 prefetch queue.
1797 The fill fraction controls the timing with which internal threads are woken up.
1798 .
1799 .It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16MB Pc Pq int
1800 The maximum number of bytes allowed that will be prefetched by
1801 .Nm zfs Cm send .
1802 This value must be at least twice the maximum block size in use.
1803 .
1804 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^-1 Pq int
1805 The fill fraction of the
1806 .Nm zfs Cm receive
1807 queue.
1808 The fill fraction controls the timing with which internal threads are woken up.
1809 .
1810 .It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16MB Pc Pq int
1811 The maximum number of bytes allowed in the
1812 .Nm zfs Cm receive
1813 queue.
1814 This value must be at least twice the maximum block size in use.
1815 .
1816 .It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq int
1817 The maximum amount of data, in bytes, that
1818 .Nm zfs Cm receive
1819 will write in one DMU transaction.
1820 This is the uncompressed size, even when receiving a compressed send stream.
1821 This setting will not reduce the write size below a single block.
1822 Capped at a maximum of
1823 .Sy 32MB .
1824 .
1825 .It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq ulong
1826 Setting this variable overrides the default logic for estimating block
1827 sizes when doing a
1828 .Nm zfs Cm send .
1829 The default heuristic is that the average block size
1830 will be the current recordsize.
1831 Override this value if most data in your dataset is not of that size
1832 and you require accurate zfs send size estimates.
1833 .
1834 .It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq int
1835 Flushing of data to disk is done in passes.
1836 Defer frees starting in this pass.
1837 .
1838 .It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16MB Pc Pq int
1839 Maximum memory used for prefetching a checkpoint's space map on each
1840 vdev while discarding the checkpoint.
1841 .
1842 .It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq int
1843 Only allow small data blocks to be allocated on the special and dedup vdev
1844 types when the available free space percentage on these vdevs exceeds this value.
1845 This ensures reserved space is available for pool metadata as the
1846 special vdevs approach capacity.
1847 .
1848 .It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq int
1849 Starting in this sync pass, disable compression (including of metadata).
1850 With the default setting, in practice, we don't have this many sync passes,
1851 so this has no effect.
1852 .Pp
1853 The original intent was that disabling compression would help the sync passes
1854 to converge.
1855 However, in practice, disabling compression increases
1856 the average number of sync passes; because when we turn compression off,
1857 many blocks' size will change, and thus we have to re-allocate
1858 (not overwrite) them.
1859 It also increases the number of
1860 .Em 128kB
1861 allocations (e.g. for indirect blocks and spacemaps)
1862 because these will not be compressed.
1863 The
1864 .Em 128kB
1865 allocations are especially detrimental to performance
1866 on highly fragmented systems, which may have very few free segments of this size,
1867 and may need to load new metaslabs to satisfy these allocations.
1868 .
1869 .It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq int
1870 Rewrite new block pointers starting in this pass.
1871 .
1872 .It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int
1873 This controls the number of threads used by
1874 .Sy dp_sync_taskq .
1875 The default value of
1876 .Sy 75%
1877 will create a maximum of one thread per CPU.
1878 .
1879 .It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128MB Pc Pq uint
1880 Maximum size of TRIM command.
1881 Larger ranges will be split into chunks no larger than this value before issuing.
1882 .
1883 .It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32kB Pc Pq uint
1884 Minimum size of TRIM commands.
1885 TRIM ranges smaller than this will be skipped,
1886 unless they're part of a larger range which was chunked.
1887 This is done because it's common for these small TRIMs
1888 to negatively impact overall performance.
1889 .
1890 .It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint
1891 Skip uninitialized metaslabs during the TRIM process.
1892 This option is useful for pools constructed from large thinly-provisioned devices
1893 where TRIM operations are slow.
1894 As a pool ages, an increasing fraction of the pool's metaslabs
1895 will be initialized, progressively degrading the usefulness of this option.
1896 This setting is stored when starting a manual TRIM and will
1897 persist for the duration of the requested TRIM.
1898 .
1899 .It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint
1900 Maximum number of queued TRIMs outstanding per leaf vdev.
1901 The number of concurrent TRIM commands issued to the device is controlled by
1902 .Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active .
1903 .
1904 .It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint
1905 The number of transaction groups' worth of frees which should be aggregated
1906 before TRIM operations are issued to the device.
1907 This setting represents a trade-off between issuing larger,
1908 more efficient TRIM operations and the delay
1909 before the recently trimmed space is available for use by the device.
1910 .Pp
1911 Increasing this value will allow frees to be aggregated for a longer time.
This will result in larger TRIM operations and potentially increased memory usage.
1913 Decreasing this value will have the opposite effect.
1914 The default of
1915 .Sy 32
1916 was determined to be a reasonable compromise.
1917 .
1918 .It Sy zfs_txg_history Ns = Ns Sy 0 Pq int
1919 Historical statistics for this many latest TXGs will be available in
1920 .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs .
1921 .
1922 .It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq int
1923 Flush dirty data to disk at least every this many seconds (maximum TXG duration).
1924 .
1925 .It Sy zfs_vdev_aggregate_trim Ns = Ns Sy 0 Ns | Ns 1 Pq int
1926 Allow TRIM I/Os to be aggregated.
1927 This is normally not helpful because the extents to be trimmed
will already have been aggregated by the metaslab.
1929 This option is provided for debugging and performance analysis.
1930 .
1931 .It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq int
1932 Max vdev I/O aggregation size.
1933 .
1934 .It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128kB Pc Pq int
1935 Max vdev I/O aggregation size for non-rotating media.
1936 .
1937 .It Sy zfs_vdev_cache_bshift Ns = Ns Sy 16 Po 64kB Pc Pq int
1938 Shift size to inflate reads to.
1939 .
1940 .It Sy zfs_vdev_cache_max Ns = Ns Sy 16384 Ns B Po 16kB Pc Pq int
1941 Inflate reads smaller than this value to meet the
1942 .Sy zfs_vdev_cache_bshift
1943 size
1944 .Pq default Sy 64kB .
1945 .
1946 .It Sy zfs_vdev_cache_size Ns = Ns Sy 0 Pq int
1947 Total size of the per-disk cache in bytes.
1948 .Pp
1949 Currently this feature is disabled, as it has been found to not be helpful
1950 for performance and in some cases harmful.
1951 .
1952 .It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O operation
immediately follows its predecessor on rotational vdevs.
1957 .
1958 .It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int
1959 A number by which the balancing algorithm increments the load calculation for
1960 the purpose of selecting the least busy mirror member when an I/O operation
1961 lacks locality as defined by
1962 .Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this offset that do not immediately follow the previous
operation are incremented by half.
1965 .
1966 .It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1MB Pc Pq int
1967 The maximum distance for the last queued I/O operation in which
1968 the balancing algorithm considers an operation to have locality.
1969 .No See Sx ZFS I/O SCHEDULER .
1970 .
1971 .It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int
1972 A number by which the balancing algorithm increments the load calculation for
1973 the purpose of selecting the least busy mirror member on non-rotational vdevs
1974 when I/O operations do not immediately follow one another.
1975 .
1976 .It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int
1977 A number by which the balancing algorithm increments the load calculation for
1978 the purpose of selecting the least busy mirror member when an I/O operation lacks
locality as defined by
.Sy zfs_vdev_mirror_rotating_seek_offset .
Operations within this offset that do not immediately follow the previous
operation are incremented by half.
1983 .
1984 .It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32kB Pc Pq int
1985 Aggregate read I/O operations if the on-disk gap between them is within this
1986 threshold.
1987 .
1988 .It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4kB Pc Pq int
1989 Aggregate write I/O operations if the on-disk gap between them is within this
1990 threshold.
1991 .
1992 .It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string
1993 Select the raidz parity implementation to use.
1994 .Pp
1995 Variants that don't depend on CPU-specific features
1996 may be selected on module load, as they are supported on all systems.
1997 The remaining options may only be set after the module is loaded,
1998 as they are available only if the implementations are compiled in
1999 and supported on the running system.
2000 .Pp
2001 Once the module is loaded,
2002 .Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl
2003 will show the available options,
2004 with the currently selected one enclosed in square brackets.
2005 .Pp
2006 .TS
2007 lb l l .
2008 fastest selected by built-in benchmark
2009 original original implementation
2010 scalar scalar implementation
2011 sse2 SSE2 instruction set 64-bit x86
2012 ssse3 SSSE3 instruction set 64-bit x86
2013 avx2 AVX2 instruction set 64-bit x86
2014 avx512f AVX512F instruction set 64-bit x86
2015 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2016 aarch64_neon NEON Aarch64/64-bit ARMv8
2017 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2018 powerpc_altivec Altivec PowerPC
2019 .TE
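.Pp
For example, the selection may be inspected and changed at runtime
(illustrative output; the variants listed depend on hardware and build):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
[fastest] original scalar sse2 ssse3 avx2
# echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed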
2020 .
2021 .It Sy zfs_vdev_scheduler Pq charp
2022 .Sy DEPRECATED .
Prints a warning to the kernel log for compatibility.
2024 .
2025 .It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq int
2026 Max event queue length.
2027 Events in the queue can be viewed with
2028 .Xr zpool-events 8 .
2029 .
2030 .It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int
2031 Maximum recent zevent records to retain for duplicate checking.
2032 Setting this to
2033 .Sy 0
2034 disables duplicate detection.
2035 .
2036 .It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15min Pc Pq int
2037 Lifespan for a recent ereport that was retained for duplicate checking.
2038 .
2039 .It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int
2040 The maximum number of taskq entries that are allowed to be cached.
When this limit is exceeded, transaction records (itxs)
2042 will be cleaned synchronously.
2043 .
2044 .It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int
2045 The number of taskq entries that are pre-populated when the taskq is first
2046 created and are immediately available for use.
2047 .
2048 .It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int
2049 This controls the number of threads used by
2050 .Sy dp_zil_clean_taskq .
2051 The default value of
2052 .Sy 100%
will create a maximum of one thread per CPU.
2054 .
2055 .It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128kB Pc Pq int
2056 This sets the maximum block size used by the ZIL.
2057 On very fragmented pools, lowering this
2058 .Pq typically to Sy 36kB
2059 can improve performance.
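.Pp
For example, lowering it to the suggested
.Em 36kB
on a running system (a sketch; the value is given in bytes):
.Bd -literal -compact
# echo 36864 > /sys/module/zfs/parameters/zil_maxblocksize
.Ed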
2060 .
2061 .It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int
2062 Disable the cache flush commands that are normally sent to disk by
2063 the ZIL after an LWB write has completed.
2064 Setting this will cause ZIL corruption on power loss
2065 if a volatile out-of-order write cache is enabled.
2066 .
2067 .It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int
2068 Disable intent logging replay.
2069 Can be disabled for recovery from corrupted ZIL.
2070 .
2071 .It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768kB Pc Pq ulong
2072 Limit SLOG write size per commit executed with synchronous priority.
2073 Any writes above that will be executed with lower (asynchronous) priority
2074 to limit potential SLOG device abuse by single active ZIL writer.
2075 .
2076 .It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq int
2077 Usually, one metaslab from each normal-class vdev is dedicated for use by
2078 the ZIL to log synchronous writes.
2079 However, if there are fewer than
2080 .Sy zfs_embedded_slog_min_ms
2081 metaslabs in the vdev, this functionality is disabled.
2082 This ensures that we don't set aside an unreasonable amount of space for the ZIL.
2083 .
2084 .It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int
2085 If non-zero, the zio deadman will produce debugging messages
2086 .Pq see Sy zfs_dbgmsg_enable
2087 for all zios, rather than only for leaf zios possessing a vdev.
2088 This is meant to be used by developers to gain
2089 diagnostic information for hang conditions which don't involve a mutex
2090 or other locking primitive: typically conditions in which a thread in
2091 the zio pipeline is looping indefinitely.
2092 .
2093 .It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30s Pc Pq int
2094 When an I/O operation takes more than this much time to complete,
2095 it's marked as slow.
2096 Each slow operation causes a delay zevent.
2097 Slow I/O counters can be seen with
2098 .Nm zpool Cm status Fl s .
2099 .
2100 .It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
2101 Throttle block allocations in the I/O pipeline.
2102 This allows for dynamic allocation distribution when devices are imbalanced.
2103 When enabled, the maximum number of pending allocations per top-level vdev
2104 is limited by
2105 .Sy zfs_vdev_queue_depth_pct .
2106 .
2107 .It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int
2108 Prioritize requeued I/O.
2109 .
2110 .It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint
2111 Percentage of online CPUs which will run a worker thread for I/O.
2112 These workers are responsible for I/O work such as compression and
2113 checksum calculations.
2114 Fractional number of CPUs will be rounded down.
2115 .Pp
2116 The default value of
2117 .Sy 80%
2118 was chosen to avoid using all CPUs which can result in
2119 latency issues and inconsistent application performance,
2120 especially when slower compression and/or checksumming is enabled.
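.Pp
For example, on a system with
.Em 8
online CPUs the default of
.Sy 80%
yields
.Em 6.4 ,
rounded down to
.Em 6
CPUs running I/O worker threads.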
2121 .
2122 .It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint
2123 Number of worker threads per taskq.
2124 Lower values improve I/O ordering and CPU utilization,
2125 while higher reduces lock contention.
2126 .Pp
2127 If
2128 .Sy 0 ,
2129 generate a system-dependent value close to 6 threads per taskq.
2130 .
2131 .It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2132 Do not create zvol device nodes.
2133 This may slightly improve startup time on
2134 systems with a very large number of zvols.
2135 .
2136 .It Sy zvol_major Ns = Ns Sy 230 Pq uint
2137 Major number for zvol block devices.
2138 .
2139 .It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq ulong
Discard (TRIM) operations done on zvols will be processed in batches of this
2141 many blocks, where block size is determined by the
2142 .Sy volblocksize
2143 property of a zvol.
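.Pp
For example, with a
.Sy volblocksize
of
.Em 8kB ,
the default of
.Sy 16384
blocks yields discard batches of
.Em 128MB .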
2144 .
2145 .It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128kB Pc Pq uint
2146 When adding a zvol to the system, prefetch this many bytes
2147 from the start and end of the volume.
2148 Prefetching these regions of the volume is desirable,
2149 because they are likely to be accessed immediately by
2150 .Xr blkid 8
2151 or the kernel partitioner.
2152 .
2153 .It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint
2154 When processing I/O requests for a zvol, submit them synchronously.
2155 This effectively limits the queue depth to
2156 .Em 1
2157 for each I/O submitter.
2158 When unset, requests are handled asynchronously by a thread pool.
2159 The number of requests which can be handled concurrently is controlled by
2160 .Sy zvol_threads .
2161 .
2162 .It Sy zvol_threads Ns = Ns Sy 32 Pq uint
2163 Max number of threads which can handle zvol I/O requests concurrently.
2164 .
2165 .It Sy zvol_volmode Ns = Ns Sy 1 Pq uint
Defines zvol block device behaviour when
2167 .Sy volmode Ns = Ns Sy default :
2168 .Bl -tag -compact -offset 4n -width "a"
2169 .It Sy 1
2170 .No equivalent to Sy full
2171 .It Sy 2
2172 .No equivalent to Sy dev
2173 .It Sy 3
2174 .No equivalent to Sy none
2175 .El
2176 .El
2177 .
2178 .Sh ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete pending I/O requests.
2180 The scheduler determines when and in what order those operations are issued.
2181 The scheduler divides operations into five I/O classes,
2182 prioritized in the following order: sync read, sync write, async read,
2183 async write, and scrub/resilver.
2184 Each queue defines the minimum and maximum number of concurrent operations
2185 that may be issued to the device.
2186 In addition, the device has an aggregate maximum,
2187 .Sy zfs_vdev_max_active .
2188 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2189 If the sum of the per-queue maxima exceeds the aggregate maximum,
2190 then the number of active operations may reach
2191 .Sy zfs_vdev_max_active ,
2192 in which case no further operations will be issued,
2193 regardless of whether all per-queue minima have been met.
2194 .Pp
2195 For many physical devices, throughput increases with the number of
2196 concurrent operations, but latency typically suffers.
2197 Furthermore, physical devices typically have a limit
2198 at which more concurrent operations have no
2199 effect on throughput or can actually cause it to decrease.
2200 .Pp
2201 The scheduler selects the next operation to issue by first looking for an
2202 I/O class whose minimum has not been satisfied.
2203 Once all are satisfied and the aggregate maximum has not been hit,
2204 the scheduler looks for classes whose maximum has not been satisfied.
2205 Iteration through the I/O classes is done in the order specified above.
2206 No further operations are issued
2207 if the aggregate maximum number of concurrent operations has been hit,
2208 or if there are no operations queued for an I/O class that has not hit its maximum.
2209 Every time an I/O operation is queued or an operation completes,
2210 the scheduler looks for new operations to issue.
2211 .Pp
2212 In general, smaller
2213 .Sy max_active Ns s
2214 will lead to lower latency of synchronous operations.
2215 Larger
2216 .Sy max_active Ns s
2217 may lead to higher overall throughput, depending on underlying storage.
2218 .Pp
2219 The ratio of the queues'
2220 .Sy max_active Ns s
2221 determines the balance of performance between reads, writes, and scrubs.
2222 For example, increasing
2223 .Sy zfs_vdev_scrub_max_active
2224 will cause the scrub or resilver to complete more quickly,
2225 but reads and writes to have higher latency and lower throughput.
2226 .Pp
2227 All I/O classes have a fixed maximum number of outstanding operations,
2228 except for the async write class.
2229 Asynchronous writes represent the data that is committed to stable storage
2230 during the syncing stage for transaction groups.
2231 Transaction groups enter the syncing state periodically,
2232 so the number of queued async writes will quickly burst up
2233 and then bleed down to zero.
2234 Rather than servicing them as quickly as possible,
2235 the I/O scheduler changes the maximum number of active async write operations
2236 according to the amount of dirty data in the pool.
2237 Since both throughput and latency typically increase with the number of
2238 concurrent operations issued to physical devices, reducing the
2239 burstiness in the number of concurrent operations also stabilizes the
2240 response time of operations from other – and in particular synchronous – queues.
2241 In broad strokes, the I/O scheduler will issue more concurrent operations
2242 from the async write queue as there's more dirty data in the pool.
2243 .
2244 .Ss Async Writes
2245 The number of concurrent operations issued for the async write I/O class
2246 follows a piece-wise linear function defined by a few adjustable points:
2247 .Bd -literal
2248 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2249 ^ | /^ |
2250 | | / | |
2251 active | / | |
2252 I/O | / | |
2253 count | / | |
2254 | / | |
2255 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2256 0|_______^______|_________|
2257 0% | | 100% of \fBzfs_dirty_data_max\fP
2258 | |
2259 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2260 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
2261 .Ed
2262 .Pp
2263 Until the amount of dirty data exceeds a minimum percentage of the dirty
2264 data allowed in the pool, the I/O scheduler will limit the number of
2265 concurrent operations to the minimum.
2266 As that threshold is crossed, the number of concurrent operations issued
2267 increases linearly to the maximum at the specified maximum percentage
2268 of the dirty data allowed in the pool.
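.Pp
Within the sloped region the scheduler behaves like a simple linear
interpolation between the two endpoints
(a sketch; the names abbreviate the four tunables above):
.Bd -literal -compact
active = min_active + (max_active - min_active) *
    (dirty_pct - min_dirty_pct) / (max_dirty_pct - min_dirty_pct)
.Ed
.Pp
where
.Em dirty_pct
is the amount of dirty data as a percentage of
.Sy zfs_dirty_data_max .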
2269 .Pp
2270 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2271 part of the function between
2272 .Sy zfs_vdev_async_write_active_min_dirty_percent
2273 and
2274 .Sy zfs_vdev_async_write_active_max_dirty_percent .
2275 If it exceeds the maximum percentage,
2276 this indicates that the rate of incoming data is
2277 greater than the rate that the backend storage can handle.
2278 In this case, we must further throttle incoming writes,
2279 as described in the next section.
2280 .
2281 .Sh ZFS TRANSACTION DELAY
2282 We delay transactions when we've determined that the backend storage
2283 isn't able to accommodate the rate of incoming writes.
2284 .Pp
2285 If there is already a transaction waiting, we delay relative to when
2286 that transaction will finish waiting.
2287 This way the calculated delay time
2288 is independent of the number of threads concurrently executing transactions.
2289 .Pp
2290 If we are the only waiter, wait relative to when the transaction started,
2291 rather than the current time.
2292 This credits the transaction for "time already served",
2293 e.g. reading indirect blocks.
2294 .Pp
2295 The minimum time for a transaction to take is calculated as
2296 .Dl min_time = min( Ns Sy zfs_delay_scale No * (dirty - min) / (max - dirty), 100ms)
2297 .Pp
2298 The delay has two degrees of freedom that can be adjusted via tunables.
2299 The percentage of dirty data at which we start to delay is defined by
2300 .Sy zfs_delay_min_dirty_percent .
2301 This should typically be at or above
2302 .Sy zfs_vdev_async_write_active_max_dirty_percent ,
2303 so that we only start to delay after writing at full speed
2304 has failed to keep up with the incoming write rate.
2305 The scale of the curve is defined by
2306 .Sy zfs_delay_scale .
2307 Roughly speaking, this variable determines the amount of delay at the midpoint of the curve.
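.Pp
At the midpoint, (dirty - min) equals (max - dirty),
so the expression above reduces to
.Dl min_time = min( Ns Sy zfs_delay_scale , 100ms)
and the delay at the midpoint is simply
.Sy zfs_delay_scale .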
2308 .Bd -literal
2309 delay
2310 10ms +-------------------------------------------------------------*+
2311 | *|
2312 9ms + *+
2313 | *|
2314 8ms + *+
2315 | * |
2316 7ms + * +
2317 | * |
2318 6ms + * +
2319 | * |
2320 5ms + * +
2321 | * |
2322 4ms + * +
2323 | * |
2324 3ms + * +
2325 | * |
2326 2ms + (midpoint) * +
2327 | | ** |
2328 1ms + v *** +
2329 | \fBzfs_delay_scale\fP ----------> ******** |
2330 0 +-------------------------------------*********----------------+
2331 0% <- \fBzfs_dirty_data_max\fP -> 100%
2332 .Ed
2333 .Pp
Note that, since the delay is added to the outstanding time remaining on the
most recent transaction, it is effectively the inverse of IOPS.
2336 Here, the midpoint of
2337 .Em 500us
2338 translates to
2339 .Em 2000 IOPS .
2340 The shape of the curve
2341 was chosen such that small changes in the amount of accumulated dirty data
2342 in the first three quarters of the curve yield relatively small differences
2343 in the amount of delay.
2344 .Pp
2345 The effects can be easier to understand when the amount of delay is
2346 represented on a logarithmic scale:
2347 .Bd -literal
2348 delay
2349 100ms +-------------------------------------------------------------++
2350 + +
2351 | |
2352 + *+
2353 10ms + *+
2354 + ** +
2355 | (midpoint) ** |
2356 + | ** +
2357 1ms + v **** +
2358 + \fBzfs_delay_scale\fP ----------> ***** +
2359 | **** |
2360 + **** +
2361 100us + ** +
2362 + * +
2363 | * |
2364 + * +
2365 10us + * +
2366 + +
2367 | |
2368 + +
2369 +--------------------------------------------------------------+
2370 0% <- \fBzfs_dirty_data_max\fP -> 100%
2371 .Ed
2372 .Pp
2373 Note here that only as the amount of dirty data approaches its limit does
2374 the delay start to increase rapidly.
2375 The goal of a properly tuned system should be to keep the amount of dirty data
2376 out of that range by first ensuring that the appropriate limits are set
2377 for the I/O scheduler to reach optimal throughput on the back-end storage,
2378 and then by changing the value of
2379 .Sy zfs_delay_scale
2380 to increase the steepness of the curve.