1 '\" te
2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
7 .\"
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
15 .TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
16 .SH NAME
17 zfs\-module\-parameters \- ZFS module parameters
18 .SH DESCRIPTION
19 .sp
20 .LP
21 Description of the different parameters to the ZFS module.
22
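.sp
.LP
On Linux these tunables are exposed as parameters of the \fBzfs\fR kernel
module. As an illustrative sketch only (the sysfs path shown is the
conventional location, and not every parameter may be changed at runtime),
current values can be inspected, and many of them adjusted, through sysfs:
.sp
.nf
# List all ZFS module parameters with their current values.
grep . /sys/module/zfs/parameters/*

# Read a single parameter.
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Change a parameter at runtime (not all parameters are writable).
echo 10 > /sys/module/zfs/parameters/zfs_txg_timeout
.fi
.sp
Persistent settings are normally applied at module load time through a
\fBmodprobe\fR(8) configuration file; an example is sketched under
\fBzfs_arc_max\fR below.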
23 .SS "Module parameters"
24 .sp
25 .LP
26
27 .sp
28 .ne 2
29 .na
30 \fBl2arc_feed_again\fR (int)
31 .ad
32 .RS 12n
33 Turbo L2ARC warmup
34 .sp
35 Use \fB1\fR for yes (default) and \fB0\fR to disable.
36 .RE
37
38 .sp
39 .ne 2
40 .na
41 \fBl2arc_feed_min_ms\fR (ulong)
42 .ad
43 .RS 12n
44 Min feed interval in milliseconds
45 .sp
46 Default value: \fB200\fR.
47 .RE
48
49 .sp
50 .ne 2
51 .na
52 \fBl2arc_feed_secs\fR (ulong)
53 .ad
54 .RS 12n
55 Seconds between L2ARC writing
56 .sp
57 Default value: \fB1\fR.
58 .RE
59
60 .sp
61 .ne 2
62 .na
63 \fBl2arc_headroom\fR (ulong)
64 .ad
65 .RS 12n
66 Number of max device writes to precache
67 .sp
68 Default value: \fB2\fR.
69 .RE
70
71 .sp
72 .ne 2
73 .na
74 \fBl2arc_headroom_boost\fR (ulong)
75 .ad
76 .RS 12n
77 Compressed l2arc_headroom multiplier
78 .sp
79 Default value: \fB200\fR.
80 .RE
81
82 .sp
83 .ne 2
84 .na
85 \fBl2arc_nocompress\fR (int)
86 .ad
87 .RS 12n
88 Skip compressing L2ARC buffers
89 .sp
90 Use \fB1\fR for yes and \fB0\fR for no (default).
91 .RE
92
93 .sp
94 .ne 2
95 .na
96 \fBl2arc_noprefetch\fR (int)
97 .ad
98 .RS 12n
99 Skip caching prefetched buffers
100 .sp
101 Use \fB1\fR for yes (default) and \fB0\fR to disable.
102 .RE
103
104 .sp
105 .ne 2
106 .na
107 \fBl2arc_norw\fR (int)
108 .ad
109 .RS 12n
110 No reads during writes
111 .sp
112 Use \fB1\fR for yes and \fB0\fR for no (default).
113 .RE
114
115 .sp
116 .ne 2
117 .na
118 \fBl2arc_write_boost\fR (ulong)
119 .ad
120 .RS 12n
121 Extra write bytes during device warmup
122 .sp
123 Default value: \fB8,388,608\fR.
124 .RE
125
126 .sp
127 .ne 2
128 .na
129 \fBl2arc_write_max\fR (ulong)
130 .ad
131 .RS 12n
132 Max write bytes per interval
133 .sp
134 Default value: \fB8,388,608\fR.
135 .RE
136
137 .sp
138 .ne 2
139 .na
140 \fBmetaslab_aliquot\fR (ulong)
141 .ad
142 .RS 12n
143 Metaslab granularity, in bytes. This is roughly similar to what would be
144 referred to as the "stripe size" in traditional RAID arrays. In normal
145 operation, ZFS will try to write this amount of data to a top-level vdev
146 before moving on to the next one.
147 .sp
148 Default value: \fB524,288\fR.
149 .RE
150
151 .sp
152 .ne 2
153 .na
154 \fBmetaslab_bias_enabled\fR (int)
155 .ad
156 .RS 12n
157 Enable metaslab group biasing based on its vdev's over- or under-utilization
158 relative to the pool.
159 .sp
160 Use \fB1\fR for yes (default) and \fB0\fR for no.
161 .RE
162
163 .sp
164 .ne 2
165 .na
166 \fBmetaslab_debug_load\fR (int)
167 .ad
168 .RS 12n
169 Load all metaslabs during pool import.
170 .sp
171 Use \fB1\fR for yes and \fB0\fR for no (default).
172 .RE
173
174 .sp
175 .ne 2
176 .na
177 \fBmetaslab_debug_unload\fR (int)
178 .ad
179 .RS 12n
180 Prevent metaslabs from being unloaded.
181 .sp
182 Use \fB1\fR for yes and \fB0\fR for no (default).
183 .RE
184
185 .sp
186 .ne 2
187 .na
188 \fBmetaslab_fragmentation_factor_enabled\fR (int)
189 .ad
190 .RS 12n
191 Enable use of the fragmentation metric in computing metaslab weights.
192 .sp
193 Use \fB1\fR for yes (default) and \fB0\fR for no.
194 .RE
195
196 .sp
197 .ne 2
198 .na
199 \fBmetaslabs_per_vdev\fR (int)
200 .ad
201 .RS 12n
202 When a vdev is added, it will be divided into approximately (but no more than) this number of metaslabs.
203 .sp
204 Default value: \fB200\fR.
205 .RE
206
207 .sp
208 .ne 2
209 .na
210 \fBmetaslab_preload_enabled\fR (int)
211 .ad
212 .RS 12n
213 Enable metaslab group preloading.
214 .sp
215 Use \fB1\fR for yes (default) and \fB0\fR for no.
216 .RE
217
218 .sp
219 .ne 2
220 .na
221 \fBmetaslab_lba_weighting_enabled\fR (int)
222 .ad
223 .RS 12n
224 Give more weight to metaslabs with lower LBAs, assuming they have
225 greater bandwidth as is typically the case on a modern constant
226 angular velocity disk drive.
227 .sp
228 Use \fB1\fR for yes (default) and \fB0\fR for no.
229 .RE
230
231 .sp
232 .ne 2
233 .na
234 \fBspa_config_path\fR (charp)
235 .ad
236 .RS 12n
237 SPA config file
238 .sp
239 Default value: \fB/etc/zfs/zpool.cache\fR.
240 .RE
241
242 .sp
243 .ne 2
244 .na
245 \fBspa_asize_inflation\fR (int)
246 .ad
247 .RS 12n
248 Multiplication factor used to estimate actual disk consumption from the
249 size of data being written. The default value is a worst case estimate,
250 but lower values may be valid for a given pool depending on its
251 configuration. Pool administrators who understand the factors involved
252 may wish to specify a more realistic inflation factor, particularly if
253 they operate close to quota or capacity limits.
254 .sp
255 Default value: \fB24\fR.
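.sp
As a rough sketch only (the exact accounting depends on pool layout and the
features in use), the worst-case estimate is simply the write size
multiplied by this factor:
.sp
.nf
    worst-case estimate = write size * spa_asize_inflation
                        = 128 KB * 24 = 3 MB
.fi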
256 .RE
257
258 .sp
259 .ne 2
260 .na
261 \fBspa_load_verify_data\fR (int)
262 .ad
263 .RS 12n
264 Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
265 import. Use 0 to disable and 1 to enable.
266
267 An extreme rewind import normally performs a full traversal of all
268 blocks in the pool for verification. If this parameter is set to 0,
269 the traversal skips non-metadata blocks. It can be toggled once the
270 import has started to stop or start the traversal of non-metadata blocks.
271 .sp
272 Default value: \fB1\fR.
273 .RE
274
275 .sp
276 .ne 2
277 .na
278 \fBspa_load_verify_metadata\fR (int)
279 .ad
280 .RS 12n
281 Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
282 pool import. Use 0 to disable and 1 to enable.
283
284 An extreme rewind import normally performs a full traversal of all
285 blocks in the pool for verification. If this parameter is set to 0,
286 the traversal is not performed. It can be toggled once the import has
287 started to stop or start the traversal.
288 .sp
289 Default value: \fB1\fR.
290 .RE
291
292 .sp
293 .ne 2
294 .na
295 \fBspa_load_verify_maxinflight\fR (int)
296 .ad
297 .RS 12n
298 Maximum concurrent I/Os during the traversal performed during an "extreme
299 rewind" (\fB-X\fR) pool import.
300 .sp
301 Default value: \fB10,000\fR.
302 .RE
303
304 .sp
305 .ne 2
306 .na
307 \fBzfetch_array_rd_sz\fR (ulong)
308 .ad
309 .RS 12n
310 If prefetching is enabled, disable prefetching for reads larger than this size.
311 .sp
312 Default value: \fB1,048,576\fR.
313 .RE
314
315 .sp
316 .ne 2
317 .na
318 \fBzfetch_block_cap\fR (uint)
319 .ad
320 .RS 12n
321 Max number of blocks to prefetch at a time
322 .sp
323 Default value: \fB256\fR.
324 .RE
325
326 .sp
327 .ne 2
328 .na
329 \fBzfetch_max_streams\fR (uint)
330 .ad
331 .RS 12n
332 Max number of streams per zfetch (prefetch streams per file).
333 .sp
334 Default value: \fB8\fR.
335 .RE
336
337 .sp
338 .ne 2
339 .na
340 \fBzfetch_min_sec_reap\fR (uint)
341 .ad
342 .RS 12n
343 Min time before an active prefetch stream can be reclaimed
344 .sp
345 Default value: \fB2\fR.
346 .RE
347
348 .sp
349 .ne 2
350 .na
351 \fBzfs_arc_average_blocksize\fR (int)
352 .ad
353 .RS 12n
354 The ARC's buffer hash table is sized based on the assumption of an average
355 block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
356 to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
357 For configurations with a known larger average block size this value can be
358 increased to reduce the memory footprint.
359
360 .sp
361 Default value: \fB8192\fR.
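.sp
A rough sketch of the sizing described above (the exact constants are
implementation details):
.sp
.nf
    buckets    = physical memory / zfs_arc_average_blocksize
               = 1 GB / 8 KB = 131,072
    table size = 131,072 buckets * 8-byte pointers = 1 MB (approximately)
.fi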
362 .RE
363
364 .sp
365 .ne 2
366 .na
367 \fBzfs_arc_evict_batch_limit\fR (int)
368 .ad
369 .RS 12n
370 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
371 This batch-style operation prevents entire sub-lists from being evicted at once
372 but comes at a cost of additional unlocking and locking.
373 .sp
374 Default value: \fB10\fR.
375 .RE
376
377 .sp
378 .ne 2
379 .na
380 \fBzfs_arc_grow_retry\fR (int)
381 .ad
382 .RS 12n
383 Seconds before growing arc size
384 .sp
385 Default value: \fB5\fR.
386 .RE
387
388 .sp
389 .ne 2
390 .na
391 \fBzfs_arc_lotsfree_percent\fR (int)
392 .ad
393 .RS 12n
394 Throttle I/O when free system memory drops below this percentage of total
395 system memory. Setting this value to 0 will disable the throttle.
396 .sp
397 Default value: \fB10\fR.
398 .RE
399
400 .sp
401 .ne 2
402 .na
403 \fBzfs_arc_max\fR (ulong)
404 .ad
405 .RS 12n
406 Max arc size
407 .sp
408 Default value: \fB0\fR.
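.sp
A value of \fB0\fR (the default) leaves the limit to be determined
automatically. As an illustration only (the file name follows the usual
\fBmodprobe\fR(8) convention and may differ between distributions, and not
every kernel/module combination accepts runtime changes), the ARC could be
capped at 4 GB like this:
.sp
.nf
# Persistently, applied at module load time:
#   /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296

# Or at runtime, where supported:
echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
.fi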
409 .RE
410
411 .sp
412 .ne 2
413 .na
414 \fBzfs_arc_meta_limit\fR (ulong)
415 .ad
416 .RS 12n
417 The maximum allowed size in bytes that meta data buffers may
418 consume in the ARC. When this limit is reached meta data buffers will
419 be reclaimed even if the overall arc_c_max has not been reached. This
420 value defaults to 0 which indicates that 3/4 of the ARC may be used
421 for meta data.
422 .sp
423 Default value: \fB0\fR.
424 .RE
425
426 .sp
427 .ne 2
428 .na
429 \fBzfs_arc_meta_min\fR (ulong)
430 .ad
431 .RS 12n
432 The minimum allowed size in bytes that meta data buffers may consume in
433 the ARC. This value defaults to 0 which disables a floor on the amount
434 of the ARC devoted to meta data.
435 .sp
436 Default value: \fB0\fR.
437 .RE
438
439 .sp
440 .ne 2
441 .na
442 \fBzfs_arc_meta_prune\fR (int)
443 .ad
444 .RS 12n
445 The number of dentries and inodes to be scanned looking for entries
446 which can be dropped. This may be required when the ARC reaches the
447 \fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
448 in the ARC. Increasing this value will cause the dentry and inode caches
449 to be pruned more aggressively. Setting this value to 0 will disable
450 pruning the inode and dentry caches.
451 .sp
452 Default value: \fB10,000\fR.
453 .RE
454
455 .sp
456 .ne 2
457 .na
458 \fBzfs_arc_meta_adjust_restarts\fR (ulong)
459 .ad
460 .RS 12n
461 The number of restart passes to make while scanning the ARC attempting
462 to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
463 This value should not need to be tuned but is available to facilitate
464 performance analysis.
465 .sp
466 Default value: \fB4096\fR.
467 .RE
468
469 .sp
470 .ne 2
471 .na
472 \fBzfs_arc_min\fR (ulong)
473 .ad
474 .RS 12n
475 Min arc size
476 .sp
477 Default value: \fB100\fR.
478 .RE
479
480 .sp
481 .ne 2
482 .na
483 \fBzfs_arc_min_prefetch_lifespan\fR (int)
484 .ad
485 .RS 12n
486 Min life of prefetch block
487 .sp
488 Default value: \fB100\fR.
489 .RE
490
491 .sp
492 .ne 2
493 .na
494 \fBzfs_arc_num_sublists_per_state\fR (int)
495 .ad
496 .RS 12n
497 To allow more fine-grained locking, each ARC state contains a series
498 of lists for both data and meta data objects. Locking is performed at
499 the level of these "sub-lists". This parameter controls the number of
500 sub-lists per ARC state.
501 .sp
502 Default value: \fB1\fR or the number of online CPUs, whichever is greater.
503 .RE
504
505 .sp
506 .ne 2
507 .na
508 \fBzfs_arc_overflow_shift\fR (int)
509 .ad
510 .RS 12n
511 The ARC size is considered to be overflowing if it exceeds the current
512 ARC target size (arc_c) by a threshold determined by this parameter.
513 The threshold is calculated as a fraction of arc_c using the formula
514 "arc_c >> \fBzfs_arc_overflow_shift\fR".
515
516 The default value of 8 causes the ARC to be considered to be overflowing
517 if it exceeds the target size by 1/256th (approximately 0.4%) of the target size.
518
519 When the ARC is overflowing, new buffer allocations are stalled until
520 the reclaim thread catches up and the overflow condition no longer exists.
521 .sp
522 Default value: \fB8\fR.
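.sp
For example, with a target ARC size (arc_c) of 4 GB and the default shift
of 8, the overflow threshold works out to:
.sp
.nf
    arc_c >> zfs_arc_overflow_shift = 4 GB / 256 = 16 MB
.fi
.sp
so new allocations begin to stall once the ARC is more than about 16 MB
over its target size.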
523 .RE
524
525 .sp
526 .ne 2
527 .na
529 \fBzfs_arc_p_min_shift\fR (int)
530 .ad
531 .RS 12n
532 arc_c shift to calc min/max arc_p
533 .sp
534 Default value: \fB4\fR.
535 .RE
536
537 .sp
538 .ne 2
539 .na
540 \fBzfs_arc_p_aggressive_disable\fR (int)
541 .ad
542 .RS 12n
543 Disable aggressive arc_p growth
544 .sp
545 Use \fB1\fR for yes (default) and \fB0\fR for no.
546 .RE
547
548 .sp
549 .ne 2
550 .na
551 \fBzfs_arc_p_dampener_disable\fR (int)
552 .ad
553 .RS 12n
554 Disable arc_p adapt dampener
555 .sp
556 Use \fB1\fR for yes (default) and \fB0\fR for no.
557 .RE
558
559 .sp
560 .ne 2
561 .na
562 \fBzfs_arc_shrink_shift\fR (int)
563 .ad
564 .RS 12n
565 log2(fraction of arc to reclaim)
566 .sp
567 Default value: \fB5\fR.
568 .RE
569
570 .sp
571 .ne 2
572 .na
573 \fBzfs_arc_sys_free\fR (ulong)
574 .ad
575 .RS 12n
576 The target number of bytes the ARC should leave as free memory on the system.
577 Defaults to the larger of 1/64 of physical memory or 512K. Setting this
578 option to a non-zero value will override the default.
579 .sp
580 Default value: \fB0\fR.
581 .RE
582
583 .sp
584 .ne 2
585 .na
586 \fBzfs_autoimport_disable\fR (int)
587 .ad
588 .RS 12n
589 Disable pool import at module load by ignoring the cache file (typically \fB/etc/zfs/zpool.cache\fR).
590 .sp
591 Use \fB1\fR for yes (default) and \fB0\fR for no.
592 .RE
593
594 .sp
595 .ne 2
596 .na
597 \fBzfs_dbuf_state_index\fR (int)
598 .ad
599 .RS 12n
600 Calculate arc header index
601 .sp
602 Default value: \fB0\fR.
603 .RE
604
605 .sp
606 .ne 2
607 .na
608 \fBzfs_deadman_enabled\fR (int)
609 .ad
610 .RS 12n
611 Enable deadman timer
612 .sp
613 Use \fB1\fR for yes (default) and \fB0\fR to disable.
614 .RE
615
616 .sp
617 .ne 2
618 .na
619 \fBzfs_deadman_synctime_ms\fR (ulong)
620 .ad
621 .RS 12n
622 Expiration time in milliseconds. This value has two meanings. First it is
623 used to determine when the spa_deadman() logic should fire. By default the
624 spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
625 Secondly, the value determines if an I/O is considered "hung". Any I/O that
626 has not completed in zfs_deadman_synctime_ms is considered "hung" resulting
627 in a zevent being logged.
628 .sp
629 Default value: \fB1,000,000\fR.
630 .RE
631
632 .sp
633 .ne 2
634 .na
635 \fBzfs_dedup_prefetch\fR (int)
636 .ad
637 .RS 12n
638 Enable prefetching of deduplicated blocks
639 .sp
640 Use \fB1\fR for yes and \fB0\fR to disable (default).
641 .RE
642
643 .sp
644 .ne 2
645 .na
646 \fBzfs_delay_min_dirty_percent\fR (int)
647 .ad
648 .RS 12n
649 Start to delay each transaction once there is this amount of dirty data,
650 expressed as a percentage of \fBzfs_dirty_data_max\fR.
651 This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
652 See the section "ZFS TRANSACTION DELAY".
653 .sp
654 Default value: \fB60\fR.
655 .RE
656
657 .sp
658 .ne 2
659 .na
660 \fBzfs_delay_scale\fR (int)
661 .ad
662 .RS 12n
663 This controls how quickly the transaction delay approaches infinity.
664 Larger values cause longer delays for a given amount of dirty data.
665 .sp
666 For the smoothest delay, this value should be about 1 billion divided
667 by the maximum number of operations per second. This will smoothly
668 handle between 10x and 1/10th this number.
669 .sp
670 See the section "ZFS TRANSACTION DELAY".
671 .sp
672 Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
673 .sp
674 Default value: \fB500,000\fR.
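.sp
For example, following the rule of thumb above, a pool whose backend
storage can sustain roughly 2,000 synchronous operations per second would
use:
.sp
.nf
    zfs_delay_scale = 1,000,000,000 / 2,000 = 500,000
.fi
.sp
which is the default value.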
675 .RE
676
677 .sp
678 .ne 2
679 .na
680 \fBzfs_dirty_data_max\fR (int)
681 .ad
682 .RS 12n
683 Determines the dirty space limit in bytes. Once this limit is exceeded, new
684 writes are halted until space frees up. This parameter takes precedence
685 over \fBzfs_dirty_data_max_percent\fR.
686 See the section "ZFS TRANSACTION DELAY".
687 .sp
688 Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
689 .RE
690
691 .sp
692 .ne 2
693 .na
694 \fBzfs_dirty_data_max_max\fR (int)
695 .ad
696 .RS 12n
697 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
698 This limit is only enforced at module load time, and will be ignored if
699 \fBzfs_dirty_data_max\fR is later changed. This parameter takes
700 precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
701 "ZFS TRANSACTION DELAY".
702 .sp
703 Default value: 25% of physical RAM.
704 .RE
705
706 .sp
707 .ne 2
708 .na
709 \fBzfs_dirty_data_max_max_percent\fR (int)
710 .ad
711 .RS 12n
712 Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
713 percentage of physical RAM. This limit is only enforced at module load
714 time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
715 The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
716 one. See the section "ZFS TRANSACTION DELAY".
717 .sp
718 Default value: \fB25\fR.
719 .RE
720
721 .sp
722 .ne 2
723 .na
724 \fBzfs_dirty_data_max_percent\fR (int)
725 .ad
726 .RS 12n
727 Determines the dirty space limit, expressed as a percentage of all
728 memory. Once this limit is exceeded, new writes are halted until space frees
729 up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
730 one. See the section "ZFS TRANSACTION DELAY".
731 .sp
732 Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
733 .RE
734
735 .sp
736 .ne 2
737 .na
738 \fBzfs_dirty_data_sync\fR (int)
739 .ad
740 .RS 12n
741 Start syncing out a transaction group if there is at least this much dirty data.
742 .sp
743 Default value: \fB67,108,864\fR.
744 .RE
745
746 .sp
747 .ne 2
748 .na
749 \fBzfs_free_max_blocks\fR (ulong)
750 .ad
751 .RS 12n
752 Maximum number of blocks freed in a single txg.
753 .sp
754 Default value: \fB100,000\fR.
755 .RE
756
757 .sp
758 .ne 2
759 .na
760 \fBzfs_vdev_async_read_max_active\fR (int)
761 .ad
762 .RS 12n
763 Maximum asynchronous read I/Os active to each device.
764 See the section "ZFS I/O SCHEDULER".
765 .sp
766 Default value: \fB3\fR.
767 .RE
768
769 .sp
770 .ne 2
771 .na
772 \fBzfs_vdev_async_read_min_active\fR (int)
773 .ad
774 .RS 12n
775 Minimum asynchronous read I/Os active to each device.
776 See the section "ZFS I/O SCHEDULER".
777 .sp
778 Default value: \fB1\fR.
779 .RE
780
781 .sp
782 .ne 2
783 .na
784 \fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
785 .ad
786 .RS 12n
787 When the pool has more than
788 \fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
789 \fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
790 the dirty data is between min and max, the active I/O limit is linearly
791 interpolated. See the section "ZFS I/O SCHEDULER".
792 .sp
793 Default value: \fB60\fR.
794 .RE
795
796 .sp
797 .ne 2
798 .na
799 \fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
800 .ad
801 .RS 12n
802 When the pool has less than
803 \fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
804 \fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
805 the dirty data is between min and max, the active I/O limit is linearly
806 interpolated. See the section "ZFS I/O SCHEDULER".
807 .sp
808 Default value: \fB30\fR.
809 .RE
810
811 .sp
812 .ne 2
813 .na
814 \fBzfs_vdev_async_write_max_active\fR (int)
815 .ad
816 .RS 12n
817 Maximum asynchronous write I/Os active to each device.
818 See the section "ZFS I/O SCHEDULER".
819 .sp
820 Default value: \fB10\fR.
821 .RE
822
823 .sp
824 .ne 2
825 .na
826 \fBzfs_vdev_async_write_min_active\fR (int)
827 .ad
828 .RS 12n
829 Minimum asynchronous write I/Os active to each device.
830 See the section "ZFS I/O SCHEDULER".
831 .sp
832 Default value: \fB1\fR.
833 .RE
834
835 .sp
836 .ne 2
837 .na
838 \fBzfs_vdev_max_active\fR (int)
839 .ad
840 .RS 12n
841 The maximum number of I/Os active to each device. Ideally, this will be >=
842 the sum of each queue's max_active. It must be at least the sum of each
843 queue's min_active. See the section "ZFS I/O SCHEDULER".
844 .sp
845 Default value: \fB1,000\fR.
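.sp
With the per-queue defaults documented in this man page the constraint is
comfortably met:
.sp
.nf
    sum of min_active (sync r/w, async r/w, scrub) = 10+10+1+1+1 = 23
    sum of max_active (sync r/w, async r/w, scrub) = 10+10+3+10+2 = 35
    both are well below zfs_vdev_max_active = 1,000
.fi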
846 .RE
847
848 .sp
849 .ne 2
850 .na
851 \fBzfs_vdev_scrub_max_active\fR (int)
852 .ad
853 .RS 12n
854 Maximum scrub I/Os active to each device.
855 See the section "ZFS I/O SCHEDULER".
856 .sp
857 Default value: \fB2\fR.
858 .RE
859
860 .sp
861 .ne 2
862 .na
863 \fBzfs_vdev_scrub_min_active\fR (int)
864 .ad
865 .RS 12n
866 Minimum scrub I/Os active to each device.
867 See the section "ZFS I/O SCHEDULER".
868 .sp
869 Default value: \fB1\fR.
870 .RE
871
872 .sp
873 .ne 2
874 .na
875 \fBzfs_vdev_sync_read_max_active\fR (int)
876 .ad
877 .RS 12n
878 Maximum synchronous read I/Os active to each device.
879 See the section "ZFS I/O SCHEDULER".
880 .sp
881 Default value: \fB10\fR.
882 .RE
883
884 .sp
885 .ne 2
886 .na
887 \fBzfs_vdev_sync_read_min_active\fR (int)
888 .ad
889 .RS 12n
890 Minimum synchronous read I/Os active to each device.
891 See the section "ZFS I/O SCHEDULER".
892 .sp
893 Default value: \fB10\fR.
894 .RE
895
896 .sp
897 .ne 2
898 .na
899 \fBzfs_vdev_sync_write_max_active\fR (int)
900 .ad
901 .RS 12n
902 Maximum synchronous write I/Os active to each device.
903 See the section "ZFS I/O SCHEDULER".
904 .sp
905 Default value: \fB10\fR.
906 .RE
907
908 .sp
909 .ne 2
910 .na
911 \fBzfs_vdev_sync_write_min_active\fR (int)
912 .ad
913 .RS 12n
914 Minimum synchronous write I/Os active to each device.
915 See the section "ZFS I/O SCHEDULER".
916 .sp
917 Default value: \fB10\fR.
918 .RE
919
920 .sp
921 .ne 2
922 .na
923 \fBzfs_disable_dup_eviction\fR (int)
924 .ad
925 .RS 12n
926 Disable duplicate buffer eviction
927 .sp
928 Use \fB1\fR for yes and \fB0\fR for no (default).
929 .RE
930
931 .sp
932 .ne 2
933 .na
934 \fBzfs_expire_snapshot\fR (int)
935 .ad
936 .RS 12n
937 Seconds to expire .zfs/snapshot
938 .sp
939 Default value: \fB300\fR.
940 .RE
941
942 .sp
943 .ne 2
944 .na
945 \fBzfs_flags\fR (int)
946 .ad
947 .RS 12n
948 Set additional debugging flags. The following flags may be bitwise-or'd
949 together.
950 .sp
951 .TS
952 box;
953 rB lB
954 lB lB
955 r l.
956 Value Symbolic Name
957 Description
958 _
959 1 ZFS_DEBUG_DPRINTF
960 Enable dprintf entries in the debug log.
961 _
962 2 ZFS_DEBUG_DBUF_VERIFY *
963 Enable extra dbuf verifications.
964 _
965 4 ZFS_DEBUG_DNODE_VERIFY *
966 Enable extra dnode verifications.
967 _
968 8 ZFS_DEBUG_SNAPNAMES
969 Enable snapshot name verification.
970 _
971 16 ZFS_DEBUG_MODIFY
972 Check for illegally modified ARC buffers.
973 _
974 32 ZFS_DEBUG_SPA
975 Enable spa_dbgmsg entries in the debug log.
976 _
977 64 ZFS_DEBUG_ZIO_FREE
978 Enable verification of block frees.
979 _
980 128 ZFS_DEBUG_HISTOGRAM_VERIFY
981 Enable extra spacemap histogram verifications.
982 .TE
983 .sp
984 * Requires debug build.
985 .sp
986 Default value: \fB0\fR.
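.sp
For example, to enable both dprintf entries (1) and snapshot name
verification (8), set the parameter to 1 | 8 = 9. A sketch only; the sysfs
path shown is the conventional Linux location:
.sp
.nf
echo 9 > /sys/module/zfs/parameters/zfs_flags
.fi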
987 .RE
988
989 .sp
990 .ne 2
991 .na
992 \fBzfs_free_leak_on_eio\fR (int)
993 .ad
994 .RS 12n
995 If destroy encounters an EIO while reading metadata (e.g. indirect
996 blocks), space referenced by the missing metadata can not be freed.
997 Normally this causes the background destroy to become "stalled", as
998 it is unable to make forward progress. While in this stalled state,
999 all remaining space to free from the error-encountering filesystem is
1000 "temporarily leaked". Set this flag to cause it to ignore the EIO,
1001 permanently leak the space from indirect blocks that can not be read,
1002 and continue to free everything else that it can.
1003
1004 The default, "stalling" behavior is useful if the storage partially
1005 fails (i.e. some but not all i/os fail), and then later recovers. In
1006 this case, we will be able to continue pool operations while it is
1007 partially failed, and when it recovers, we can continue to free the
1008 space, with no leaks. However, note that this case is actually
1009 fairly rare.
1010
1011 Typically pools either (a) fail completely (but perhaps temporarily,
1012 e.g. a top-level vdev going offline), or (b) have localized,
1013 permanent errors (e.g. disk returns the wrong data due to bit flip or
1014 firmware bug). In case (a), this setting does not matter because the
1015 pool will be suspended and the sync thread will not be able to make
1016 forward progress regardless. In case (b), because the error is
1017 permanent, the best we can do is leak the minimum amount of space,
1018 which is what setting this flag will do. Therefore, it is reasonable
1019 for this flag to normally be set, but we chose the more conservative
1020 approach of not setting it, so that there is no possibility of
1021 leaking space in the "partial temporary" failure case.
1022 .sp
1023 Default value: \fB0\fR.
1024 .RE
1025
1026 .sp
1027 .ne 2
1028 .na
1029 \fBzfs_free_min_time_ms\fR (int)
1030 .ad
1031 .RS 12n
1032 Min millisecs to free per txg
1033 .sp
1034 Default value: \fB1,000\fR.
1035 .RE
1036
1037 .sp
1038 .ne 2
1039 .na
1040 \fBzfs_immediate_write_sz\fR (long)
1041 .ad
1042 .RS 12n
1043 Largest data block to write to zil
1044 .sp
1045 Default value: \fB32,768\fR.
1046 .RE
1047
1048 .sp
1049 .ne 2
1050 .na
1051 \fBzfs_max_recordsize\fR (int)
1052 .ad
1053 .RS 12n
1054 We currently support block sizes from 512 bytes to 16MB. The benefits of
1055 larger blocks, and thus larger IO, need to be weighed against the cost of
1056 COWing a giant block to modify one byte. Additionally, very large blocks
1057 can have an impact on i/o latency, and also potentially on the memory
1058 allocator. Therefore, we do not allow the recordsize to be set larger than
1059 zfs_max_recordsize (default 1MB). Larger blocks can be created by changing
1060 this tunable, and pools with larger blocks can always be imported and used,
1061 regardless of this setting.
1062 .sp
1063 Default value: \fB1,048,576\fR.
1064 .RE
1065
1066 .sp
1067 .ne 2
1068 .na
1069 \fBzfs_mdcomp_disable\fR (int)
1070 .ad
1071 .RS 12n
1072 Disable meta data compression
1073 .sp
1074 Use \fB1\fR for yes and \fB0\fR for no (default).
1075 .RE
1076
1077 .sp
1078 .ne 2
1079 .na
1080 \fBzfs_metaslab_fragmentation_threshold\fR (int)
1081 .ad
1082 .RS 12n
1083 Allow metaslabs to keep their active state as long as their fragmentation
1084 percentage is less than or equal to this value. An active metaslab that
1085 exceeds this threshold will no longer keep its active status allowing
1086 better metaslabs to be selected.
1087 .sp
1088 Default value: \fB70\fR.
1089 .RE
1090
1091 .sp
1092 .ne 2
1093 .na
1094 \fBzfs_mg_fragmentation_threshold\fR (int)
1095 .ad
1096 .RS 12n
1097 Metaslab groups are considered eligible for allocations if their
1098 fragmentation metric (measured as a percentage) is less than or equal to
1099 this value. If a metaslab group exceeds this threshold then it will be
1100 skipped unless all metaslab groups within the metaslab class have also
1101 crossed this threshold.
1102 .sp
1103 Default value: \fB85\fR.
1104 .RE
1105
1106 .sp
1107 .ne 2
1108 .na
1109 \fBzfs_mg_noalloc_threshold\fR (int)
1110 .ad
1111 .RS 12n
1112 Defines a threshold at which metaslab groups should be eligible for
1113 allocations. The value is expressed as a percentage of free space
1114 beyond which a metaslab group is always eligible for allocations.
1115 If a metaslab group's free space is less than or equal to the
1116 threshold, the allocator will avoid allocating to that group
1117 unless all groups in the pool have reached the threshold. Once all
1118 groups have reached the threshold, all groups are allowed to accept
1119 allocations. The default value of 0 disables the feature and causes
1120 all metaslab groups to be eligible for allocations.
1121
1122 This parameter makes it possible to deal with pools having heavily imbalanced
1123 vdevs such as would be the case when a new vdev has been added.
1124 Setting the threshold to a non-zero percentage will stop allocations
1125 from being made to vdevs that aren't filled to the specified percentage
1126 and allow lesser filled vdevs to acquire more allocations than they
1127 otherwise would under the old \fBzfs_mg_alloc_failures\fR facility.
1128 .sp
1129 Default value: \fB0\fR.
1130 .RE
1131
1132 .sp
1133 .ne 2
1134 .na
1135 \fBzfs_no_scrub_io\fR (int)
1136 .ad
1137 .RS 12n
1138 Set for no scrub I/O
1139 .sp
1140 Use \fB1\fR for yes and \fB0\fR for no (default).
1141 .RE
1142
1143 .sp
1144 .ne 2
1145 .na
1146 \fBzfs_no_scrub_prefetch\fR (int)
1147 .ad
1148 .RS 12n
1149 Set for no scrub prefetching
1150 .sp
1151 Use \fB1\fR for yes and \fB0\fR for no (default).
1152 .RE
1153
1154 .sp
1155 .ne 2
1156 .na
1157 \fBzfs_nocacheflush\fR (int)
1158 .ad
1159 .RS 12n
1160 Disable cache flushes
1161 .sp
1162 Use \fB1\fR for yes and \fB0\fR for no (default).
1163 .RE
1164
1165 .sp
1166 .ne 2
1167 .na
1168 \fBzfs_nopwrite_enabled\fR (int)
1169 .ad
1170 .RS 12n
1171 Enable NOP writes
1172 .sp
1173 Use \fB1\fR for yes (default) and \fB0\fR to disable.
1174 .RE
1175
1176 .sp
1177 .ne 2
1178 .na
1179 \fBzfs_pd_bytes_max\fR (int)
1180 .ad
1181 .RS 12n
1182 The number of bytes which should be prefetched.
1183 .sp
1184 Default value: \fB52,428,800\fR.
1185 .RE
1186
1187 .sp
1188 .ne 2
1189 .na
1190 \fBzfs_prefetch_disable\fR (int)
1191 .ad
1192 .RS 12n
1193 Disable all ZFS prefetching
1194 .sp
1195 Use \fB1\fR for yes and \fB0\fR for no (default).
1196 .RE
1197
1198 .sp
1199 .ne 2
1200 .na
1201 \fBzfs_read_chunk_size\fR (long)
1202 .ad
1203 .RS 12n
1204 Bytes to read per chunk
1205 .sp
1206 Default value: \fB1,048,576\fR.
1207 .RE
1208
1209 .sp
1210 .ne 2
1211 .na
1212 \fBzfs_read_history\fR (int)
1213 .ad
1214 .RS 12n
1215 Historic statistics for the last N reads
1216 .sp
1217 Default value: \fB0\fR.
1218 .RE
1219
1220 .sp
1221 .ne 2
1222 .na
1223 \fBzfs_read_history_hits\fR (int)
1224 .ad
1225 .RS 12n
1226 Include cache hits in read history
1227 .sp
1228 Use \fB1\fR for yes and \fB0\fR for no (default).
1229 .RE
1230
1231 .sp
1232 .ne 2
1233 .na
1234 \fBzfs_recover\fR (int)
1235 .ad
1236 .RS 12n
1237 Set to attempt to recover from fatal errors. This should only be used as a
1238 last resort, as it typically results in leaked space, or worse.
1239 .sp
1240 Use \fB1\fR for yes and \fB0\fR for no (default).
1241 .RE
1242
1243 .sp
1244 .ne 2
1245 .na
1246 \fBzfs_resilver_delay\fR (int)
1247 .ad
1248 .RS 12n
1249 Number of ticks to delay prior to issuing a resilver I/O operation when
1250 a non-resilver or non-scrub I/O operation has occurred within the past
1251 \fBzfs_scan_idle\fR ticks.
1252 .sp
1253 Default value: \fB2\fR.
1254 .RE
1255
1256 .sp
1257 .ne 2
1258 .na
1259 \fBzfs_resilver_min_time_ms\fR (int)
1260 .ad
1261 .RS 12n
1262 Min millisecs to resilver per txg
1263 .sp
1264 Default value: \fB3,000\fR.
1265 .RE
1266
1267 .sp
1268 .ne 2
1269 .na
1270 \fBzfs_scan_idle\fR (int)
1271 .ad
1272 .RS 12n
1273 Idle window in clock ticks. During a scrub or a resilver, if
1274 a non-scrub or non-resilver I/O operation has occurred during this
1275 window, the next scrub or resilver operation is delayed by, respectively
1276 \fBzfs_scrub_delay\fR or \fBzfs_resilver_delay\fR ticks.
1277 .sp
1278 Default value: \fB50\fR.
1279 .RE
1280
1281 .sp
1282 .ne 2
1283 .na
1284 \fBzfs_scan_min_time_ms\fR (int)
1285 .ad
1286 .RS 12n
1287 Min millisecs to scrub per txg
1288 .sp
1289 Default value: \fB1,000\fR.
1290 .RE
1291
1292 .sp
1293 .ne 2
1294 .na
1295 \fBzfs_scrub_delay\fR (int)
1296 .ad
1297 .RS 12n
1298 Number of ticks to delay prior to issuing a scrub I/O operation when
1299 a non-scrub or non-resilver I/O operation has occurred within the past
1300 \fBzfs_scan_idle\fR ticks.
1301 .sp
1302 Default value: \fB4\fR.
1303 .RE
1304
1305 .sp
1306 .ne 2
1307 .na
1308 \fBzfs_send_corrupt_data\fR (int)
1309 .ad
1310 .RS 12n
1311 Allow sending of corrupt data (ignore read/checksum errors when sending data)
1312 .sp
1313 Use \fB1\fR for yes and \fB0\fR for no (default).
1314 .RE
1315
1316 .sp
1317 .ne 2
1318 .na
1319 \fBzfs_sync_pass_deferred_free\fR (int)
1320 .ad
1321 .RS 12n
1322 Defer frees starting in this pass
1323 .sp
1324 Default value: \fB2\fR.
1325 .RE
1326
1327 .sp
1328 .ne 2
1329 .na
1330 \fBzfs_sync_pass_dont_compress\fR (int)
1331 .ad
1332 .RS 12n
1333 Don't compress starting in this pass
1334 .sp
1335 Default value: \fB5\fR.
1336 .RE
1337
1338 .sp
1339 .ne 2
1340 .na
1341 \fBzfs_sync_pass_rewrite\fR (int)
1342 .ad
1343 .RS 12n
1344 Rewrite new block pointers starting in this pass
1345 .sp
1346 Default value: \fB2\fR.
1347 .RE
1348
1349 .sp
1350 .ne 2
1351 .na
1352 \fBzfs_top_maxinflight\fR (int)
1353 .ad
1354 .RS 12n
1355 Max I/Os per top-level vdev during scrub or resilver operations.
1356 .sp
1357 Default value: \fB32\fR.
1358 .RE
1359
1360 .sp
1361 .ne 2
1362 .na
1363 \fBzfs_txg_history\fR (int)
1364 .ad
1365 .RS 12n
1366 Historic statistics for the last N txgs
1367 .sp
1368 Default value: \fB0\fR.
1369 .RE
1370
1371 .sp
1372 .ne 2
1373 .na
1374 \fBzfs_txg_timeout\fR (int)
1375 .ad
1376 .RS 12n
1377 Max seconds worth of delta per txg
1378 .sp
1379 Default value: \fB5\fR.
1380 .RE
1381
1382 .sp
1383 .ne 2
1384 .na
1385 \fBzfs_vdev_aggregation_limit\fR (int)
1386 .ad
1387 .RS 12n
1388 Max vdev I/O aggregation size
1389 .sp
1390 Default value: \fB131,072\fR.
1391 .RE
1392
1393 .sp
1394 .ne 2
1395 .na
1396 \fBzfs_vdev_cache_bshift\fR (int)
1397 .ad
1398 .RS 12n
1399 Shift size to inflate reads to
1400 .sp
1401 Default value: \fB16\fR.
1402 .RE
1403
1404 .sp
1405 .ne 2
1406 .na
1407 \fBzfs_vdev_cache_max\fR (int)
1408 .ad
1409 .RS 12n
1410 Inflate reads smaller than this value
1411 .RE
1412
1413 .sp
1414 .ne 2
1415 .na
1416 \fBzfs_vdev_cache_size\fR (int)
1417 .ad
1418 .RS 12n
1419 Total size of the per-disk cache
1420 .sp
1421 Default value: \fB0\fR.
1422 .RE
1423
1424 .sp
1425 .ne 2
1426 .na
1427 \fBzfs_vdev_mirror_switch_us\fR (int)
1428 .ad
1429 .RS 12n
1430 Switch mirrors every N usecs
1431 .sp
1432 Default value: \fB10,000\fR.
1433 .RE
1434
1435 .sp
1436 .ne 2
1437 .na
1438 \fBzfs_vdev_read_gap_limit\fR (int)
1439 .ad
1440 .RS 12n
1441 Aggregate read I/O over gap
1442 .sp
1443 Default value: \fB32,768\fR.
1444 .RE
1445
1446 .sp
1447 .ne 2
1448 .na
1449 \fBzfs_vdev_scheduler\fR (charp)
1450 .ad
1451 .RS 12n
1452 I/O scheduler
1453 .sp
1454 Default value: \fBnoop\fR.
1455 .RE
1456
1457 .sp
1458 .ne 2
1459 .na
1460 \fBzfs_vdev_write_gap_limit\fR (int)
1461 .ad
1462 .RS 12n
1463 Aggregate write I/O over gap
1464 .sp
1465 Default value: \fB4,096\fR.
1466 .RE
1467
1468 .sp
1469 .ne 2
1470 .na
1471 \fBzfs_zevent_cols\fR (int)
1472 .ad
1473 .RS 12n
1474 Max event column width
1475 .sp
1476 Default value: \fB80\fR.
1477 .RE
1478
1479 .sp
1480 .ne 2
1481 .na
1482 \fBzfs_zevent_console\fR (int)
1483 .ad
1484 .RS 12n
1485 Log events to the console
1486 .sp
1487 Use \fB1\fR for yes and \fB0\fR for no (default).
1488 .RE
1489
1490 .sp
1491 .ne 2
1492 .na
1493 \fBzfs_zevent_len_max\fR (int)
1494 .ad
1495 .RS 12n
1496 Max event queue length
1497 .sp
1498 Default value: \fB0\fR.
1499 .RE
1500
1501 .sp
1502 .ne 2
1503 .na
1504 \fBzil_replay_disable\fR (int)
1505 .ad
1506 .RS 12n
1507 Disable intent logging replay
1508 .sp
1509 Use \fB1\fR for yes and \fB0\fR for no (default).
1510 .RE
1511
1512 .sp
1513 .ne 2
1514 .na
1515 \fBzil_slog_limit\fR (ulong)
1516 .ad
1517 .RS 12n
1518 Max commit bytes to separate log device
1519 .sp
1520 Default value: \fB1,048,576\fR.
1521 .RE
1522
1523 .sp
1524 .ne 2
1525 .na
1526 \fBzio_delay_max\fR (int)
1527 .ad
1528 .RS 12n
1529 Max zio millisec delay before posting event
1530 .sp
1531 Default value: \fB30,000\fR.
1532 .RE
1533
1534 .sp
1535 .ne 2
1536 .na
1537 \fBzio_requeue_io_start_cut_in_line\fR (int)
1538 .ad
1539 .RS 12n
1540 Prioritize requeued I/O
1541 .sp
1542 Default value: \fB0\fR.
1543 .RE
1544
1545 .sp
1546 .ne 2
1547 .na
1548 \fBzvol_inhibit_dev\fR (uint)
1549 .ad
1550 .RS 12n
1551 Do not create zvol device nodes
1552 .sp
1553 Use \fB1\fR for yes and \fB0\fR for no (default).
1554 .RE
1555
1556 .sp
1557 .ne 2
1558 .na
1559 \fBzvol_major\fR (uint)
1560 .ad
1561 .RS 12n
1562 Major number for zvol device
1563 .sp
1564 Default value: \fB230\fR.
1565 .RE
1566
1567 .sp
1568 .ne 2
1569 .na
1570 \fBzvol_max_discard_blocks\fR (ulong)
1571 .ad
1572 .RS 12n
1573 Max number of blocks to discard at once
1574 .sp
1575 Default value: \fB16,384\fR.
1576 .RE
1577
1578 .sp
1579 .ne 2
1580 .na
1581 \fBzvol_threads\fR (uint)
1582 .ad
1583 .RS 12n
1584 Max number of threads to handle zvol I/O requests
1585 .sp
1586 Default value: \fB32\fR.
1587 .RE
1588
1589 .SH ZFS I/O SCHEDULER
1590 ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
1591 The I/O scheduler determines when and in what order those operations are
1592 issued. The I/O scheduler divides operations into five I/O classes
1593 prioritized in the following order: sync read, sync write, async read,
1594 async write, and scrub/resilver. Each queue defines the minimum and
1595 maximum number of concurrent operations that may be issued to the
1596 device. In addition, the device has an aggregate maximum,
1597 \fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
1598 must not exceed the aggregate maximum. If the sum of the per-queue
1599 maximums exceeds the aggregate maximum, then the number of active I/Os
1600 may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
1601 be issued regardless of whether all per-queue minimums have been met.
1602 .sp
1603 For many physical devices, throughput increases with the number of
1604 concurrent operations, but latency typically suffers. Further, physical
1605 devices typically have a limit at which more concurrent operations have no
1606 effect on throughput or can actually cause it to decrease.
1607 .sp
1608 The scheduler selects the next operation to issue by first looking for an
1609 I/O class whose minimum has not been satisfied. Once all are satisfied and
1610 the aggregate maximum has not been hit, the scheduler looks for classes
1611 whose maximum has not been satisfied. Iteration through the I/O classes is
1612 done in the order specified above. No further operations are issued if the
1613 aggregate maximum number of concurrent operations has been hit or if there
1614 are no operations queued for an I/O class that has not hit its maximum.
1615 Every time an I/O is queued or an operation completes, the I/O scheduler
1616 looks for new operations to issue.
1617 .sp
1618 In general, smaller values of max_active will lead to lower latency of
1619 synchronous operations. Larger values of max_active may lead to higher overall throughput,
1620 depending on underlying storage.
1621 .sp
1622 The ratio of the queues' max_actives determines the balance of performance
1623 between reads, writes, and scrubs. E.g., increasing
1624 \fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
1625 more quickly, but reads and writes to have higher latency and lower throughput.
1626 .sp
1627 All I/O classes have a fixed maximum number of outstanding operations
1628 except for the async write class. Asynchronous writes represent the data
1629 that is committed to stable storage during the syncing stage for
1630 transaction groups. Transaction groups enter the syncing state
1631 periodically so the number of queued async writes will quickly burst up
1632 and then bleed down to zero. Rather than servicing them as quickly as
1633 possible, the I/O scheduler changes the maximum number of active async
1634 write I/Os according to the amount of dirty data in the pool. Since
1635 both throughput and latency typically increase with the number of
1636 concurrent operations issued to physical devices, reducing the
1637 burstiness in the number of concurrent operations also stabilizes the
1638 response time of operations from other -- and in particular synchronous
1639 -- queues. In broad strokes, the I/O scheduler will issue more
1640 concurrent operations from the async write queue as there's more dirty
1641 data in the pool.
1642 .sp
1643 Async Writes
1644 .sp
1645 The number of concurrent operations issued for the async write I/O class
1646 follows a piece-wise linear function defined by a few adjustable points.
1647 .nf
1648
1649 | o---------| <-- zfs_vdev_async_write_max_active
1650 ^ | /^ |
1651 | | / | |
1652 active | / | |
1653 I/O | / | |
1654 count | / | |
1655 | / | |
1656 |-------o | | <-- zfs_vdev_async_write_min_active
1657 0|_______^______|_________|
1658 0% | | 100% of zfs_dirty_data_max
1659 | |
1660 | `-- zfs_vdev_async_write_active_max_dirty_percent
1661 `--------- zfs_vdev_async_write_active_min_dirty_percent
1662
1663 .fi
1664 Until the amount of dirty data exceeds a minimum percentage of the dirty
1665 data allowed in the pool, the I/O scheduler will limit the number of
1666 concurrent operations to the minimum. As that threshold is crossed, the
1667 number of concurrent operations issued increases linearly to the maximum at
1668 the specified maximum percentage of the dirty data allowed in the pool.
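.sp
As a sketch of that interpolation (the exact rounding is an implementation
detail), with the defaults documented above and a pool whose dirty data
sits at 45% of \fBzfs_dirty_data_max\fR:
.sp
.nf
    fraction = (45 - 30) / (60 - 30) = 0.5
    limit    = min_active + fraction * (max_active - min_active)
             = 1 + 0.5 * (10 - 1), i.e. roughly 5 concurrent async writes
.fi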
1669 .sp
1670 Ideally, the amount of dirty data on a busy pool will stay in the sloped
1671 part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
1672 and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
1673 maximum percentage, this indicates that the rate of incoming data is
1674 greater than the rate that the backend storage can handle. In this case, we
1675 must further throttle incoming writes, as described in the next section.
1676
1677 .SH ZFS TRANSACTION DELAY
1678 We delay transactions when we've determined that the backend storage
1679 isn't able to accommodate the rate of incoming writes.
1680 .sp
1681 If there is already a transaction waiting, we delay relative to when
1682 that transaction will finish waiting. This way the calculated delay time
1683 is independent of the number of threads concurrently executing
1684 transactions.
1685 .sp
1686 If we are the only waiter, wait relative to when the transaction
1687 started, rather than the current time. This credits the transaction for
1688 "time already served", e.g. reading indirect blocks.
1689 .sp
1690 The minimum time for a transaction to take is calculated as:
1691 .nf
1692 min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
1693 min_time is then capped at 100 milliseconds.
1694 .fi
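.sp
A worked example (assuming \fBzfs_delay_scale\fR is expressed in
nanoseconds, which is consistent with the 500us midpoint shown below), with
\fBzfs_dirty_data_max\fR = 4 GB and the default
\fBzfs_delay_min_dirty_percent\fR of 60%:
.sp
.nf
    delays begin once dirty data exceeds 60% * 4 GB = 2.4 GB
    at 3.2 GB dirty (midway between 2.4 GB and 4 GB):
        min_time = 500,000 * (3.2 - 2.4) / (4.0 - 3.2)
                 = 500,000 ns, i.e. 500us or about 2,000 IOPS
.fi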
1695 .sp
1696 The delay has two degrees of freedom that can be adjusted via tunables. The
1697 percentage of dirty data at which we start to delay is defined by
1698 \fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
1699 \fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
1700 delay after writing at full speed has failed to keep up with the incoming write
1701 rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
1702 this variable determines the amount of delay at the midpoint of the curve.
1703 .sp
1704 .nf
1705 delay
1706 10ms +-------------------------------------------------------------*+
1707 | *|
1708 9ms + *+
1709 | *|
1710 8ms + *+
1711 | * |
1712 7ms + * +
1713 | * |
1714 6ms + * +
1715 | * |
1716 5ms + * +
1717 | * |
1718 4ms + * +
1719 | * |
1720 3ms + * +
1721 | * |
1722 2ms + (midpoint) * +
1723 | | ** |
1724 1ms + v *** +
1725 | zfs_delay_scale ----------> ******** |
1726 0 +-------------------------------------*********----------------+
1727 0% <- zfs_dirty_data_max -> 100%
1728 .fi
1729 .sp
1730 Note that since the delay is added to the outstanding time remaining on the
1731 most recent transaction, the delay is effectively the inverse of IOPS.
1732 Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
1733 was chosen such that small changes in the amount of accumulated dirty data
1734 in the first 3/4 of the curve yield relatively small differences in the
1735 amount of delay.
1736 .sp
1737 The effects can be easier to understand when the amount of delay is
1738 represented on a log scale:
1739 .sp
1740 .nf
1741 delay
1742 100ms +-------------------------------------------------------------++
1743 + +
1744 | |
1745 + *+
1746 10ms + *+
1747 + ** +
1748 | (midpoint) ** |
1749 + | ** +
1750 1ms + v **** +
1751 + zfs_delay_scale ----------> ***** +
1752 | **** |
1753 + **** +
1754 100us + ** +
1755 + * +
1756 | * |
1757 + * +
1758 10us + * +
1759 + +
1760 | |
1761 + +
1762 +--------------------------------------------------------------+
1763 0% <- zfs_dirty_data_max -> 100%
1764 .fi
1765 .sp
1766 Note here that only as the amount of dirty data approaches its limit does
1767 the delay start to increase rapidly. The goal of a properly tuned system
1768 should be to keep the amount of dirty data out of that range by first
1769 ensuring that the appropriate limits are set for the I/O scheduler to reach
1770 optimal throughput on the backend storage, and then by changing the value
1771 of \fBzfs_delay_scale\fR to increase the steepness of the curve.