2 .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
3 .\" The contents of this file are subject to the terms of the Common Development
4 .\" and Distribution License (the "License"). You may not use this file except
5 .\" in compliance with the License. You can obtain a copy of the license at
6 .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
8 .\" See the License for the specific language governing permissions and
9 .\" limitations under the License. When distributing Covered Code, include this
10 .\" CDDL HEADER in each file and include the License file at
11 .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
12 .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
13 .\" own identifying information:
14 .\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Nov 16, 2013"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
.LP
Description of the different parameters to the ZFS module.
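.sp
.LP
On Linux these tunables are exposed at runtime as files under
\fB/sys/module/zfs/parameters/\fR and can be set persistently with an
"options zfs <parameter>=<value>" line in a modprobe configuration file.
The Python sketch below is purely illustrative (the helper names are ours,
not part of any ZFS tool); it reads and writes those sysfs files and assumes
root privileges for writes:
.sp
.nf
# Minimal sketch: inspect and set ZFS module parameters via sysfs.
from pathlib import Path

PARAMS = Path("/sys/module/zfs/parameters")

def get_param(name):
    # Return the current value of a module parameter as a string.
    return (PARAMS / name).read_text().strip()

def set_param(name, value):
    # Set a runtime-writable parameter (requires root).
    (PARAMS / name).write_text(str(value))

print(get_param("zfs_txg_timeout"))   # e.g. "5"
set_param("zfs_txg_timeout", 5)
.fi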
.SS "Module parameters"
.sp
\fBl2arc_feed_again\fR (int)
Turbo L2ARC warmup
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_feed_min_ms\fR (ulong)
Min feed interval in milliseconds
Default value: \fB200\fR.
.sp
\fBl2arc_feed_secs\fR (ulong)
Seconds between L2ARC writing
Default value: \fB1\fR.
.sp
\fBl2arc_headroom\fR (ulong)
Number of max device writes to precache
Default value: \fB2\fR.
.sp
\fBl2arc_headroom_boost\fR (ulong)
Compressed l2arc_headroom multiplier
Default value: \fB200\fR.
.sp
\fBl2arc_nocompress\fR (int)
Skip compressing L2ARC buffers
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_noprefetch\fR (int)
Skip caching prefetched buffers
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBl2arc_norw\fR (int)
No reads during writes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBl2arc_write_boost\fR (ulong)
Extra write bytes during device warmup
Default value: \fB8,388,608\fR.
.sp
\fBl2arc_write_max\fR (ulong)
Max write bytes per interval
Default value: \fB8,388,608\fR.
.sp
\fBmetaslab_debug\fR (int)
Keep space maps in core to verify frees
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBspa_config_path\fR (charp)
SPA config file
Default value: \fB/etc/zfs/zpool.cache\fR.
.sp
\fBspa_asize_inflation\fR (int)
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
Default value: \fB24\fR.
.sp
\fBzfetch_array_rd_sz\fR (ulong)
Number of bytes in an array_read
Default value: \fB1,048,576\fR.
.sp
\fBzfetch_block_cap\fR (uint)
Max number of blocks to fetch at a time
Default value: \fB256\fR.
.sp
\fBzfetch_max_streams\fR (uint)
Max number of streams per zfetch
Default value: \fB8\fR.
.sp
\fBzfetch_min_sec_reap\fR (uint)
Min time before stream reclaim
Default value: \fB2\fR.
.sp
\fBzfs_arc_grow_retry\fR (int)
Seconds before growing arc size
Default value: \fB5\fR.
.sp
\fBzfs_arc_max\fR (ulong)
Max arc size
Default value: \fB0\fR.
.sp
\fBzfs_arc_memory_throttle_disable\fR (int)
Disable memory throttle
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_arc_meta_limit\fR (ulong)
Meta limit for arc size
Default value: \fB0\fR.
.sp
\fBzfs_arc_meta_prune\fR (int)
Bytes of meta data to prune
Default value: \fB1,048,576\fR.
.sp
\fBzfs_arc_min\fR (ulong)
Min arc size
Default value: \fB100\fR.
.sp
\fBzfs_arc_min_prefetch_lifespan\fR (int)
Min life of prefetch block
Default value: \fB100\fR.
.sp
\fBzfs_arc_p_min_shift\fR (int)
arc_c shift to calc min/max arc_p
Default value: \fB4\fR.
.sp
\fBzfs_arc_shrink_shift\fR (int)
log2(fraction of arc to reclaim)
Default value: \fB5\fR.
.sp
\fBzfs_autoimport_disable\fR (int)
Disable pool import at module load
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_dbuf_state_index\fR (int)
Calculate arc header index
Default value: \fB0\fR.
.sp
\fBzfs_deadman_enabled\fR (int)
Enable deadman timer
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_deadman_synctime_ms\fR (ulong)
Expiration time in milliseconds. This value has two meanings. First, it is
used to determine when the spa_deadman() logic should fire. By default the
spa_deadman() will fire if spa_sync() has not completed in 1000 seconds.
Second, the value determines if an I/O is considered "hung". Any I/O that
has not completed in zfs_deadman_synctime_ms is considered "hung", resulting
in a zevent being logged.
Default value: \fB1,000,000\fR.
.sp
\fBzfs_dedup_prefetch\fR (int)
Enable prefetching of dedup-ed blocks
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_delay_min_dirty_percent\fR (int)
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
Default value: \fB60\fR.
.sp
\fBzfs_delay_scale\fR (int)
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
.sp
Default value: \fB500,000\fR.
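.sp
To make the sizing rule above concrete, the short sketch below (an
illustration of ours, not part of ZFS) derives \fBzfs_delay_scale\fR from a
target operation rate; the default of 500,000 corresponds to roughly 2,000
operations per second:
.sp
.nf
# Illustrative only: derive zfs_delay_scale from a target op rate.
def delay_scale_for(target_ops_per_sec):
    # About 1 billion divided by the maximum operations per second.
    return round(1_000_000_000 / target_ops_per_sec)

print(delay_scale_for(2000))   # 500000, the default value
.fi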
.sp
\fBzfs_dirty_data_max\fR (int)
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
Default value: 10 percent of all memory, capped at \fBzfs_dirty_data_max_max\fR.
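.sp
The default can be reproduced with a short calculation (a sketch of ours
under the stated defaults, not ZFS source): 10 percent of physical memory,
capped at \fBzfs_dirty_data_max_max\fR.
.sp
.nf
# Illustrative calculation of the zfs_dirty_data_max default.
def dirty_data_max_default(phys_ram_bytes, dirty_data_max_max):
    # 10 percent of all memory, capped at zfs_dirty_data_max_max.
    return min(phys_ram_bytes // 10, dirty_data_max_max)

ram = 16 * 2**30                 # assume a 16 GiB machine
cap = ram // 4                   # default cap: 25% of physical RAM
print(dirty_data_max_default(ram, cap))   # 1717986918 bytes (~1.6 GiB)
.fi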
.sp
\fBzfs_dirty_data_max_max\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
Default value: 25% of physical RAM.
.sp
\fBzfs_dirty_data_max_max_percent\fR (int)
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: \fB25\fR.
.sp
\fBzfs_dirty_data_max_percent\fR (int)
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
Default value: 10%, subject to \fBzfs_dirty_data_max_max\fR.
.sp
\fBzfs_dirty_data_sync\fR (int)
Start syncing out a transaction group if there is at least this much dirty data.
Default value: \fB67,108,864\fR.
.sp
\fBzfs_vdev_async_read_max_active\fR (int)
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB3\fR.
.sp
\fBzfs_vdev_async_read_min_active\fR (int)
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB60\fR.
.sp
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
Default value: \fB30\fR.
.sp
\fBzfs_vdev_async_write_max_active\fR (int)
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_async_write_min_active\fR (int)
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_max_active\fR (int)
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
Default value: \fB1,000\fR.
.sp
\fBzfs_vdev_scrub_max_active\fR (int)
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB2\fR.
.sp
\fBzfs_vdev_scrub_min_active\fR (int)
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB1\fR.
.sp
\fBzfs_vdev_sync_read_max_active\fR (int)
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_read_min_active\fR (int)
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_max_active\fR (int)
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_vdev_sync_write_min_active\fR (int)
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
Default value: \fB10\fR.
.sp
\fBzfs_disable_dup_eviction\fR (int)
Disable duplicate buffer eviction
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_expire_snapshot\fR (int)
Seconds to expire .zfs/snapshot
Default value: \fB300\fR.
.sp
\fBzfs_flags\fR (int)
Set additional debugging flags
Default value: \fB1\fR.
.sp
\fBzfs_free_min_time_ms\fR (int)
Min milliseconds to free per txg
Default value: \fB1,000\fR.
.sp
\fBzfs_immediate_write_sz\fR (long)
Largest data block to write to the ZIL
Default value: \fB32,768\fR.
.sp
\fBzfs_mdcomp_disable\fR (int)
Disable metadata compression
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_no_scrub_io\fR (int)
Set for no scrub I/O
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_no_scrub_prefetch\fR (int)
Set for no scrub prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nocacheflush\fR (int)
Disable cache flushes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_nopwrite_enabled\fR (int)
Enable NOP writes
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.sp
\fBzfs_pd_blks_max\fR (int)
Max number of blocks to prefetch
Default value: \fB100\fR.
.sp
\fBzfs_prefetch_disable\fR (int)
Disable all ZFS prefetching
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_read_chunk_size\fR (long)
Bytes to read per chunk
Default value: \fB1,048,576\fR.
.sp
\fBzfs_read_history\fR (int)
Historic statistics for the last N reads
Default value: \fB0\fR.
.sp
\fBzfs_read_history_hits\fR (int)
Include cache hits in read history
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_recover\fR (int)
Set to attempt to recover from fatal errors. This should only be used as a
last resort, as it typically results in leaked space, or worse.
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_resilver_delay\fR (int)
Number of ticks to delay resilver
Default value: \fB2\fR.
.sp
\fBzfs_resilver_min_time_ms\fR (int)
Min milliseconds to resilver per txg
Default value: \fB3,000\fR.
.sp
\fBzfs_scan_idle\fR (int)
Idle window in clock ticks
Default value: \fB50\fR.
.sp
\fBzfs_scan_min_time_ms\fR (int)
Min milliseconds to scrub per txg
Default value: \fB1,000\fR.
.sp
\fBzfs_scrub_delay\fR (int)
Number of ticks to delay scrub
Default value: \fB4\fR.
.sp
\fBzfs_send_corrupt_data\fR (int)
Allow sending of corrupt data (ignore read/checksum errors when sending data)
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_sync_pass_deferred_free\fR (int)
Defer frees starting in this pass
Default value: \fB2\fR.
.sp
\fBzfs_sync_pass_dont_compress\fR (int)
Don't compress starting in this pass
Default value: \fB5\fR.
.sp
\fBzfs_sync_pass_rewrite\fR (int)
Rewrite new bps starting in this pass
Default value: \fB2\fR.
.sp
\fBzfs_top_maxinflight\fR (int)
Max I/Os per top-level vdev
Default value: \fB32\fR.
.sp
\fBzfs_txg_history\fR (int)
Historic statistics for the last N txgs
Default value: \fB0\fR.
.sp
\fBzfs_txg_timeout\fR (int)
Max seconds worth of delta per txg
Default value: \fB5\fR.
.sp
\fBzfs_vdev_aggregation_limit\fR (int)
Max vdev I/O aggregation size
Default value: \fB131,072\fR.
.sp
\fBzfs_vdev_cache_bshift\fR (int)
Shift size to inflate reads to
Default value: \fB16\fR.
.sp
\fBzfs_vdev_cache_max\fR (int)
Inflate reads smaller than max
Default value: \fB16,384\fR.
.sp
\fBzfs_vdev_cache_size\fR (int)
Total size of the per-disk cache
Default value: \fB0\fR.
.sp
\fBzfs_vdev_mirror_switch_us\fR (int)
Switch mirrors every N usecs
Default value: \fB10,000\fR.
.sp
\fBzfs_vdev_read_gap_limit\fR (int)
Aggregate read I/O over gap
Default value: \fB32,768\fR.
.sp
\fBzfs_vdev_scheduler\fR (charp)
I/O scheduler
Default value: \fBnoop\fR.
.sp
\fBzfs_vdev_write_gap_limit\fR (int)
Aggregate write I/O over gap
Default value: \fB4,096\fR.
.sp
\fBzfs_zevent_cols\fR (int)
Max event column width
Default value: \fB80\fR.
.sp
\fBzfs_zevent_console\fR (int)
Log events to the console
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzfs_zevent_len_max\fR (int)
Max event queue length
Default value: \fB0\fR.
.sp
\fBzil_replay_disable\fR (int)
Disable intent logging replay
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzil_slog_limit\fR (ulong)
Max commit bytes to separate log device
Default value: \fB1,048,576\fR.
.sp
\fBzio_bulk_flags\fR (int)
Additional flags to pass to bulk buffers
Default value: \fB0\fR.
.sp
\fBzio_delay_max\fR (int)
Max zio millisecond delay before posting event
Default value: \fB30,000\fR.
.sp
\fBzio_injection_enabled\fR (int)
Enable fault injection
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzio_requeue_io_start_cut_in_line\fR (int)
Prioritize requeued I/O
Default value: \fB0\fR.
.sp
\fBzvol_inhibit_dev\fR (uint)
Do not create zvol device nodes
Use \fB1\fR for yes and \fB0\fR for no (default).
.sp
\fBzvol_major\fR (uint)
Major number for zvol device
Default value: \fB230\fR.
.sp
\fBzvol_max_discard_blocks\fR (ulong)
Max number of blocks to discard at once
Default value: \fB16,384\fR.
.sp
\fBzvol_threads\fR (uint)
Number of threads for zvol device
Default value: \fB32\fR.
.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
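.sp
The selection logic described above can be summarized in a short sketch (a
simplified model of ours, not the actual vdev queue implementation); classes
are visited in priority order, first to satisfy each minimum, then to fill
up to each maximum:
.sp
.nf
# Simplified model of I/O class selection, in priority order.
CLASSES = ["sync_read", "sync_write", "async_read", "async_write", "scrub"]

def next_class(active, queued, limits, aggregate_max):
    # active/queued: per-class counts; limits: class -> (min, max).
    if sum(active.values()) >= aggregate_max:
        return None                    # aggregate maximum reached
    # First pass: bring every class up to its minimum.
    for c in CLASSES:
        if queued[c] and active[c] < limits[c][0]:
            return c
    # Second pass: fill classes up to their maximums.
    for c in CLASSES:
        if queued[c] and active[c] < limits[c][1]:
            return c
    return None                        # nothing eligible to issue
.fi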
.sp
In general, smaller values of max_active will lead to lower latency of
synchronous operations. Larger values of max_active may lead to higher
overall throughput, depending on underlying storage.
.sp
The ratio of the queues' max_active values determines the balance of
performance between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes to have higher latency and lower throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.sp
.nf
       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent
.fi
.sp
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
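.sp
Written out directly, the piece-wise linear function above looks like the
following sketch (an illustration of ours that mirrors the description and
the defaults on this page, not code copied from ZFS):
.sp
.nf
# Illustrative model of the async write active I/O limit.
def async_write_max(dirty, dirty_max,
                    min_active=1, max_active=10,   # queue defaults above
                    min_pct=30, max_pct=60):       # dirty thresholds above
    pct = 100.0 * dirty / dirty_max
    if pct <= min_pct:
        return min_active
    if pct >= max_pct:
        return max_active
    # Linear interpolation between the two thresholds.
    frac = (pct - min_pct) / (max_pct - min_pct)
    return round(min_active + frac * (max_active - min_active))

print(async_write_max(45 * 2**20, 100 * 2**20))    # 45% dirty -> 6
.fi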
.sp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.
.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.sp
.nf
    min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
    min_time is then capped at 100 milliseconds.
.fi
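.sp
As a worked example of the formula above (an illustration of ours using the
default tunables from this page, not code from ZFS itself), the sketch below
evaluates min_time in nanoseconds. Note that at the midpoint between the
delay threshold and \fBzfs_dirty_data_max\fR the two factors cancel, so the
delay equals \fBzfs_delay_scale\fR:
.sp
.nf
# Illustrative evaluation of the transaction delay formula.
def tx_delay_ns(dirty, dirty_max, scale=500_000, min_dirty_pct=60):
    lo = dirty_max * min_dirty_pct // 100   # start delaying here
    if dirty <= lo:
        return 0
    delay = scale * (dirty - lo) / (dirty_max - dirty)
    return min(delay, 100_000_000)          # cap at 100 milliseconds

m = 100 * 2**20                    # suppose zfs_dirty_data_max is 100 MiB
print(tx_delay_ns(80 * 2**20, m))  # midpoint of 60%..100%: 500000 ns (500us)
.fi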
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming write
rate. The scale of the curve is defined by \fBzfs_delay_scale\fR. Roughly speaking,
this variable determines the amount of delay at the midpoint of the curve.
.sp
.nf
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                           *  +
      |                                                           *  |
  4ms +                                                           *  +
      |                                                           *  |
  3ms +                                                          *   +
      |                                                          *   |
  2ms +                                              (midpoint)  *   +
      |                                                  |     **    |
  1ms +                                                  v ****      +
      |             zfs_delay_scale ---------->     ********         |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                            ** +
      |                                              (midpoint)  **   |
      +                                                  |     **     +
  1ms +                                                  v ****       +
      +             zfs_delay_scale ---------->        *****          +
      |                                             ****              |
      +                                          ****                 +
100us +                                        **                     +
      +                                       *                       +
      |                                      *                        |
      +                                     *                         +
 10us +                                     *                         +
      +                                                               +
      |                                                               |
      +                                                               +
      +---------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.