'\" te
.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" Copyright (c) 2019 by Delphix. All rights reserved.
.\" Copyright (c) 2019 Datto Inc.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.TH ZFS-MODULE-PARAMETERS 5 "Feb 15, 2019"
.SH NAME
zfs\-module\-parameters \- ZFS module parameters
.SH DESCRIPTION
.sp
.LP
Description of the different parameters to the ZFS module.

.SS "Module parameters"
.sp
.LP

.sp
.ne 2
.na
\fBdbuf_cache_max_bytes\fR (ulong)
.ad
.RS 12n
Maximum size in bytes of the dbuf cache. When set to \fB0\fR this value
defaults to \fB1/2^dbuf_cache_shift\fR (1/32) of the target ARC size, otherwise
the provided value in bytes is used. The behavior of the dbuf cache and its
associated settings can be observed via the \fB/proc/spl/kstat/zfs/dbufstats\fR
kstat.
.sp
Default value: \fB0\fR.
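.sp
As a sketch (assuming the usual Linux layout where module parameters appear
under \fB/sys/module/zfs/parameters\fR), the cache can be observed and the
limit raised at runtime:
.sp
.nf
# cat /proc/spl/kstat/zfs/dbufstats
# echo $((512 * 1024 * 1024)) > /sys/module/zfs/parameters/dbuf_cache_max_bytes
.fi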
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_max_bytes\fR (ulong)
.ad
.RS 12n
Maximum size in bytes of the metadata dbuf cache. When set to \fB0\fR this
value defaults to \fB1/2^dbuf_metadata_cache_shift\fR (1/64) of the target ARC
size, otherwise the provided value in bytes is used. The behavior of the
metadata dbuf cache and its associated settings can be observed via the
\fB/proc/spl/kstat/zfs/dbufstats\fR kstat.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_hiwater_pct\fR (uint)
.ad
.RS 12n
The percentage over \fBdbuf_cache_max_bytes\fR when dbufs must be evicted
directly.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_lowater_pct\fR (uint)
.ad
.RS 12n
The percentage below \fBdbuf_cache_max_bytes\fR when the evict thread stops
evicting dbufs.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf cache, \fBdbuf_cache_max_bytes\fR, to a log2 fraction
of the target ARC size.
.sp
Default value: \fB5\fR.
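.sp
A brief worked example of the resulting limit (shell arithmetic only, no ZFS
interaction):
.sp
.nf
# With a target ARC size of 8 GiB and the default shift of 5,
# the dbuf cache is capped at 8 GiB / 2^5 = 256 MiB.
$ echo $((8 * 1024 * 1024 * 1024 >> 5))
268435456
.fi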
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf metadata cache, \fBdbuf_metadata_cache_max_bytes\fR,
to a log2 fraction of the target ARC size.
.sp
Default value: \fB6\fR.
.RE

.sp
.ne 2
.na
\fBdmu_prefetch_max\fR (int)
.ad
.RS 12n
Limit a single prefetch call to this size (in bytes). This helps to limit the
amount of memory that can be used by prefetching.
.sp
Default value: \fB134,217,728\fR (128MB).
.RE

.sp
.ne 2
.na
\fBignore_hole_birth\fR (int)
.ad
.RS 12n
This is an alias for \fBsend_holes_without_birth_time\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_again\fR (int)
.ad
.RS 12n
Turbo L2ARC warm-up. When the L2ARC is cold the fill interval will be made as
short as possible.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_min_ms\fR (ulong)
.ad
.RS 12n
Minimum L2ARC feed interval in milliseconds. Only applicable when
\fBl2arc_feed_again=1\fR.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_secs\fR (ulong)
.ad
.RS 12n
Seconds between L2ARC write passes.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom\fR (ulong)
.ad
.RS 12n
How far through the ARC lists to search for L2ARC cacheable content, expressed
as a multiplier of \fBl2arc_write_max\fR.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom_boost\fR (ulong)
.ad
.RS 12n
Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
successfully compressed before writing. A value of \fB100\fR disables this
feature.
.sp
Default value: \fB200\fR%.
.RE

.sp
.ne 2
.na
\fBl2arc_noprefetch\fR (int)
.ad
.RS 12n
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_norw\fR (int)
.ad
.RS 12n
No reads from an L2ARC device while writes to it are in progress.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBl2arc_write_boost\fR (ulong)
.ad
.RS 12n
Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
while they remain cold.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_write_max\fR (ulong)
.ad
.RS 12n
Max write bytes per feed interval.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_aliquot\fR (ulong)
.ad
.RS 12n
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
.sp
Default value: \fB524,288\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_bias_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_force_ganging\fR (ulong)
.ad
.RS 12n
Make blocks above this size (in bytes) be gang blocks. This option is used
by the test suite to facilitate testing.
.sp
Default value: \fB16,777,217\fR.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_segment_weight_enabled\fR (int)
.ad
.RS 12n
Enable/disable segment-based metaslab selection.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_switch_threshold\fR (int)
.ad
.RS 12n
When using segment-based metaslab selection, continue allocating
from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
worth of buckets have been exhausted.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_load\fR (int)
.ad
.RS 12n
Load all metaslabs during pool import.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_unload\fR (int)
.ad
.RS 12n
Prevent metaslabs from being unloaded.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_fragmentation_factor_enabled\fR (int)
.ad
.RS 12n
Enable use of the fragmentation metric in computing metaslab weights.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_df_max_search\fR (int)
.ad
.RS 12n
Maximum distance to search forward from the last offset. Without this limit,
fragmented pools can see >100,000 iterations and metaslab_block_picker()
becomes the performance limiting factor on high-performance storage.

With the default setting of 16MB, we typically see less than 500 iterations,
even with very fragmented, ashift=9 pools. The maximum number of iterations
possible is: \fBmetaslab_df_max_search / (2 * (1<<ashift))\fR.
With the default setting of 16MB this is 16*1024 (with ashift=9) or 2048
(with ashift=12).
.sp
Default value: \fB16,777,216\fR (16MB).
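.sp
The iteration bound above can be checked with shell arithmetic:
.sp
.nf
# Maximum iterations = metaslab_df_max_search / (2 * (1 << ashift))
$ echo $((16777216 / (2 * (1 << 9))))    # ashift=9
16384
$ echo $((16777216 / (2 * (1 << 12))))   # ashift=12
2048
.fi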
.RE

.sp
.ne 2
.na
\fBmetaslab_df_use_largest_segment\fR (int)
.ad
.RS 12n
If we are not searching forward (due to metaslab_df_max_search,
metaslab_df_free_pct, or metaslab_df_alloc_threshold), this tunable controls
what segment is used. If it is set, we will use the largest free segment.
If it is not set, we will use a segment of exactly the requested size (or
larger).
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_vdev_default_ms_count\fR (int)
.ad
.RS 12n
When a vdev is added, target this number of metaslabs per top-level vdev.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_min_ms_count\fR (int)
.ad
.RS 12n
Minimum number of metaslabs to create in a top-level vdev.
.sp
Default value: \fB16\fR.
.RE

.sp
.ne 2
.na
\fBvdev_ms_count_limit\fR (int)
.ad
.RS 12n
Practical upper limit of total metaslabs per top-level vdev.
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_preload_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group preloading.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_lba_weighting_enabled\fR (int)
.ad
.RS 12n
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBsend_holes_without_birth_time\fR (int)
.ad
.RS 12n
When set, the hole_birth optimization will not be used, and all holes will
always be sent during a zfs send. This is useful if you suspect your datasets
are affected by a bug in hole_birth.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBspa_config_path\fR (charp)
.ad
.RS 12n
SPA config file.
.sp
Default value: \fB/etc/zfs/zpool.cache\fR.
.RE

.sp
.ne 2
.na
\fBspa_asize_inflation\fR (int)
.ad
.RS 12n
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
.sp
Default value: \fB24\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_print_vdev_tree\fR (int)
.ad
.RS 12n
Whether to print the vdev tree in the debugging message buffer during pool
import. Use 0 to disable and 1 to enable.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_data\fR (int)
.ad
.RS 12n
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_metadata\fR (int)
.ad
.RS 12n
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_maxinflight\fR (int)
.ad
.RS 12n
Maximum concurrent I/Os during the traversal performed during an "extreme
rewind" (\fB-X\fR) pool import.
.sp
Default value: \fB10000\fR.
.RE

.sp
.ne 2
.na
\fBspa_slop_shift\fR (int)
.ad
.RS 12n
Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
.sp
Default value: \fB5\fR.
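.sp
A worked example of the reserved slop (shell arithmetic only):
.sp
.nf
# For a 10 TiB pool at the default shift of 5, the reserved space is
# pool_size / 2^5, i.e. roughly 3.2% of the pool.
$ echo $((10 * 1024 * 1024 * 1024 * 1024 >> 5))
343597383680
.fi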
.RE

.sp
.ne 2
.na
\fBvdev_removal_max_span\fR (int)
.ad
.RS 12n
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space (in bytes)
which will be included as "unnecessary" data in a chunk of copied data.

The default value here was chosen to align with
\fBzfs_vdev_read_gap_limit\fR, which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzap_iterate_prefetch\fR (int)
.ad
.RS 12n
If this is set, when we start iterating over a ZAP object, ZFS will prefetch
the entire object (all leaf blocks). However, this is limited by
\fBdmu_prefetch_max\fR.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBzfetch_array_rd_sz\fR (ulong)
.ad
.RS 12n
If prefetching is enabled, disable prefetching for reads larger than this size.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_distance\fR (uint)
.ad
.RS 12n
Max bytes to prefetch per stream (default 8MB).
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_streams\fR (uint)
.ad
.RS 12n
Max number of streams per zfetch (prefetch streams per file).
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_min_sec_reap\fR (uint)
.ad
.RS 12n
Min time before an active prefetch stream can be reclaimed, in seconds.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_abd_scatter_min_size\fR (uint)
.ad
.RS 12n
This is the minimum allocation size that will use scatter (page-based)
ABDs. Smaller allocations will use linear ABDs.
.sp
Default value: \fB1536\fR (512B and 1KB allocations will be linear).
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit\fR (ulong)
.ad
.RS 12n
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling to the amount of dnode metadata, and defaults to 0,
which indicates that a percentage of the ARC meta buffers, based on
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.

See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
than in response to overall demand for non-metadata.

.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage that can be consumed by dnodes of ARC meta buffers.
.sp
See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_reduce_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.

.sp
Default value: \fB10\fR% of the number of dnodes in the ARC.
.RE

.sp
.ne 2
.na
\fBzfs_arc_average_blocksize\fR (int)
.ad
.RS 12n
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.

.sp
Default value: \fB8192\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_evict_batch_limit\fR (int)
.ad
.RS 12n
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_grow_retry\fR (int)
.ad
.RS 12n
If set to a non-zero value, this replaces the arc_grow_retry value with this
value. The arc_grow_retry value (default 5) is the number of seconds the ARC
will wait before trying to resume growth after a memory pressure event.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_lotsfree_percent\fR (int)
.ad
.RS 12n
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_max\fR (ulong)
.ad
.RS 12n
Max size of the ARC in bytes. If set to 0 then it will consume 1/2 of system
RAM. This value must be at least 67108864 (64 megabytes).
.sp
This value can be changed dynamically with some caveats. It cannot be set back
to 0 while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.sp
Default value: \fB0\fR.
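.sp
As one hedged example of setting a persistent limit, the parameter can be
supplied at module load time via a modprobe configuration file (the file name
below is a common convention, not mandated by ZFS):
.sp
.nf
# echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
.fi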
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
.ad
.RS 12n
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
.sp
Default value: \fB4096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit\fR (ulong)
.ad
.RS 12n
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0, which indicates that a percentage of the ARC, based
on \fBzfs_arc_meta_limit_percent\fR, may be used for meta data.
.sp
This value may be changed dynamically except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC buffers that can be used for meta data.

See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.

.sp
Default value: \fB75\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_min\fR (ulong)
.ad
.RS 12n
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_prune\fR (int)
.ad
.RS 12n
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
.sp
Default value: \fB10,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_strategy\fR (int)
.ad
.RS 12n
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
A value of 0 (META_ONLY) will evict only the ARC meta data buffers.
A value of 1 (BALANCED) indicates that additional data buffers may be evicted
if that is required in order to evict the required number of meta data buffers.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min\fR (ulong)
.ad
.RS 12n
Min size of the ARC in bytes. If set to 0 then arc_c_min will default to
consuming the larger of 32M or 1/32 of total system memory.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time prefetched blocks are locked in the ARC, specified in ms.
A value of \fB0\fR will default to 1000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prescient_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time "prescient prefetched" blocks are locked in the ARC, specified
in ms. These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them. A value of \fB0\fR will default to 6000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_missing_tvds\fR (int)
.ad
.RS 12n
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multilist_num_sublists\fR (int)
.ad
.RS 12n
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state, and also applies to other uses of the
multilist data structure.
.sp
Default value: \fB4\fR or the number of online CPUs, whichever is greater.
.RE

.sp
.ne 2
.na
\fBzfs_arc_overflow_shift\fR (int)
.ad
.RS 12n
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".

The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (0.3%) of the target size.

When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
.sp
Default value: \fB8\fR.
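.sp
A brief worked example of the threshold (shell arithmetic only):
.sp
.nf
# With arc_c = 4 GiB and the default shift of 8, the ARC is considered
# overflowing once it exceeds the target by 4 GiB >> 8 = 16 MiB.
$ echo $((4 * 1024 * 1024 * 1024 >> 8))
16777216
.fi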
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_min_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_p_min_shift (default 4)
with the new value.
arc_p_min_shift is used as a shift of arc_c when calculating both the
minimum and maximum arc_p.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_dampener_disable\fR (int)
.ad
.RS 12n
Disable arc_p adapt dampener.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_arc_shrink_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_shrink_shift (default 7)
with the new value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_pc_percent\fR (uint)
.ad
.RS 12n
Percent of pagecache to reclaim ARC to.

This tunable allows the ZFS ARC to play more nicely with the kernel's LRU
pagecache. It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
zfs_arc_min if necessary. This value is specified as percent of pagecache
size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
only operates during memory pressure/reclaim.
.sp
Default value: \fB0\fR% (disabled).
.RE

.sp
.ne 2
.na
\fBzfs_arc_sys_free\fR (ulong)
.ad
.RS 12n
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_autoimport_disable\fR (int)
.ad
.RS 12n
Disable pool import at module load by ignoring the cache file (typically
\fB/etc/zfs/zpool.cache\fR).
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_checksums_per_second\fR (int)
.ad
.RS 12n
Rate limit checksum events to this many per second. Note that this should
not be set below the zed thresholds (currently 10 checksums over 10 sec)
or else zed may not trigger any action.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_commit_timeout_pct\fR (int)
.ad
.RS 12n
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage. The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.sp
Default value: \fB5\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_condense_indirect_vdevs_enable\fR (int)
.ad
.RS 12n
Enable condensing indirect vdev mappings. When set to a non-zero value,
attempt to condense indirect vdev mappings if the mapping uses more than
\fBzfs_condense_min_mapping_bytes\fR bytes of memory and if the obsolete
space map object uses more than \fBzfs_condense_max_obsolete_bytes\fR
bytes on-disk. The condensing process is an attempt to save memory by
removing obsolete mappings.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_max_obsolete_bytes\fR (ulong)
.ad
.RS 12n
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
(see \fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB1,073,741,824\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_min_mapping_bytes\fR (ulong)
.ad
.RS 12n
Minimum size vdev mapping to attempt to condense (see
\fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_enable\fR (int)
.ad
.RS 12n
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled; to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
.sp
Default value: \fB0\fR.
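.sp
For example (paths as described above; the parameter file assumes the usual
Linux \fB/sys/module\fR layout):
.sp
.nf
# echo 1 > /sys/module/zfs/parameters/zfs_dbgmsg_enable
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg    # clear the log
.fi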
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_maxsize\fR (int)
.ad
.RS 12n
The maximum size in bytes of the internal ZFS debug log.
.sp
Default value: \fB4M\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbuf_state_index\fR (int)
.ad
.RS 12n
This feature is currently unused. It is normally used for controlling what
reporting is available under /proc/spl/kstat/zfs.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_enabled\fR (int)
.ad
.RS 12n
When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
milliseconds, or when an individual I/O takes longer than
\fBzfs_deadman_ziotime_ms\fR milliseconds, then the operation is considered to
be "hung". If \fBzfs_deadman_enabled\fR is set then the deadman behavior is
invoked as described by the \fBzfs_deadman_failmode\fR module option.
By default the deadman is enabled and configured to \fBwait\fR which results
in "hung" I/Os only being logged. The deadman is automatically disabled
when a pool gets suspended.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_failmode\fR (charp)
.ad
.RS 12n
Controls the failure behavior when the deadman detects a "hung" I/O. Valid
values are \fBwait\fR, \fBcontinue\fR, and \fBpanic\fR.
.sp
\fBwait\fR - Wait for a "hung" I/O to complete. For each "hung" I/O a
"deadman" event will be posted describing that I/O.
.sp
\fBcontinue\fR - Attempt to recover from a "hung" I/O by re-dispatching it
to the I/O pipeline if possible.
.sp
\fBpanic\fR - Panic the system. This can be used to facilitate an automatic
fail-over to a properly configured fail-over partner.
.sp
Default value: \fBwait\fR.
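.sp
For example, a cluster node intended to fail over automatically might select
the \fBpanic\fR behavior at runtime (path assumes the usual Linux layout):
.sp
.nf
# echo panic > /sys/module/zfs/parameters/zfs_deadman_failmode
.fi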
.RE

.sp
.ne 2
.na
\fBzfs_deadman_checktime_ms\fR (int)
.ad
.RS 12n
Check time in milliseconds. This defines the frequency at which we check
for hung I/O and potentially invoke the \fBzfs_deadman_failmode\fR behavior.
.sp
Default value: \fB60,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_synctime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
\fBzfs_deadman_checktime_ms\fR milliseconds until the pool sync completes.
.sp
Default value: \fB600,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_ziotime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung". As long as the I/O
remains "hung" the deadman will be invoked every \fBzfs_deadman_checktime_ms\fR
milliseconds until the I/O completes.
.sp
Default value: \fB300,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dedup_prefetch\fR (int)
.ad
.RS 12n
Enable prefetching dedup-ed blocks.
.sp
Use \fB1\fR for yes and \fB0\fR to disable (default).
.RE

.sp
.ne 2
.na
\fBzfs_delay_min_dirty_percent\fR (int)
.ad
.RS 12n
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB60\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_delay_scale\fR (int)
.ad
.RS 12n
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
.sp
Default value: \fB500,000\fR.
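.sp
A sketch of the suggested calculation (the 20,000 ops/sec figure is purely
illustrative):
.sp
.nf
# For a pool sustaining roughly 20,000 write operations per second:
$ echo $((1000000000 / 20000))
50000
# echo 50000 > /sys/module/zfs/parameters/zfs_delay_scale
.fi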
.RE

.sp
.ne 2
.na
\fBzfs_slow_io_events_per_second\fR (int)
.ad
.RS 12n
Rate limit delay zevents (which report slow I/Os) to this many per second.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_unlink_suspend_progress\fR (uint)
.ad
.RS 12n
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked. Once this option has been
disabled and the dataset is remounted, the pending unlinks will be processed
and the freed space returned to the pool.
This option is used by the test suite to facilitate testing.
.sp
Use \fB0\fR (default) to allow progress and \fB1\fR to pause progress.
.RE

.sp
.ne 2
.na
\fBzfs_delete_blocks\fR (ulong)
.ad
.RS 12n
This is used to define a large file for the purposes of delete. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously, while smaller files are deleted synchronously. Decreasing this
value will reduce the time spent in an unlink(2) system call at the expense of
a longer delay before the freed space is available.
.sp
Default value: \fB20,480\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max\fR (int)
.ad
.RS 12n
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR% of physical RAM, capped at \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR% of physical RAM.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max_percent\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_percent\fR (int)
.ad
.RS 12n
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR%, subject to \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_sync_percent\fR (int)
.ad
.RS 12n
Start syncing out a transaction group if there's at least this much dirty data
as a percentage of \fBzfs_dirty_data_max\fR. This should be less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR.
.sp
Default value: \fB20\fR% of \fBzfs_dirty_data_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_fletcher_4_impl\fR (string)
.ad
.RS 12n
Select a fletcher 4 implementation.
.sp
Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR,
\fBavx2\fR, \fBavx512f\fR, and \fBaarch64_neon\fR.
All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction
set extensions to be available and will only appear if ZFS detects that they
are present at runtime. If multiple implementations of fletcher 4 are
available, the \fBfastest\fR will be chosen using a micro benchmark. Selecting
\fBscalar\fR results in the original CPU-based calculation being used.
Selecting any option other than \fBfastest\fR and \fBscalar\fR results in
vector instructions from the respective CPU instruction set being used.
.sp
Default value: \fBfastest\fR.
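.sp
As a sketch, the active and available implementations can be inspected and
changed at runtime. Note that depending on the build, this parameter may be
exposed by the \fBzcommon\fR module rather than \fBzfs\fR:
.sp
.nf
# cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
# echo scalar > /sys/module/zcommon/parameters/zfs_fletcher_4_impl
.fi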
.RE

.sp
.ne 2
.na
\fBzfs_free_bpobj_enabled\fR (int)
.ad
.RS 12n
Enable/disable the processing of the free_bpobj object.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_async_block_max_blocks\fR (ulong)
.ad
.RS 12n
Maximum number of blocks freed in a single txg.
.sp
Default value: \fB100,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_override_estimate_recordsize\fR (ulong)
.ad
.RS 12n
Record size calculation override for zfs send estimates.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB3\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_read_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_max_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has more than
\fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_max_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB60\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_active_min_dirty_percent\fR (int)
.ad
.RS 12n
When the pool has less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use
\fBzfs_vdev_async_write_min_active\fR to limit active async writes. If
the dirty data is between min and max, the active I/O limit is linearly
interpolated. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB30\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_max_active\fR (int)
.ad
.RS 12n
Maximum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_async_write_min_active\fR (int)
.ad
.RS 12n
Minimum asynchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Lower values are associated with better latency on rotational media but poorer
resilver performance. The default value of 2 was chosen as a compromise. A
value of 3 has been shown to improve resilver performance further at a cost of
further increasing latency.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_initializing_max_active\fR (int)
.ad
.RS 12n
Maximum initializing I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_initializing_min_active\fR (int)
.ad
.RS 12n
Minimum initializing I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_max_active\fR (int)
.ad
.RS 12n
The maximum number of I/Os active to each device. Ideally, this will be >=
the sum of each queue's max_active. It must be at least the sum of each
queue's min_active. See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_removal_max_active\fR (int)
.ad
.RS 12n
Maximum removal I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_removal_min_active\fR (int)
.ad
.RS 12n
Minimum removal I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_max_active\fR (int)
.ad
.RS 12n
Maximum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_scrub_min_active\fR (int)
.ad
.RS 12n
Minimum scrub I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_read_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous read I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_max_active\fR (int)
.ad
.RS 12n
Maximum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_sync_write_min_active\fR (int)
.ad
.RS 12n
Minimum synchronous write I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_trim_max_active\fR (int)
.ad
.RS 12n
Maximum trim/discard I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_trim_min_active\fR (int)
.ad
.RS 12n
Minimum trim/discard I/Os active to each device.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_queue_depth_pct\fR (int)
.ad
.RS 12n
Maximum number of queued allocations per top-level vdev expressed as
a percentage of \fBzfs_vdev_async_write_max_active\fR which allows the
system to detect devices that are more capable of handling allocations
and to allocate more blocks to those devices. It allows for dynamic
allocation distribution when devices are imbalanced as fuller devices
will tend to be slower than empty devices.

See also \fBzio_dva_throttle_enabled\fR.
.sp
Default value: \fB1000\fR%.
.RE

29714574 TF |
1645 | .sp |
1646 | .ne 2 | |
1647 | .na | |
1648 | \fBzfs_expire_snapshot\fR (int) | |
1649 | .ad | |
1650 | .RS 12n | |
1651 | Seconds to expire .zfs/snapshot | |
1652 | .sp | |
1653 | Default value: \fB300\fR. | |
1654 | .RE | |
1655 | ||
0500e835 BB |
1656 | .sp |
1657 | .ne 2 | |
1658 | .na | |
1659 | \fBzfs_admin_snapshot\fR (int) | |
1660 | .ad | |
1661 | .RS 12n | |
1662 | Allow the creation, removal, or renaming of entries in the .zfs/snapshot | |
1663 | directory to cause the creation, destruction, or renaming of snapshots. | |
1664 | When enabled this functionality works both locally and over NFS exports | |
1665 | which have the 'no_root_squash' option set. This functionality is disabled | |
1666 | by default. | |
1667 | .sp | |
1668 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
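.sp
For example, a minimal sketch of using this feature (the pool mountpoint
\fB/tank/fs\fR and the snapshot name are hypothetical):
.sp
.nf
    # Enable the feature, then create a snapshot by creating a directory.
    echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
    mkdir /tank/fs/.zfs/snapshot/before-upgrade
.fi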
1669 | .RE | |
1670 | ||
29714574 TF |
1671 | .sp |
1672 | .ne 2 | |
1673 | .na | |
1674 | \fBzfs_flags\fR (int) | |
1675 | .ad | |
1676 | .RS 12n | |
33b6dbbc NB |
1677 | Set additional debugging flags. The following flags may be bitwise-or'd |
1678 | together. | |
1679 | .sp | |
1680 | .TS | |
1681 | box; | |
1682 | rB lB | |
1683 | lB lB | |
1684 | r l. | |
1685 | Value Symbolic Name | |
1686 | Description | |
1687 | _ | |
1688 | 1 ZFS_DEBUG_DPRINTF | |
1689 | Enable dprintf entries in the debug log. | |
1690 | _ | |
1691 | 2 ZFS_DEBUG_DBUF_VERIFY * | |
1692 | Enable extra dbuf verifications. | |
1693 | _ | |
1694 | 4 ZFS_DEBUG_DNODE_VERIFY * | |
1695 | Enable extra dnode verifications. | |
1696 | _ | |
1697 | 8 ZFS_DEBUG_SNAPNAMES | |
1698 | Enable snapshot name verification. | |
1699 | _ | |
1700 | 16 ZFS_DEBUG_MODIFY | |
1701 | Check for illegally modified ARC buffers. | |
1702 | _ | |
33b6dbbc NB |
1703 | 64 ZFS_DEBUG_ZIO_FREE |
1704 | Enable verification of block frees. | |
1705 | _ | |
1706 | 128 ZFS_DEBUG_HISTOGRAM_VERIFY | |
1707 | Enable extra spacemap histogram verifications. | |
8740cf4a NB |
1708 | _ |
1709 | 256 ZFS_DEBUG_METASLAB_VERIFY | |
1710 | Verify space accounting on disk matches in-core range_trees. | |
1711 | _ | |
1712 | 512 ZFS_DEBUG_SET_ERROR | |
1713 | Enable SET_ERROR and dprintf entries in the debug log. | |
1b939560 BB |
1714 | _ |
1715 | 1024 ZFS_DEBUG_INDIRECT_REMAP | |
1716 | Verify split blocks created by device removal. | |
1717 | _ | |
1718 | 2048 ZFS_DEBUG_TRIM | |
1719 | Verify TRIM ranges are always within the allocatable range tree. | |
33b6dbbc NB |
1720 | .TE |
1721 | .sp | |
1722 | * Requires debug build. | |
29714574 | 1723 | .sp |
33b6dbbc | 1724 | Default value: \fB0\fR. |
29714574 TF |
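.sp
For example, a sketch of enabling \fBZFS_DEBUG_DPRINTF\fR (1) together with
\fBZFS_DEBUG_SET_ERROR\fR (512) by writing their bitwise OR, 1 | 512 = 513
(the sysfs path assumes a standard Linux module install):
.sp
.nf
    echo 513 > /sys/module/zfs/parameters/zfs_flags
.fi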
1725 | .RE |
1726 | ||
fbeddd60 MA |
1727 | .sp |
1728 | .ne 2 | |
1729 | .na | |
1730 | \fBzfs_free_leak_on_eio\fR (int) | |
1731 | .ad | |
1732 | .RS 12n | |
1733 | If destroy encounters an EIO while reading metadata (e.g. indirect | |
1734 | blocks), space referenced by the missing metadata can not be freed. | |
1735 | Normally this causes the background destroy to become "stalled", as | |
1736 | it is unable to make forward progress. While in this stalled state, | |
1737 | all remaining space to free from the error-encountering filesystem is | |
1738 | "temporarily leaked". Set this flag to cause it to ignore the EIO, | |
1739 | permanently leak the space from indirect blocks that can not be read, | |
1740 | and continue to free everything else that it can. | |
1741 | ||
1742 | The default, "stalling" behavior is useful if the storage partially | |
1743 | fails (i.e. some but not all i/os fail), and then later recovers. In | |
1744 | this case, we will be able to continue pool operations while it is | |
1745 | partially failed, and when it recovers, we can continue to free the | |
1746 | space, with no leaks. However, note that this case is actually | |
1747 | fairly rare. | |
1748 | ||
1749 | Typically pools either (a) fail completely (but perhaps temporarily, | |
1750 | e.g. a top-level vdev going offline), or (b) have localized, | |
1751 | permanent errors (e.g. disk returns the wrong data due to bit flip or | |
1752 | firmware bug). In case (a), this setting does not matter because the | |
1753 | pool will be suspended and the sync thread will not be able to make | |
1754 | forward progress regardless. In case (b), because the error is | |
1755 | permanent, the best we can do is leak the minimum amount of space, | |
1756 | which is what setting this flag will do. Therefore, it is reasonable | |
1757 | for this flag to normally be set, but we chose the more conservative | |
1758 | approach of not setting it, so that there is no possibility of | |
1759 | leaking space in the "partial temporary" failure case. | |
1760 | .sp | |
1761 | Default value: \fB0\fR. | |
1762 | .RE | |
1763 | ||
29714574 TF |
1764 | .sp |
1765 | .ne 2 | |
1766 | .na | |
1767 | \fBzfs_free_min_time_ms\fR (int) | |
1768 | .ad | |
1769 | .RS 12n | |
6146e17e | 1770 | During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum |
83426735 | 1771 | of this much time will be spent working on freeing blocks per txg. |
29714574 TF |
1772 | .sp |
1773 | Default value: \fB1,000\fR. | |
1774 | .RE | |
1775 | ||
1776 | .sp | |
1777 | .ne 2 | |
1778 | .na | |
1779 | \fBzfs_immediate_write_sz\fR (long) | |
1780 | .ad | |
1781 | .RS 12n | |
83426735 | 1782 | Largest data block to write to the ZIL. Larger blocks will be treated as if the
6146e17e | 1783 | dataset being written to had the property setting \fBlogbias=throughput\fR. |
29714574 TF |
1784 | .sp |
1785 | Default value: \fB32,768\fR. | |
1786 | .RE | |
1787 | ||
619f0976 GW |
1788 | .sp |
1789 | .ne 2 | |
1790 | .na | |
1791 | \fBzfs_initialize_value\fR (ulong) | |
1792 | .ad | |
1793 | .RS 12n | |
1794 | Pattern written to vdev free space by \fBzpool initialize\fR. | |
1795 | .sp | |
1796 | Default value: \fB16,045,690,984,833,335,022\fR (0xdeadbeefdeadbeee). | |
1797 | .RE | |
1798 | ||
917f475f JG |
1799 | .sp |
1800 | .ne 2 | |
1801 | .na | |
1802 | \fBzfs_lua_max_instrlimit\fR (ulong) | |
1803 | .ad | |
1804 | .RS 12n | |
1805 | The maximum execution time limit that can be set for a ZFS channel program, | |
1806 | specified as a number of Lua instructions. | |
1807 | .sp | |
1808 | Default value: \fB100,000,000\fR. | |
1809 | .RE | |
1810 | ||
1811 | .sp | |
1812 | .ne 2 | |
1813 | .na | |
1814 | \fBzfs_lua_max_memlimit\fR (ulong) | |
1815 | .ad | |
1816 | .RS 12n | |
1817 | The maximum memory limit that can be set for a ZFS channel program, specified | |
1818 | in bytes. | |
1819 | .sp | |
1820 | Default value: \fB104,857,600\fR. | |
1821 | .RE | |
1822 | ||
a7ed98d8 SD |
1823 | .sp |
1824 | .ne 2 | |
1825 | .na | |
1826 | \fBzfs_max_dataset_nesting\fR (int) | |
1827 | .ad | |
1828 | .RS 12n | |
1829 | The maximum depth of nested datasets. This value can be tuned temporarily to | |
1830 | fix existing datasets that exceed the predefined limit. | |
1831 | .sp | |
1832 | Default value: \fB50\fR. | |
1833 | .RE | |
1834 | ||
f1512ee6 MA |
1835 | .sp |
1836 | .ne 2 | |
1837 | .na | |
1838 | \fBzfs_max_recordsize\fR (int) | |
1839 | .ad | |
1840 | .RS 12n | |
1841 | We currently support block sizes from 512 bytes to 16MB. The benefits of | |
ad796b8a | 1842 | larger blocks, and thus larger I/O, need to be weighed against the cost of |
f1512ee6 MA |
1843 | COWing a giant block to modify one byte. Additionally, very large blocks |
1844 | can have an impact on i/o latency, and also potentially on the memory | |
1845 | allocator. Therefore, we do not allow the recordsize to be set larger than | |
1846 | zfs_max_recordsize (default 1MB). Larger blocks can be created by changing | |
1847 | this tunable, and pools with larger blocks can always be imported and used, | |
1848 | regardless of this setting. | |
1849 | .sp | |
1850 | Default value: \fB1,048,576\fR. | |
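.sp
For example, a sketch of raising the limit and then using a larger record
size (the dataset name is hypothetical):
.sp
.nf
    # Allow records up to 16MB, then opt a dataset in to 4MB records.
    echo 16777216 > /sys/module/zfs/parameters/zfs_max_recordsize
    zfs set recordsize=4M tank/backups
.fi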
1851 | .RE | |
1852 | ||
f3a7f661 GW |
1853 | .sp |
1854 | .ne 2 | |
1855 | .na | |
1856 | \fBzfs_metaslab_fragmentation_threshold\fR (int) | |
1857 | .ad | |
1858 | .RS 12n | |
1859 | Allow metaslabs to keep their active state as long as their fragmentation | |
1860 | percentage is less than or equal to this value. An active metaslab that | |
1861 | exceeds this threshold will no longer keep its active status allowing | |
1862 | better metaslabs to be selected. | |
1863 | .sp | |
1864 | Default value: \fB70\fR. | |
1865 | .RE | |
1866 | ||
1867 | .sp | |
1868 | .ne 2 | |
1869 | .na | |
1870 | \fBzfs_mg_fragmentation_threshold\fR (int) | |
1871 | .ad | |
1872 | .RS 12n | |
1873 | Metaslab groups are considered eligible for allocations if their | |
83426735 | 1874 | fragmentation metric (measured as a percentage) is less than or equal to |
f3a7f661 GW |
1875 | this value. If a metaslab group exceeds this threshold then it will be |
1876 | skipped unless all metaslab groups within the metaslab class have also | |
1877 | crossed this threshold. | |
1878 | .sp | |
cb020f0d | 1879 | Default value: \fB95\fR. |
f3a7f661 GW |
1880 | .RE |
1881 | ||
f4a4046b TC |
1882 | .sp |
1883 | .ne 2 | |
1884 | .na | |
1885 | \fBzfs_mg_noalloc_threshold\fR (int) | |
1886 | .ad | |
1887 | .RS 12n | |
1888 | Defines a threshold at which metaslab groups should be eligible for | |
1889 | allocations. The value is expressed as a percentage of free space | |
1890 | beyond which a metaslab group is always eligible for allocations. | |
1891 | If a metaslab group's free space is less than or equal to the | |
6b4e21c6 | 1892 | threshold, the allocator will avoid allocating to that group |
f4a4046b TC |
1893 | unless all groups in the pool have reached the threshold. Once all |
1894 | groups have reached the threshold, all groups are allowed to accept | |
1895 | allocations. The default value of 0 disables the feature and causes | |
1896 | all metaslab groups to be eligible for allocations. | |
1897 | ||
b58237e7 | 1898 | This parameter allows one to deal with pools having heavily imbalanced |
f4a4046b TC |
1899 | vdevs such as would be the case when a new vdev has been added. |
1900 | Setting the threshold to a non-zero percentage will stop allocations | |
1901 | from being made to vdevs that aren't filled to the specified percentage | |
1902 | and allow lesser filled vdevs to acquire more allocations than they | |
1903 | otherwise would under the old \fBzfs_mg_alloc_failures\fR facility. | |
1904 | .sp | |
1905 | Default value: \fB0\fR. | |
1906 | .RE | |
1907 | ||
cc99f275 DB |
1908 | .sp |
1909 | .ne 2 | |
1910 | .na | |
1911 | \fBzfs_ddt_data_is_special\fR (int) | |
1912 | .ad | |
1913 | .RS 12n | |
1914 | If enabled, ZFS will place DDT data into the special allocation class. | |
1915 | .sp | |
1916 | Default value: \fB1\fR. | |
1917 | .RE | |
1918 | ||
1919 | .sp | |
1920 | .ne 2 | |
1921 | .na | |
1922 | \fBzfs_user_indirect_is_special\fR (int) | |
1923 | .ad | |
1924 | .RS 12n | |
1925 | If enabled, ZFS will place user data (both file and zvol) indirect blocks | |
1926 | into the special allocation class. | |
1927 | .sp | |
1928 | Default value: \fB1\fR. | |
1929 | .RE | |
1930 | ||
379ca9cf OF |
1931 | .sp |
1932 | .ne 2 | |
1933 | .na | |
1934 | \fBzfs_multihost_history\fR (int) | |
1935 | .ad | |
1936 | .RS 12n | |
1937 | Historical statistics for the last N multihost updates will be available in | |
1938 | \fB/proc/spl/kstat/zfs/<pool>/multihost\fR | |
1939 | .sp | |
1940 | Default value: \fB0\fR. | |
1941 | .RE | |
1942 | ||
1943 | .sp | |
1944 | .ne 2 | |
1945 | .na | |
1946 | \fBzfs_multihost_interval\fR (ulong) | |
1947 | .ad | |
1948 | .RS 12n | |
1949 | Used to control the frequency of multihost writes which are performed when the | |
060f0226 OF |
1950 | \fBmultihost\fR pool property is on. This is one factor used to determine the |
1951 | length of the activity check during import. | |
379ca9cf | 1952 | .sp |
060f0226 OF |
1953 | The multihost write period is \fBzfs_multihost_interval / leaf-vdevs\fR |
1954 | milliseconds. On average a multihost write will be issued for each leaf vdev | |
1955 | every \fBzfs_multihost_interval\fR milliseconds. In practice, the observed | |
1956 | period can vary with the I/O load and this observed value is the delay which is | |
1957 | stored in the uberblock. | |
379ca9cf OF |
1958 | .sp |
1959 | Default value: \fB1000\fR. | |
1960 | .RE | |
1961 | ||
1962 | .sp | |
1963 | .ne 2 | |
1964 | .na | |
1965 | \fBzfs_multihost_import_intervals\fR (uint) | |
1966 | .ad | |
1967 | .RS 12n | |
1968 | Used to control the duration of the activity test on import. Smaller values of | |
1969 | \fBzfs_multihost_import_intervals\fR will reduce the import time but increase | |
1970 | the risk of failing to detect an active pool. The total activity check time is | |
060f0226 OF |
1971 | never allowed to drop below one second. |
1972 | .sp | |
1973 | On import the activity check waits a minimum amount of time determined by | |
1974 | \fBzfs_multihost_interval * zfs_multihost_import_intervals\fR, or the same | |
1975 | product computed on the host which last had the pool imported (whichever is | |
1976 | greater). The activity check time may be further extended if the value of mmp | |
1977 | delay found in the best uberblock indicates actual multihost updates happened | |
1978 | at longer intervals than \fBzfs_multihost_interval\fR. A minimum value of | |
1979 | \fB100ms\fR is enforced. | |
1980 | .sp | |
1981 | A value of 0 is ignored and treated as if it was set to 1. | |
379ca9cf | 1982 | .sp |
db2af93d | 1983 | Default value: \fB20\fR. |
379ca9cf OF |
1984 | .RE |
1985 | ||
1986 | .sp | |
1987 | .ne 2 | |
1988 | .na | |
1989 | \fBzfs_multihost_fail_intervals\fR (uint) | |
1990 | .ad | |
1991 | .RS 12n | |
060f0226 OF |
1992 | Controls the behavior of the pool when multihost write failures or delays are |
1993 | detected. | |
379ca9cf | 1994 | .sp |
060f0226 OF |
1995 | When \fBzfs_multihost_fail_intervals = 0\fR, multihost write failures or delays |
1996 | are ignored. The failures will still be reported to the ZED which, depending
1997 | on its configuration, may take action such as suspending the pool or
1998 | offlining a device.
1999 | ||
379ca9cf | 2000 | .sp |
060f0226 OF |
2001 | When \fBzfs_multihost_fail_intervals > 0\fR, the pool will be suspended if |
2002 | \fBzfs_multihost_fail_intervals * zfs_multihost_interval\fR milliseconds pass | |
2003 | without a successful mmp write. This guarantees the activity test will see | |
2004 | mmp writes if the pool is imported. A value of 1 is ignored and treated as | |
2005 | if it was set to 2. This is necessary to prevent the pool from being suspended | |
2006 | due to normal, small I/O latency variations. | |
2007 | ||
379ca9cf | 2008 | .sp |
db2af93d | 2009 | Default value: \fB10\fR. |
379ca9cf OF |
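.sp
As a worked example using the defaults shown here: with
\fBzfs_multihost_interval\fR = 1000ms the activity check on import waits at
least 1000ms * 20 (\fBzfs_multihost_import_intervals\fR) = 20 seconds, and
with \fBzfs_multihost_fail_intervals\fR = 10 the pool is suspended after
10 * 1000ms = 10 seconds pass without a successful mmp write.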
2010 | .RE |
2011 | ||
29714574 TF |
2012 | .sp |
2013 | .ne 2 | |
2014 | .na | |
2015 | \fBzfs_no_scrub_io\fR (int) | |
2016 | .ad | |
2017 | .RS 12n | |
83426735 D |
2018 | Set for no scrub I/O. This results in scrubs not actually scrubbing data and |
2019 | simply doing a metadata crawl of the pool instead. | |
29714574 TF |
2020 | .sp |
2021 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2022 | .RE | |
2023 | ||
2024 | .sp | |
2025 | .ne 2 | |
2026 | .na | |
2027 | \fBzfs_no_scrub_prefetch\fR (int) | |
2028 | .ad | |
2029 | .RS 12n | |
83426735 | 2030 | Set to disable block prefetching for scrubs. |
29714574 TF |
2031 | .sp |
2032 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2033 | .RE | |
2034 | ||
29714574 TF |
2035 | .sp |
2036 | .ne 2 | |
2037 | .na | |
2038 | \fBzfs_nocacheflush\fR (int) | |
2039 | .ad | |
2040 | .RS 12n | |
53b1f5ea PS |
2041 | Disable cache flush operations on disks when writing. Setting this will |
2042 | cause pool corruption on power loss if a volatile out-of-order write cache | |
2043 | is enabled. | |
29714574 TF |
2044 | .sp |
2045 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2046 | .RE | |
2047 | ||
2048 | .sp | |
2049 | .ne 2 | |
2050 | .na | |
2051 | \fBzfs_nopwrite_enabled\fR (int) | |
2052 | .ad | |
2053 | .RS 12n | |
2054 | Enable NOP writes | |
2055 | .sp | |
2056 | Use \fB1\fR for yes (default) and \fB0\fR to disable. | |
2057 | .RE | |
2058 | ||
66aca247 DB |
2059 | .sp |
2060 | .ne 2 | |
2061 | .na | |
2062 | \fBzfs_dmu_offset_next_sync\fR (int) | |
2063 | .ad | |
2064 | .RS 12n | |
2065 | Enable forcing txg sync to find holes. When enabled, this forces ZFS to act
2066 | like prior versions when the SEEK_HOLE or SEEK_DATA flags are used: if a
2067 | dnode is dirty, txgs are synced so that this data can be found.
2069 | .sp | |
2070 | Use \fB1\fR for yes and \fB0\fR to disable (default). | |
2071 | .RE | |
2072 | ||
29714574 TF |
2073 | .sp |
2074 | .ne 2 | |
2075 | .na | |
b738bc5a | 2076 | \fBzfs_pd_bytes_max\fR (int) |
29714574 TF |
2077 | .ad |
2078 | .RS 12n | |
83426735 | 2079 | The number of bytes which should be prefetched during a pool traversal |
6146e17e | 2080 | (e.g. \fBzfs send\fR or other data crawling operations).
29714574 | 2081 | .sp |
74aa2ba2 | 2082 | Default value: \fB52,428,800\fR. |
29714574 TF |
2083 | .RE |
2084 | ||
bef78122 DQ |
2085 | .sp |
2086 | .ne 2 | |
2087 | .na | |
2088 | \fBzfs_per_txg_dirty_frees_percent \fR (ulong) | |
2089 | .ad | |
2090 | .RS 12n | |
65282ee9 AP |
2091 | Tunable to control percentage of dirtied indirect blocks from frees allowed |
2092 | into one TXG. After this threshold is crossed, additional frees will wait until | |
2093 | the next TXG. | |
bef78122 DQ |
2094 | A value of zero will disable this throttle. |
2095 | .sp | |
65282ee9 | 2096 | Default value: \fB5\fR, set to \fB0\fR to disable. |
bef78122 DQ |
2097 | .RE |
2098 | ||
29714574 TF |
2099 | .sp |
2100 | .ne 2 | |
2101 | .na | |
2102 | \fBzfs_prefetch_disable\fR (int) | |
2103 | .ad | |
2104 | .RS 12n | |
7f60329a MA |
2105 | This tunable disables predictive prefetch. Note that it leaves "prescient" |
2106 | prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch, | |
2107 | prescient prefetch never issues i/os that end up not being needed, so it | |
2108 | can't hurt performance. | |
29714574 TF |
2109 | .sp |
2110 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2111 | .RE | |
2112 | ||
5090f727 CZ |
2113 | .sp |
2114 | .ne 2 | |
2115 | .na | |
2116 | \fBzfs_qat_checksum_disable\fR (int) | |
2117 | .ad | |
2118 | .RS 12n | |
2119 | This tunable disables qat hardware acceleration for sha256 checksums. It | |
2120 | may be set after the zfs modules have been loaded to initialize the qat | |
2121 | hardware as long as support is compiled in and the qat driver is present. | |
2122 | .sp | |
2123 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2124 | .RE | |
2125 | ||
2126 | .sp | |
2127 | .ne 2 | |
2128 | .na | |
2129 | \fBzfs_qat_compress_disable\fR (int) | |
2130 | .ad | |
2131 | .RS 12n | |
2132 | This tunable disables qat hardware acceleration for gzip compression. It | |
2133 | may be set after the zfs modules have been loaded to initialize the qat | |
2134 | hardware as long as support is compiled in and the qat driver is present. | |
2135 | .sp | |
2136 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2137 | .RE | |
2138 | ||
2139 | .sp | |
2140 | .ne 2 | |
2141 | .na | |
2142 | \fBzfs_qat_encrypt_disable\fR (int) | |
2143 | .ad | |
2144 | .RS 12n | |
2145 | This tunable disables qat hardware acceleration for AES-GCM encryption. It | |
2146 | may be set after the zfs modules have been loaded to initialize the qat | |
2147 | hardware as long as support is compiled in and the qat driver is present. | |
2148 | .sp | |
2149 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2150 | .RE | |
2151 | ||
29714574 TF |
2152 | .sp |
2153 | .ne 2 | |
2154 | .na | |
2155 | \fBzfs_read_chunk_size\fR (long) | |
2156 | .ad | |
2157 | .RS 12n | |
2158 | Bytes to read per chunk | |
2159 | .sp | |
2160 | Default value: \fB1,048,576\fR. | |
2161 | .RE | |
2162 | ||
2163 | .sp | |
2164 | .ne 2 | |
2165 | .na | |
2166 | \fBzfs_read_history\fR (int) | |
2167 | .ad | |
2168 | .RS 12n | |
379ca9cf OF |
2169 | Historical statistics for the last N reads will be available in |
2170 | \fB/proc/spl/kstat/zfs/<pool>/reads\fR | |
29714574 | 2171 | .sp |
83426735 | 2172 | Default value: \fB0\fR (no data is kept). |
29714574 TF |
2173 | .RE |
2174 | ||
2175 | .sp | |
2176 | .ne 2 | |
2177 | .na | |
2178 | \fBzfs_read_history_hits\fR (int) | |
2179 | .ad | |
2180 | .RS 12n | |
2181 | Include cache hits in read history | |
2182 | .sp | |
2183 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2184 | .RE | |
2185 | ||
9e052db4 MA |
2186 | .sp |
2187 | .ne 2 | |
2188 | .na | |
4589f3ae BB |
2189 | \fBzfs_reconstruct_indirect_combinations_max\fR (int) |
2190 | .ad | |
2191 | .RS 12n
2192 | If an indirect split block contains more than this many possible unique | |
2193 | combinations when being reconstructed, consider it too computationally | |
2194 | expensive to check them all. Instead, try at most | |
2195 | \fBzfs_reconstruct_indirect_combinations_max\fR randomly-selected | |
2196 | combinations each time the block is accessed. This allows all segment | |
2197 | copies to participate fairly in the reconstruction when all combinations | |
2198 | cannot be checked and prevents repeated use of one bad copy. | |
2199 | .sp | |
64bdf63f | 2200 | Default value: \fB4096\fR. |
9e052db4 MA |
2201 | .RE |
2202 | ||
29714574 TF |
2203 | .sp |
2204 | .ne 2 | |
2205 | .na | |
2206 | \fBzfs_recover\fR (int) | |
2207 | .ad | |
2208 | .RS 12n | |
2209 | Set to attempt to recover from fatal errors. This should only be used as a | |
2210 | last resort, as it typically results in leaked space, or worse. | |
2211 | .sp | |
2212 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2213 | .RE | |
2214 | ||
7c9a4292 BB |
2215 | .sp |
2216 | .ne 2 | |
2217 | .na | |
2218 | \fBzfs_removal_ignore_errors\fR (int) | |
2219 | .ad | |
2220 | .RS 12n | |
2222 | Ignore hard IO errors during device removal. When set, if a device encounters | |
2223 | a hard IO error during the removal process, the removal will not be cancelled.
2224 | This can result in a normally recoverable block becoming permanently damaged | |
2225 | and is not recommended. This should only be used as a last resort when the | |
2226 | pool cannot be returned to a healthy state prior to removing the device. | |
2227 | .sp | |
2228 | Default value: \fB0\fR. | |
2229 | .RE | |
2230 | ||
29714574 TF |
2231 | .sp |
2232 | .ne 2 | |
2233 | .na | |
d4a72f23 | 2234 | \fBzfs_resilver_min_time_ms\fR (int) |
29714574 TF |
2235 | .ad |
2236 | .RS 12n | |
d4a72f23 TC |
2237 | Resilvers are processed by the sync thread. While resilvering it will spend |
2238 | at least this much time working on a resilver between txg flushes. | |
29714574 | 2239 | .sp |
d4a72f23 | 2240 | Default value: \fB3,000\fR. |
29714574 TF |
2241 | .RE |
2242 | ||
02638a30 TC |
2243 | .sp |
2244 | .ne 2 | |
2245 | .na | |
2246 | \fBzfs_scan_ignore_errors\fR (int) | |
2247 | .ad | |
2248 | .RS 12n | |
2249 | If set to a nonzero value, remove the DTL (dirty time list) upon | |
2250 | completion of a pool scan (scrub) even if there were unrepairable | |
2251 | errors. It is intended to be used during pool repair or recovery to | |
2252 | stop resilvering when the pool is next imported. | |
2253 | .sp | |
2254 | Default value: \fB0\fR. | |
2255 | .RE | |
2256 | ||
29714574 TF |
2257 | .sp |
2258 | .ne 2 | |
2259 | .na | |
d4a72f23 | 2260 | \fBzfs_scrub_min_time_ms\fR (int) |
29714574 TF |
2261 | .ad |
2262 | .RS 12n | |
d4a72f23 TC |
2263 | Scrubs are processed by the sync thread. While scrubbing it will spend |
2264 | at least this much time working on a scrub between txg flushes. | |
29714574 | 2265 | .sp |
d4a72f23 | 2266 | Default value: \fB1,000\fR. |
29714574 TF |
2267 | .RE |
2268 | ||
2269 | .sp | |
2270 | .ne 2 | |
2271 | .na | |
d4a72f23 | 2272 | \fBzfs_scan_checkpoint_intval\fR (int) |
29714574 TF |
2273 | .ad |
2274 | .RS 12n | |
d4a72f23 TC |
2275 | To preserve progress across reboots the sequential scan algorithm periodically |
2276 | needs to stop metadata scanning and issue all the verification I/Os to disk.
2277 | The frequency of this flushing is determined by the | |
a8577bdb | 2278 | \fBzfs_scan_checkpoint_intval\fR tunable. |
29714574 | 2279 | .sp |
d4a72f23 | 2280 | Default value: \fB7200\fR seconds (every 2 hours). |
29714574 TF |
2281 | .RE |
2282 | ||
2283 | .sp | |
2284 | .ne 2 | |
2285 | .na | |
d4a72f23 | 2286 | \fBzfs_scan_fill_weight\fR (int) |
29714574 TF |
2287 | .ad |
2288 | .RS 12n | |
d4a72f23 TC |
2289 | This tunable affects how scrub and resilver I/O segments are ordered. A higher |
2290 | number indicates that we care more about how filled in a segment is, while a | |
2291 | lower number indicates we care more about the size of the extent without | |
2292 | considering the gaps within a segment. This value is only tunable upon module | |
2293 | insertion. Changing the value afterwards will have no effect on scrub or
2294 | resilver performance. | |
29714574 | 2295 | .sp |
d4a72f23 | 2296 | Default value: \fB3\fR. |
29714574 TF |
2297 | .RE |
2298 | ||
2299 | .sp | |
2300 | .ne 2 | |
2301 | .na | |
d4a72f23 | 2302 | \fBzfs_scan_issue_strategy\fR (int) |
29714574 TF |
2303 | .ad |
2304 | .RS 12n | |
d4a72f23 TC |
2305 | Determines the order that data will be verified while scrubbing or resilvering. |
2306 | If set to \fB1\fR, data will be verified as sequentially as possible, given the | |
2307 | amount of memory reserved for scrubbing (see \fBzfs_scan_mem_lim_fact\fR). This | |
2308 | may improve scrub performance if the pool's data is very fragmented. If set to | |
2309 | \fB2\fR, the largest mostly-contiguous chunk of found data will be verified | |
2310 | first. By deferring scrubbing of small segments, we may later find adjacent data | |
2311 | to coalesce and increase the segment size. If set to \fB0\fR, zfs will use | |
2312 | strategy \fB1\fR during normal verification and strategy \fB2\fR while taking a | |
2313 | checkpoint. | |
29714574 | 2314 | .sp |
d4a72f23 TC |
2315 | Default value: \fB0\fR. |
2316 | .RE | |
2317 | ||
2318 | .sp | |
2319 | .ne 2 | |
2320 | .na | |
2321 | \fBzfs_scan_legacy\fR (int) | |
2322 | .ad | |
2323 | .RS 12n | |
2324 | A value of 0 indicates that scrubs and resilvers will gather metadata in | |
2325 | memory before issuing sequential I/O. A value of 1 indicates that the legacy | |
2326 | algorithm will be used where I/O is initiated as soon as it is discovered. | |
2327 | Changing this value to 0 will not affect scrubs or resilvers that are already | |
2328 | in progress. | |
2329 | .sp | |
2330 | Default value: \fB0\fR. | |
2331 | .RE | |
2332 | ||
2333 | .sp | |
2334 | .ne 2 | |
2335 | .na | |
2336 | \fBzfs_scan_max_ext_gap\fR (int) | |
2337 | .ad | |
2338 | .RS 12n | |
2339 | Indicates the largest gap in bytes between scrub / resilver I/Os that will still | |
2340 | be considered sequential for sorting purposes. Changing this value will not | |
2341 | affect scrubs or resilvers that are already in progress. | |
2342 | .sp | |
2343 | Default value: \fB2097152 (2 MB)\fR. | |
2344 | .RE | |
2345 | ||
2346 | .sp | |
2347 | .ne 2 | |
2348 | .na | |
2349 | \fBzfs_scan_mem_lim_fact\fR (int) | |
2350 | .ad | |
2351 | .RS 12n | |
2352 | Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. | |
2353 | This tunable determines the hard limit for I/O sorting memory usage. | |
2354 | When the hard limit is reached we stop scanning metadata and start issuing | |
2355 | data verification I/O. This is done until we get below the soft limit. | |
2356 | .sp | |
2357 | Default value: \fB20\fR which is 5% of RAM (1/20). | |
2358 | .RE | |
2359 | ||
2360 | .sp | |
2361 | .ne 2 | |
2362 | .na | |
2363 | \fBzfs_scan_mem_lim_soft_fact\fR (int) | |
2364 | .ad | |
2365 | .RS 12n | |
2366 | The fraction of the hard limit used to determine the soft limit for I/O sorting
2367 | by the sequential scan algorithm. When we cross this limit from below no action
2368 | is taken. When we cross this limit from above it is because we are issuing | |
2369 | verification I/O. In this case (unless the metadata scan is done) we stop | |
2370 | issuing verification I/O and start scanning metadata again until we get to the | |
2371 | hard limit. | |
2372 | .sp | |
2373 | Default value: \fB20\fR which is 5% of the hard limit (1/20). | |
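.sp
As a worked example using the defaults: on a system with 64GB of RAM the hard
limit is 64GB / 20 = 3.2GB of sorting memory, and the soft limit is
3.2GB / 20, about 160MB (1/400, or 0.25%, of RAM).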
2374 | .RE | |
2375 | ||
2376 | .sp | |
2377 | .ne 2 | |
2378 | .na | |
2379 | \fBzfs_scan_vdev_limit\fR (int) | |
2380 | .ad | |
2381 | .RS 12n | |
2382 | Maximum amount of data that can be concurrently issued at once for scrubs and | |
2383 | resilvers per leaf device, given in bytes. | |
2384 | .sp | |
2385 | Default value: \fB41943040\fR. | |
29714574 TF |
2386 | .RE |
2387 | ||
fd8febbd TF |
2388 | .sp |
2389 | .ne 2 | |
2390 | .na | |
2391 | \fBzfs_send_corrupt_data\fR (int) | |
2392 | .ad | |
2393 | .RS 12n | |
83426735 | 2394 | Allow sending of corrupt data (ignore read/checksum errors when sending data) |
fd8febbd TF |
2395 | .sp |
2396 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2397 | .RE | |
2398 | ||
caf9dd20 BB |
2399 | .sp |
2400 | .ne 2 | |
2401 | .na | |
2402 | \fBzfs_send_unmodified_spill_blocks\fR (int) | |
2403 | .ad | |
2404 | .RS 12n | |
2405 | Include unmodified spill blocks in the send stream. Under certain circumstances | |
2406 | previous versions of ZFS could incorrectly remove the spill block from an | |
2407 | existing object. Including unmodified copies of the spill blocks creates a | |
2408 | backwards compatible stream which will recreate a spill block if it was | |
2409 | incorrectly removed. | |
2410 | .sp | |
2411 | Use \fB1\fR for yes (default) and \fB0\fR for no. | |
2412 | .RE | |
2413 | ||
3b0d9928 BB |
2414 | .sp |
2415 | .ne 2 | |
2416 | .na | |
2417 | \fBzfs_send_queue_length\fR (int) | |
2418 | .ad | |
2419 | .RS 12n | |
2420 | The maximum number of bytes allowed in the \fBzfs send\fR queue. This value | |
2421 | must be at least twice the maximum block size in use. | |
2422 | .sp | |
2423 | Default value: \fB16,777,216\fR. | |
2424 | .RE | |
2425 | ||
2426 | .sp | |
2427 | .ne 2 | |
2428 | .na | |
2429 | \fBzfs_recv_queue_length\fR (int) | |
2430 | .ad | |
2431 | .RS 12n | |
3b0d9928 BB |
2432 | The maximum number of bytes allowed in the \fBzfs receive\fR queue. This value |
2433 | must be at least twice the maximum block size in use. | |
2434 | .sp | |
2435 | Default value: \fB16,777,216\fR. | |
2436 | .RE | |
2437 | ||
29714574 TF |
2438 | .sp |
2439 | .ne 2 | |
2440 | .na | |
2441 | \fBzfs_sync_pass_deferred_free\fR (int) | |
2442 | .ad | |
2443 | .RS 12n | |
83426735 | 2444 | Flushing of data to disk is done in passes. Defer frees starting in this pass |
29714574 TF |
2445 | .sp |
2446 | Default value: \fB2\fR. | |
2447 | .RE | |
2448 | ||
d2734cce SD |
2449 | .sp |
2450 | .ne 2 | |
2451 | .na | |
2452 | \fBzfs_spa_discard_memory_limit\fR (int) | |
2453 | .ad | |
2454 | .RS 12n | |
2455 | Maximum memory used for prefetching a checkpoint's space map on each | |
2456 | vdev while discarding the checkpoint. | |
2457 | .sp | |
2458 | Default value: \fB16,777,216\fR. | |
2459 | .RE | |
2460 | ||
1f02ecc5 D |
2461 | .sp |
2462 | .ne 2 | |
2463 | .na | |
2464 | \fBzfs_special_class_metadata_reserve_pct\fR (int) | |
2465 | .ad | |
2466 | .RS 12n | |
2467 | Only allow small data blocks to be allocated on the special and dedup vdev | |
2468 | types when the available free space percentage on these vdevs exceeds this | |
2469 | value. This ensures reserved space is available for pool metadata as the
2470 | special vdevs approach capacity. | |
2471 | .sp | |
2472 | Default value: \fB25\fR. | |
2473 | .RE | |
2474 | ||
29714574 TF |
2475 | .sp |
2476 | .ne 2 | |
2477 | .na | |
2478 | \fBzfs_sync_pass_dont_compress\fR (int) | |
2479 | .ad | |
2480 | .RS 12n | |
be89734a MA |
2481 | Starting in this sync pass, we disable compression (including of metadata). |
2482 | With the default setting, in practice, we don't have this many sync passes, | |
2483 | so this has no effect. | |
2484 | .sp | |
2485 | The original intent was that disabling compression would help the sync passes | |
2486 | to converge. However, in practice disabling compression increases the average | |
2487 | number of sync passes, because when we turn compression off, many blocks'
2488 | sizes will change and thus we have to re-allocate (not overwrite) them. It
2489 | also increases the number of 128KB allocations (e.g. for indirect blocks and | |
2490 | spacemaps) because these will not be compressed. The 128K allocations are | |
2491 | especially detrimental to performance on highly fragmented systems, which may | |
2492 | have very few free segments of this size, and may need to load new metaslabs | |
2493 | to satisfy 128K allocations. | |
29714574 | 2494 | .sp |
be89734a | 2495 | Default value: \fB8\fR. |
29714574 TF |
2496 | .RE |
2497 | ||
2498 | .sp | |
2499 | .ne 2 | |
2500 | .na | |
2501 | \fBzfs_sync_pass_rewrite\fR (int) | |
2502 | .ad | |
2503 | .RS 12n | |
83426735 | 2504 | Rewrite new block pointers starting in this pass |
29714574 TF |
2505 | .sp |
2506 | Default value: \fB2\fR. | |
2507 | .RE | |
2508 | ||
a032ac4b BB |
2509 | .sp |
2510 | .ne 2 | |
2511 | .na | |
2512 | \fBzfs_sync_taskq_batch_pct\fR (int) | |
2513 | .ad | |
2514 | .RS 12n | |
2515 | This controls the number of threads used by the dp_sync_taskq. The default | |
2516 | value of 75% will create a maximum of one thread per cpu. | |
2517 | .sp | |
be54a13c | 2518 | Default value: \fB75\fR%. |
a032ac4b BB |
2519 | .RE |
2520 | ||
1b939560 BB |
2521 | .sp |
2522 | .ne 2 | |
2523 | .na | |
2524 | \fBzfs_trim_extent_bytes_max\fR (unsigned int) | |
2525 | .ad | |
2526 | .RS 12n | |
2527 | Maximum size of TRIM commands. Ranges larger than this will be split into
2528 | chunks no larger than \fBzfs_trim_extent_bytes_max\fR bytes before being | |
2529 | issued to the device. | |
2530 | .sp | |
2531 | Default value: \fB134,217,728\fR. | |
2532 | .RE | |
2533 | ||
2534 | .sp | |
2535 | .ne 2 | |
2536 | .na | |
2537 | \fBzfs_trim_extent_bytes_min\fR (unsigned int) | |
2538 | .ad | |
2539 | .RS 12n | |
2540 | Minimum size of TRIM commands. TRIM ranges smaller than this will be skipped | |
2541 | unless they're part of a larger range which was broken into chunks. This is
2542 | done because it's common for these small TRIMs to negatively impact overall | |
2543 | performance. This value can be set to 0 to TRIM all unallocated space. | |
2544 | .sp | |
2545 | Default value: \fB32,768\fR. | |
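.sp
For example, a sketch of trimming all unallocated space regardless of extent
size (the pool name is hypothetical):
.sp
.nf
    echo 0 > /sys/module/zfs/parameters/zfs_trim_extent_bytes_min
    zpool trim tank
.fi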
2546 | .RE | |
2547 | ||
2548 | .sp | |
2549 | .ne 2 | |
2550 | .na | |
2551 | \fBzfs_trim_metaslab_skip\fR (unsigned int) | |
2552 | .ad | |
2553 | .RS 12n | |
2554 | Skip uninitialized metaslabs during the TRIM process. This option is useful | |
2555 | for pools constructed from large thinly-provisioned devices where TRIM | |
2556 | operations are slow. As a pool ages an increasing fraction of the pool's
2557 | metaslabs will be initialized, progressively degrading the usefulness of
2558 | this option. This setting is stored when starting a manual TRIM and will | |
2559 | persist for the duration of the requested TRIM. | |
2560 | .sp | |
2561 | Default value: \fB0\fR. | |
2562 | .RE | |
2563 | ||
2564 | .sp | |
2565 | .ne 2 | |
2566 | .na | |
2567 | \fBzfs_trim_queue_limit\fR (unsigned int) | |
2568 | .ad | |
2569 | .RS 12n | |
2570 | Maximum number of queued TRIMs outstanding per leaf vdev. The number of | |
2571 | concurrent TRIM commands issued to the device is controlled by the | |
2572 | \fBzfs_vdev_trim_min_active\fR and \fBzfs_vdev_trim_max_active\fR module | |
2573 | options. | |
2574 | .sp | |
2575 | Default value: \fB10\fR. | |
2576 | .RE | |
2577 | ||
2578 | .sp | |
2579 | .ne 2 | |
2580 | .na | |
2581 | \fBzfs_trim_txg_batch\fR (unsigned int) | |
2582 | .ad | |
2583 | .RS 12n | |
2584 | The number of transaction groups worth of frees which should be aggregated | |
2585 | before TRIM operations are issued to the device. This setting represents a | |
2586 | trade-off between issuing larger, more efficient TRIM operations and the | |
2587 | delay before the recently trimmed space is available for use by the device. | |
2588 | .sp | |
2589 | Increasing this value will allow frees to be aggregated for a longer time. | |
2590 | This will result in larger TRIM operations and potentially increased memory
2591 | usage. Decreasing this value will have the opposite effect. The default | |
2592 | value of 32 was determined to be a reasonable compromise. | |
2593 | .sp | |
2594 | Default value: \fB32\fR. | |
2595 | .RE | |
2596 | ||
29714574 TF |
2597 | .sp |
2598 | .ne 2 | |
2599 | .na | |
2600 | \fBzfs_txg_history\fR (int) | |
2601 | .ad | |
2602 | .RS 12n | |
379ca9cf OF |
2603 | Historical statistics for the last N txgs will be available in |
2604 | \fB/proc/spl/kstat/zfs/<pool>/txgs\fR | |
29714574 | 2605 | .sp |
ca85d690 | 2606 | Default value: \fB0\fR. |
29714574 TF |
2607 | .RE |
2608 | ||
29714574 TF |
2609 | .sp |
2610 | .ne 2 | |
2611 | .na | |
2612 | \fBzfs_txg_timeout\fR (int) | |
2613 | .ad | |
2614 | .RS 12n | |
83426735 | 2615 | Flush dirty data to disk at least every N seconds (maximum txg duration) |
29714574 TF |
2616 | .sp |
2617 | Default value: \fB5\fR. | |
2618 | .RE | |
2619 | ||
1b939560 BB |
2620 | .sp |
2621 | .ne 2 | |
2622 | .na | |
2623 | \fBzfs_vdev_aggregate_trim\fR (int) | |
2624 | .ad | |
2625 | .RS 12n | |
2626 | Allow TRIM I/Os to be aggregated. This is normally not helpful because | |
2627 | the extents to be trimmed will already have been aggregated by the
2628 | metaslab. This option is provided for debugging and performance analysis. | |
2629 | .sp | |
2630 | Default value: \fB0\fR. | |
2631 | .RE | |
2632 | ||
29714574 TF |
2633 | .sp |
2634 | .ne 2 | |
2635 | .na | |
2636 | \fBzfs_vdev_aggregation_limit\fR (int) | |
2637 | .ad | |
2638 | .RS 12n | |
2639 | Max vdev I/O aggregation size | |
2640 | .sp | |
1af240f3 AM |
2641 | Default value: \fB1,048,576\fR. |
2642 | .RE | |
2643 | ||
2644 | .sp | |
2645 | .ne 2 | |
2646 | .na | |
2647 | \fBzfs_vdev_aggregation_limit_non_rotating\fR (int) | |
2648 | .ad | |
2649 | .RS 12n | |
2650 | Max vdev I/O aggregation size for non-rotating media | |
2651 | .sp | |
29714574 TF |
2652 | Default value: \fB131,072\fR. |
2653 | .RE | |
2654 | ||
2655 | .sp | |
2656 | .ne 2 | |
2657 | .na | |
2658 | \fBzfs_vdev_cache_bshift\fR (int) | |
2659 | .ad | |
2660 | .RS 12n | |
2661 | Shift size to inflate reads to
2662 | .sp | |
83426735 | 2663 | Default value: \fB16\fR (effectively 65536). |
29714574 TF |
2664 | .RE |
2665 | ||
2666 | .sp | |
2667 | .ne 2 | |
2668 | .na | |
2669 | \fBzfs_vdev_cache_max\fR (int) | |
2670 | .ad | |
2671 | .RS 12n | |
ca85d690 | 2672 | Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR |
2673 | size (default 64k). | |
83426735 D |
2674 | .sp |
2675 | Default value: \fB16384\fR. | |
29714574 TF |
2676 | .RE |
2677 | ||
2678 | .sp | |
2679 | .ne 2 | |
2680 | .na | |
2681 | \fBzfs_vdev_cache_size\fR (int) | |
2682 | .ad | |
2683 | .RS 12n | |
83426735 D |
2684 | Total size of the per-disk cache in bytes. |
2685 | .sp | |
2686 | Currently this feature is disabled as it has been found to not be helpful | |
2687 | for performance and in some cases harmful. | |
29714574 TF |
2688 | .sp |
2689 | Default value: \fB0\fR. | |
2690 | .RE | |
2691 | ||
29714574 TF |
2692 | .sp |
2693 | .ne 2 | |
2694 | .na | |
9f500936 | 2695 | \fBzfs_vdev_mirror_rotating_inc\fR (int) |
29714574 TF |
2696 | .ad |
2697 | .RS 12n | |
9f500936 | 2698 | A number by which the balancing algorithm increments the load calculation for |
2699 | the purpose of selecting the least busy mirror member when an I/O immediately | |
2700 | follows its predecessor on rotational vdevs for the purpose of making decisions | |
2701 | based on load. | |
29714574 | 2702 | .sp |
9f500936 | 2703 | Default value: \fB0\fR. |
2704 | .RE | |
2705 | ||
2706 | .sp | |
2707 | .ne 2 | |
2708 | .na | |
2709 | \fBzfs_vdev_mirror_rotating_seek_inc\fR (int) | |
2710 | .ad | |
2711 | .RS 12n | |
2712 | A number by which the balancing algorithm increments the load calculation for | |
2713 | the purpose of selecting the least busy mirror member when an I/O lacks
2714 | locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
2715 | this window that do not immediately follow the previous I/O are incremented
2716 | by half of this value.
2717 | .sp | |
2718 | Default value: \fB5\fR. | |
2719 | .RE | |
2720 | ||
2721 | .sp | |
2722 | .ne 2 | |
2723 | .na | |
2724 | \fBzfs_vdev_mirror_rotating_seek_offset\fR (int) | |
2725 | .ad | |
2726 | .RS 12n | |
2727 | The maximum distance from the last queued I/O within which the balancing
2728 | algorithm considers an I/O to have locality.
2729 | See the section "ZFS I/O SCHEDULER". | |
2730 | .sp | |
2731 | Default value: \fB1048576\fR. | |
2732 | .RE | |
2733 | ||
2734 | .sp | |
2735 | .ne 2 | |
2736 | .na | |
2737 | \fBzfs_vdev_mirror_non_rotating_inc\fR (int) | |
2738 | .ad | |
2739 | .RS 12n | |
2740 | A number by which the balancing algorithm increments the load calculation for | |
2741 | the purpose of selecting the least busy mirror member on non-rotational vdevs | |
2742 | when I/Os do not immediately follow one another. | |
2743 | .sp | |
2744 | Default value: \fB0\fR. | |
2745 | .RE | |
2746 | ||
2747 | .sp | |
2748 | .ne 2 | |
2749 | .na | |
2750 | \fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int) | |
2751 | .ad | |
2752 | .RS 12n | |
2753 | A number by which the balancing algorithm increments the load calculation for | |
2754 | the purpose of selecting the least busy mirror member when an I/O lacks
2755 | locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os within
2756 | this window that do not immediately follow the previous I/O are incremented
2757 | by half of this value.
2758 | .sp | |
2759 | Default value: \fB1\fR. | |
29714574 TF |
2760 | .RE |
2761 | ||
29714574 TF |
2762 | .sp |
2763 | .ne 2 | |
2764 | .na | |
2765 | \fBzfs_vdev_read_gap_limit\fR (int) | |
2766 | .ad | |
2767 | .RS 12n | |
83426735 D |
2768 | Aggregate read I/O operations if the gap on-disk between them is within this |
2769 | threshold. | |
29714574 TF |
2770 | .sp |
2771 | Default value: \fB32,768\fR. | |
2772 | .RE | |
2773 | ||
2774 | .sp | |
2775 | .ne 2 | |
2776 | .na | |
2777 | \fBzfs_vdev_scheduler\fR (charp) | |
2778 | .ad | |
2779 | .RS 12n | |
ca85d690 | 2780 | Set the Linux I/O scheduler on whole disk vdevs to this scheduler. Valid options |
2781 | are \fBnoop\fR, \fBcfq\fR, \fBbfq\fR and \fBdeadline\fR.
29714574 TF |
2782 | .sp |
2783 | Default value: \fBnoop\fR. | |
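.sp
For example, a sketch of selecting a different scheduler at module load time,
typically via a modprobe configuration file such as /etc/modprobe.d/zfs.conf:
.sp
.nf
    options zfs zfs_vdev_scheduler=deadline
.fi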
2784 | .RE | |
2785 | ||
29714574 TF |
2786 | .sp |
2787 | .ne 2 | |
2788 | .na | |
2789 | \fBzfs_vdev_write_gap_limit\fR (int) | |
2790 | .ad | |
2791 | .RS 12n | |
2792 | Aggregate write I/O over gap | |
2793 | .sp | |
2794 | Default value: \fB4,096\fR. | |
2795 | .RE | |
2796 | ||
ab9f4b0b GN |
2797 | .sp |
2798 | .ne 2 | |
2799 | .na | |
2800 | \fBzfs_vdev_raidz_impl\fR (string) | |
2801 | .ad | |
2802 | .RS 12n | |
c9187d86 | 2803 | Parameter for selecting raidz parity implementation to use. |
ab9f4b0b GN |
2804 | |
2805 | Options marked (always) below may be selected on module load as they are | |
2806 | supported on all systems. | |
2807 | The remaining options may only be set after the module is loaded, as they | |
2808 | are available only if the implementations are compiled in and supported | |
2809 | on the running system. | |
2810 | ||
2811 | Once the module is loaded, the content of | |
2812 | /sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options | |
2813 | with the currently selected one enclosed in []. | |
2814 | Possible options are: | |
2815 | fastest - (always) implementation selected using built-in benchmark | |
2816 | original - (always) original raidz implementation | |
2817 | scalar - (always) scalar raidz implementation | |
ae25d222 GN |
2818 | sse2 - implementation using SSE2 instruction set (64bit x86 only) |
2819 | ssse3 - implementation using SSSE3 instruction set (64bit x86 only) | |
ab9f4b0b | 2820 | avx2 - implementation using AVX2 instruction set (64bit x86 only) |
7f547f85 RD |
2821 | avx512f - implementation using AVX512F instruction set (64bit x86 only) |
2822 | avx512bw - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only) | |
62a65a65 RD |
2823 | aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only) |
2824 | aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only) | |
ab9f4b0b GN |
2825 | .sp |
2826 | Default value: \fBfastest\fR. | |
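.sp
For example, a sketch of inspecting and then selecting an implementation at
runtime (the bracketed entry in the output marks the current selection, and
\fBavx2\fR is only accepted if it is compiled in and supported):
.sp
.nf
    cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
    echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.fi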
2827 | .RE | |
2828 | ||
29714574 TF |
2829 | .sp |
2830 | .ne 2 | |
2831 | .na | |
2832 | \fBzfs_zevent_cols\fR (int) | |
2833 | .ad | |
2834 | .RS 12n | |
83426735 | 2835 | When zevents are logged to the console use this as the word wrap width. |
29714574 TF |
2836 | .sp |
2837 | Default value: \fB80\fR. | |
2838 | .RE | |
2839 | ||
2840 | .sp | |
2841 | .ne 2 | |
2842 | .na | |
2843 | \fBzfs_zevent_console\fR (int) | |
2844 | .ad | |
2845 | .RS 12n | |
2846 | Log events to the console | |
2847 | .sp | |
2848 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2849 | .RE | |
2850 | ||
2851 | .sp | |
2852 | .ne 2 | |
2853 | .na | |
2854 | \fBzfs_zevent_len_max\fR (int) | |
2855 | .ad | |
2856 | .RS 12n | |
83426735 D |
2857 | Max event queue length. A value of 0 will result in a calculated value which |
2858 | increases with the number of CPUs in the system (minimum 64 events). Events | |
2859 | in the queue can be viewed with the \fBzpool events\fR command. | |
29714574 TF |
2860 | .sp |
2861 | Default value: \fB0\fR. | |
2862 | .RE | |
2863 | ||
a032ac4b BB |
2864 | .sp |
2865 | .ne 2 | |
2866 | .na | |
2867 | \fBzfs_zil_clean_taskq_maxalloc\fR (int) | |
2868 | .ad | |
2869 | .RS 12n | |
2870 | The maximum number of taskq entries that are allowed to be cached. When this | |
2fe61a7e | 2871 | limit is exceeded transaction records (itxs) will be cleaned synchronously. |
a032ac4b BB |
2872 | .sp |
2873 | Default value: \fB1048576\fR. | |
2874 | .RE | |
2875 | ||
2876 | .sp | |
2877 | .ne 2 | |
2878 | .na | |
2879 | \fBzfs_zil_clean_taskq_minalloc\fR (int) | |
2880 | .ad | |
2881 | .RS 12n | |
2882 | The number of taskq entries that are pre-populated when the taskq is first | |
2883 | created and are immediately available for use. | |
2884 | .sp | |
2885 | Default value: \fB1024\fR. | |
2886 | .RE | |
2887 | ||
2888 | .sp | |
2889 | .ne 2 | |
2890 | .na | |
2891 | \fBzfs_zil_clean_taskq_nthr_pct\fR (int) | |
2892 | .ad | |
2893 | .RS 12n | |
2894 | This controls the number of threads used by the dp_zil_clean_taskq. The default | |
2895 | value of 100% will create a maximum of one thread per cpu. | |
2896 | .sp | |
be54a13c | 2897 | Default value: \fB100\fR%. |
a032ac4b BB |
2898 | .RE |
2899 | ||
b8738257 MA |
2900 | .sp |
2901 | .ne 2 | |
2902 | .na | |
2903 | \fBzil_maxblocksize\fR (int) | |
2904 | .ad | |
2905 | .RS 12n | |
2906 | This sets the maximum block size used by the ZIL. On very fragmented pools, | |
2907 | lowering this (typically to 36KB) can improve performance. | |
2908 | .sp | |
2909 | Default value: \fB131072\fR (128KB). | |
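.sp
For example, a sketch of lowering the limit to 36KB on a very fragmented
pool, as suggested above:
.sp
.nf
    echo 36864 > /sys/module/zfs/parameters/zil_maxblocksize
.fi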
2910 | .RE | |
2911 | ||
53b1f5ea PS |
2912 | .sp |
2913 | .ne 2 | |
2914 | .na | |
2915 | \fBzil_nocacheflush\fR (int) | |
2916 | .ad | |
2917 | .RS 12n | |
2918 | Disable the cache flush commands that are normally sent to the disk(s) by | |
2919 | the ZIL after an LWB write has completed. Setting this will cause ZIL | |
2920 | corruption on power loss if a volatile out-of-order write cache is enabled. | |
2921 | .sp | |
2922 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2923 | .RE | |
2924 | ||
29714574 TF |
2925 | .sp |
2926 | .ne 2 | |
2927 | .na | |
2928 | \fBzil_replay_disable\fR (int) | |
2929 | .ad | |
2930 | .RS 12n | |
83426735 D |
2931 | Disable intent logging replay. This can be used to recover from a corrupted
2932 | ZIL.
29714574 TF |
2933 | .sp |
2934 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2935 | .RE | |
2936 | ||
2937 | .sp | |
2938 | .ne 2 | |
2939 | .na | |
1b7c1e5c | 2940 | \fBzil_slog_bulk\fR (ulong) |
29714574 TF |
2941 | .ad |
2942 | .RS 12n | |
1b7c1e5c GDN |
2943 | Limit SLOG write size per commit executed with synchronous priority. |
2944 | Any writes above that will be executed with lower (asynchronous) priority | |
2945 | to limit potential SLOG device abuse by a single active ZIL writer.
29714574 | 2946 | .sp |
1b7c1e5c | 2947 | Default value: \fB786,432\fR. |
29714574 TF |
2948 | .RE |
2949 | ||
638dd5f4 TC |
2950 | .sp |
2951 | .ne 2 | |
2952 | .na | |
2953 | \fBzio_deadman_log_all\fR (int) | |
2954 | .ad | |
2955 | .RS 12n | |
2956 | If non-zero, the zio deadman will produce debugging messages (see | |
2957 | \fBzfs_dbgmsg_enable\fR) for all zios, rather than only for leaf | |
2958 | zios possessing a vdev. This is meant to be used by developers to gain | |
2959 | diagnostic information for hang conditions which don't involve a mutex | |
2960 | or other locking primitive; typically conditions in which a thread in | |
2961 | the zio pipeline is looping indefinitely. | |
2962 | .sp | |
2963 | Default value: \fB0\fR. | |
2964 | .RE | |
2965 | ||
c3bd3fb4 TC |
2966 | .sp |
2967 | .ne 2 | |
2968 | .na | |
2969 | \fBzio_decompress_fail_fraction\fR (int) | |
2970 | .ad | |
2971 | .RS 12n | |
2972 | If non-zero, this value represents the denominator of the probability that zfs | |
2973 | should induce a decompression failure. For instance, for a 5% decompression | |
2974 | failure rate, this value should be set to 20. | |
2975 | .sp | |
2976 | Default value: \fB0\fR. | |
2977 | .RE | |
2978 | ||
29714574 TF |
2979 | .sp |
2980 | .ne 2 | |
2981 | .na | |
ad796b8a | 2982 | \fBzio_slow_io_ms\fR (int) |
29714574 TF |
2983 | .ad |
2984 | .RS 12n | |
ad796b8a TH |
2985 | An I/O operation that takes more than \fBzio_slow_io_ms\fR milliseconds to
2986 | complete is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow
2987 | I/O counters can be seen with "zpool status -s".
2988 | ||
29714574 TF |
2989 | .sp |
2990 | Default value: \fB30,000\fR. | |
2991 | .RE | |
2992 | ||
3dfb57a3 DB |
2993 | .sp |
2994 | .ne 2 | |
2995 | .na | |
2996 | \fBzio_dva_throttle_enabled\fR (int) | |
2997 | .ad | |
2998 | .RS 12n | |
ad796b8a | 2999 | Throttle block allocations in the I/O pipeline. This allows for |
3dfb57a3 | 3000 | dynamic allocation distribution when devices are imbalanced. |
e815485f TC |
3001 | When enabled, the maximum number of pending allocations per top-level vdev |
3002 | is limited by \fBzfs_vdev_queue_depth_pct\fR. | |
3dfb57a3 | 3003 | .sp |
27f2b90d | 3004 | Default value: \fB1\fR. |
3dfb57a3 DB |
3005 | .RE |
3006 | ||
29714574 TF |
3007 | .sp |
3008 | .ne 2 | |
3009 | .na | |
3010 | \fBzio_requeue_io_start_cut_in_line\fR (int) | |
3011 | .ad | |
3012 | .RS 12n | |
3013 | Prioritize requeued I/O | |
3014 | .sp | |
3015 | Default value: \fB0\fR. | |
3016 | .RE | |
3017 | ||
dcb6bed1 D |
3018 | .sp |
3019 | .ne 2 | |
3020 | .na | |
3021 | \fBzio_taskq_batch_pct\fR (uint) | |
3022 | .ad | |
3023 | .RS 12n | |
3024 | Percentage of online CPUs (or CPU cores, etc) which will run a worker thread | |
ad796b8a | 3025 | for I/O. These workers are responsible for I/O work such as compression and |
dcb6bed1 D |
3026 | checksum calculations. A fractional number of CPUs will be rounded down.
3027 | .sp | |
3028 | The default value of 75 was chosen to avoid using all CPUs which can result in | |
3029 | latency issues and inconsistent application performance, especially when high | |
3030 | compression is enabled. | |
3031 | .sp | |
3032 | Default value: \fB75\fR. | |
3033 | .RE | |
3034 | ||
29714574 TF |
3035 | .sp |
3036 | .ne 2 | |
3037 | .na | |
3038 | \fBzvol_inhibit_dev\fR (uint) | |
3039 | .ad | |
3040 | .RS 12n | |
83426735 D |
3041 | Do not create zvol device nodes. This may slightly improve startup time on |
3042 | systems with a very large number of zvols. | |
29714574 TF |
3043 | .sp |
3044 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
3045 | .RE | |
3046 | ||
3047 | .sp | |
3048 | .ne 2 | |
3049 | .na | |
3050 | \fBzvol_major\fR (uint) | |
3051 | .ad | |
3052 | .RS 12n | |
83426735 | 3053 | Major number for zvol block devices |
29714574 TF |
3054 | .sp |
3055 | Default value: \fB230\fR. | |
3056 | .RE | |
3057 | ||
3058 | .sp | |
3059 | .ne 2 | |
3060 | .na | |
3061 | \fBzvol_max_discard_blocks\fR (ulong) | |
3062 | .ad | |
3063 | .RS 12n | |
83426735 D |
3064 | Discard (aka TRIM) operations done on zvols will be done in batches of this |
3065 | many blocks, where block size is determined by the \fBvolblocksize\fR property | |
3066 | of a zvol. | |
29714574 TF |
3067 | .sp |
3068 | Default value: \fB16,384\fR. | |
3069 | .RE | |
3070 | ||
9965059a BB |
3071 | .sp |
3072 | .ne 2 | |
3073 | .na | |
3074 | \fBzvol_prefetch_bytes\fR (uint) | |
3075 | .ad | |
3076 | .RS 12n | |
3077 | When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR | |
3078 | from the start and end of the volume. Prefetching these regions | |
3079 | of the volume is desirable because they are likely to be accessed | |
3080 | immediately by \fBblkid(8)\fR or by the kernel scanning for a partition | |
3081 | table. | |
3082 | .sp | |
3083 | Default value: \fB131,072\fR. | |
3084 | .RE | |
3085 | ||
692e55b8 CC |
3086 | .sp |
3087 | .ne 2 | |
3088 | .na | |
3089 | \fBzvol_request_sync\fR (uint) | |
3090 | .ad | |
3091 | .RS 12n | |
3092 | When processing I/O requests for a zvol submit them synchronously. This | |
3093 | effectively limits the queue depth to 1 for each I/O submitter. When set | |
3094 | to 0 requests are handled asynchronously by a thread pool. The number of | |
3095 | requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
3096 | .sp | |
8fa5250f | 3097 | Default value: \fB0\fR. |
692e55b8 CC |
3098 | .RE |
3099 | ||
3100 | .sp | |
3101 | .ne 2 | |
3102 | .na | |
3103 | \fBzvol_threads\fR (uint) | |
3104 | .ad | |
3105 | .RS 12n | |
3106 | Max number of threads which can handle zvol I/O requests concurrently. | |
3107 | .sp | |
3108 | Default value: \fB32\fR. | |
3109 | .RE | |
3110 | ||
cf8738d8 | 3111 | .sp |
3112 | .ne 2 | |
3113 | .na | |
3114 | \fBzvol_volmode\fR (uint) | |
3115 | .ad | |
3116 | .RS 12n | |
3117 | Defines zvol block device behaviour when \fBvolmode\fR is set to \fBdefault\fR.
3118 | Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none). | |
3119 | .sp | |
3120 | Default value: \fB1\fR. | |
3121 | .RE | |
3122 | ||
e8b96c60 MA |
3123 | .SH ZFS I/O SCHEDULER |
3124 | ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os. | |
3125 | The I/O scheduler determines when and in what order those operations are | |
3126 | issued. The I/O scheduler divides operations into five I/O classes | |
3127 | prioritized in the following order: sync read, sync write, async read, | |
3128 | async write, and scrub/resilver. Each queue defines the minimum and | |
3129 | maximum number of concurrent operations that may be issued to the | |
3130 | device. In addition, the device has an aggregate maximum, | |
3131 | \fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums | |
3132 | must not exceed the aggregate maximum. If the sum of the per-queue | |
3133 | maximums exceeds the aggregate maximum, then the number of active I/Os | |
3134 | may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will | |
3135 | be issued regardless of whether all per-queue minimums have been met. | |
3136 | .sp | |
3137 | For many physical devices, throughput increases with the number of | |
3138 | concurrent operations, but latency typically suffers. Further, physical | |
3139 | devices typically have a limit at which more concurrent operations have no | |
3140 | effect on throughput or can actually cause it to decrease. | |
3141 | .sp | |
3142 | The scheduler selects the next operation to issue by first looking for an | |
3143 | I/O class whose minimum has not been satisfied. Once all are satisfied and | |
3144 | the aggregate maximum has not been hit, the scheduler looks for classes | |
3145 | whose maximum has not been satisfied. Iteration through the I/O classes is | |
3146 | done in the order specified above. No further operations are issued if the | |
3147 | aggregate maximum number of concurrent operations has been hit or if there | |
3148 | are no operations queued for an I/O class that has not hit its maximum. | |
3149 | Every time an I/O is queued or an operation completes, the I/O scheduler | |
3150 | looks for new operations to issue. | |
.sp
In general, smaller values of max_active will lead to lower latency of
synchronous operations. Larger values of max_active may lead to higher
overall throughput, depending on underlying storage.
.sp
The ratio of the queues' max_active values determines the balance of
performance between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to
complete more quickly, but will cause reads and writes to have higher
latency and lower throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically, so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other queues, in particular the
synchronous ones. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
Async Writes
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.nf

       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |           100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent

.fi
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
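.sp
A Python sketch of that ramp follows. The four constants are illustrative
stand-ins for the corresponding tunables, not necessarily their defaults
on any given system:
.nf

    WRITE_MIN_ACTIVE = 2   # zfs_vdev_async_write_min_active (assumed)
    WRITE_MAX_ACTIVE = 10  # zfs_vdev_async_write_max_active (assumed)
    MIN_DIRTY_PCT = 30     # ..._active_min_dirty_percent (assumed)
    MAX_DIRTY_PCT = 60     # ..._active_max_dirty_percent (assumed)

    def async_write_active(dirty_pct):
        # Flat at the minimum below the lower threshold.
        if dirty_pct <= MIN_DIRTY_PCT:
            return WRITE_MIN_ACTIVE
        # Flat at the maximum above the upper threshold.
        if dirty_pct >= MAX_DIRTY_PCT:
            return WRITE_MAX_ACTIVE
        # Linear interpolation on the sloped part in between.
        frac = (dirty_pct - MIN_DIRTY_PCT) / (MAX_DIRTY_PCT - MIN_DIRTY_PCT)
        return int(WRITE_MIN_ACTIVE +
                   frac * (WRITE_MAX_ACTIVE - WRITE_MIN_ACTIVE))

.fi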
.sp
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between
\fBzfs_vdev_async_write_active_min_dirty_percent\fR and
\fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.

.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.nf
    min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
    min_time is then capped at 100 milliseconds.
.fi
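.sp
As a worked Python sketch of that formula, where \fBmin\fR is the dirty
data level at which delay begins and \fBmax\fR is \fBzfs_dirty_data_max\fR
(the 4 GiB limit and 60% threshold below are assumed example values):
.nf

    DIRTY_DATA_MAX = 4 << 30  # zfs_dirty_data_max, in bytes (assumed)
    DELAY_SCALE = 500_000     # zfs_delay_scale, in nanoseconds
    DELAY_MIN_DIRTY_PCT = 60  # zfs_delay_min_dirty_percent (assumed)

    def min_tx_time_ns(dirty):
        # "min" in the formula: the level where delay starts.
        lo = DIRTY_DATA_MAX * DELAY_MIN_DIRTY_PCT // 100
        if dirty <= lo:
            return 0
        ns = DELAY_SCALE * (dirty - lo) / (DIRTY_DATA_MAX - dirty)
        return min(ns, 100_000_000)  # capped at 100 milliseconds

.fi
Halfway between \fBmin\fR and \fBmax\fR the two factors cancel and the
delay equals \fBzfs_delay_scale\fR, which is why that point is called the
midpoint of the curve below.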
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start
to delay after writing at full speed has failed to keep up with the
incoming write rate. The scale of the curve is defined by
\fBzfs_delay_scale\fR. Roughly speaking, this variable determines the
amount of delay at the midpoint of the curve.
.sp
.nf
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                            * +
      |                                                            * |
  4ms +                                                            * +
      |                                                            * |
  3ms +                                                            * +
      |                                                            * |
  2ms +                                              (midpoint)    * +
      |                                              |           **  |
  1ms +                                              v         ***   +
      |             zfs_delay_scale ---------->     ********         |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->              100%
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS (1s / 500us = 2000).
The shape of the curve was chosen such that small changes in the amount of
accumulated dirty data in the first 3/4 of the curve yield relatively small
differences in the amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           ** +
      |                                            (midpoint)   **   |
      +                                            |          **     +
  1ms +                                            v       ****      +
      +             zfs_delay_scale ---------->       *****          +
      |                                            ****              |
      +                                         ****                 +
100us +                                       **                     +
      +                                      *                       +
      |                                     *                        |
      +                                    *                         +
 10us +                                   *                          +
      +                                                              +
      |                                                              |
      +                                                              +
      +--------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->              100%
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.
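.sp
With the sketch from earlier in this section, that last step can be seen
directly: scaling \fBzfs_delay_scale\fR scales the computed delay at every
point of the curve (until the 100ms cap is reached), steepening its rise.
For example, using the assumed tunables above:
.nf

    # At 90% dirty: (0.9 - 0.6) / (1.0 - 0.9) = 3, so the delay is
    # 3 * DELAY_SCALE: 1.5ms at 500us, 3ms if DELAY_SCALE is doubled.
    print(min_tx_time_ns(int(0.9 * DIRTY_DATA_MAX)))

.fi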