1 | '\" te |
2 | .\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved. | |
87c25d56 | 3 | .\" Copyright (c) 2019 by Delphix. All rights reserved. |
65282ee9 | 4 | .\" Copyright (c) 2019 Datto Inc. |
29714574 TF |
5 | .\" The contents of this file are subject to the terms of the Common Development |
6 | .\" and Distribution License (the "License"). You may not use this file except | |
7 | .\" in compliance with the License. You can obtain a copy of the license at | |
8 | .\" usr/src/OPENSOLARIS.LICENSE or http://www.opensolaris.org/os/licensing. | |
9 | .\" | |
10 | .\" See the License for the specific language governing permissions and | |
11 | .\" limitations under the License. When distributing Covered Code, include this | |
12 | .\" CDDL HEADER in each file and include the License file at | |
13 | .\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this | |
14 | .\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your | |
15 | .\" own identifying information: | |
16 | .\" Portions Copyright [yyyy] [name of copyright owner] | |
1b939560 | 17 | .TH ZFS-MODULE-PARAMETERS 5 "Feb 15, 2019" |
29714574 TF |
18 | .SH NAME |
19 | zfs\-module\-parameters \- ZFS module parameters | |
20 | .SH DESCRIPTION | |
21 | .sp | |
22 | .LP | |
23 | Description of the different parameters to the ZFS module. | |
24 | ||
25 | .SS "Module parameters" | |
26 | .sp | |
27 | .LP | |
28 | ||
.sp
.ne 2
.na
\fBdbuf_cache_max_bytes\fR (ulong)
.ad
.RS 12n
Maximum size in bytes of the dbuf cache. When \fB0\fR this value will default
to \fB1/2^dbuf_cache_shift\fR (1/32) of the target ARC size, otherwise the
provided value in bytes will be used. The behavior of the dbuf cache and its
associated settings can be observed via the \fB/proc/spl/kstat/zfs/dbufstats\fR
kstat.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_max_bytes\fR (ulong)
.ad
.RS 12n
Maximum size in bytes of the metadata dbuf cache. When \fB0\fR this value will
default to \fB1/2^dbuf_metadata_cache_shift\fR (1/64) of the target ARC size,
otherwise the provided value in bytes will be used. The behavior of the
metadata dbuf cache and its associated settings can be observed via the
\fB/proc/spl/kstat/zfs/dbufstats\fR kstat.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_hiwater_pct\fR (uint)
.ad
.RS 12n
The percentage over \fBdbuf_cache_max_bytes\fR when dbufs must be evicted
directly.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_lowater_pct\fR (uint)
.ad
.RS 12n
The percentage below \fBdbuf_cache_max_bytes\fR when the evict thread stops
evicting dbufs.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBdbuf_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf cache, \fBdbuf_cache_max_bytes\fR, to a log2 fraction
of the target ARC size.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBdbuf_metadata_cache_shift\fR (int)
.ad
.RS 12n
Set the size of the dbuf metadata cache, \fBdbuf_metadata_cache_max_bytes\fR,
to a log2 fraction of the target ARC size.
.sp
Default value: \fB6\fR.
.RE

.sp
.ne 2
.na
\fBdmu_object_alloc_chunk_shift\fR (int)
.ad
.RS 12n
Number of dnode slots allocated in a single operation, as a power of 2. The
default value minimizes lock contention for the bulk operation performed.
.sp
Default value: \fB7\fR (128).
.RE

.sp
.ne 2
.na
\fBdmu_prefetch_max\fR (int)
.ad
.RS 12n
Limit the amount we can prefetch with one call to this amount (in bytes).
This helps to limit the amount of memory that can be used by prefetching.
.sp
Default value: \fB134,217,728\fR (128MB).
.RE

.sp
.ne 2
.na
\fBignore_hole_birth\fR (int)
.ad
.RS 12n
This is an alias for \fBsend_holes_without_birth_time\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_again\fR (int)
.ad
.RS 12n
Turbo L2ARC warm-up. When the L2ARC is cold, the fill interval will be set as
fast as possible.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_min_ms\fR (ulong)
.ad
.RS 12n
Minimum feed interval in milliseconds. Only effective when
\fBl2arc_feed_again=1\fR, i.e. while the L2ARC is warming up.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_feed_secs\fR (ulong)
.ad
.RS 12n
Seconds between L2ARC writing.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom\fR (ulong)
.ad
.RS 12n
How far through the ARC lists to search for L2ARC cacheable content, expressed
as a multiplier of \fBl2arc_write_max\fR.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_headroom_boost\fR (ulong)
.ad
.RS 12n
Scales \fBl2arc_headroom\fR by this percentage when L2ARC contents are being
successfully compressed before writing. A value of 100 disables this feature.
.sp
Default value: \fB200\fR%.
.RE

.sp
.ne 2
.na
\fBl2arc_noprefetch\fR (int)
.ad
.RS 12n
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBl2arc_norw\fR (int)
.ad
.RS 12n
No reads during writes.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBl2arc_write_boost\fR (ulong)
.ad
.RS 12n
Cold L2ARC devices will have \fBl2arc_write_max\fR increased by this amount
while they remain cold.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBl2arc_write_max\fR (ulong)
.ad
.RS 12n
Max write bytes per interval.
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_aliquot\fR (ulong)
.ad
.RS 12n
Metaslab granularity, in bytes. This is roughly similar to what would be
referred to as the "stripe size" in traditional RAID arrays. In normal
operation, ZFS will try to write this amount of data to a top-level vdev
before moving on to the next one.
.sp
Default value: \fB524,288\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_bias_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group biasing based on its vdev's over- or under-utilization
relative to the pool.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_force_ganging\fR (ulong)
.ad
.RS 12n
Make blocks above this size be gang blocks. This option is used by the test
suite to facilitate testing.
.sp
Default value: \fB16,777,217\fR.
.RE

.sp
.ne 2
.na
\fBzfs_keep_log_spacemaps_at_export\fR (int)
.ad
.RS 12n
Prevent log spacemaps from being destroyed during pool exports and destroys.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_segment_weight_enabled\fR (int)
.ad
.RS 12n
Enable/disable segment-based metaslab selection.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_switch_threshold\fR (int)
.ad
.RS 12n
When using segment-based metaslab selection, continue allocating
from the active metaslab until \fBzfs_metaslab_switch_threshold\fR
worth of buckets have been exhausted.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_load\fR (int)
.ad
.RS 12n
Load all metaslabs during pool import.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_debug_unload\fR (int)
.ad
.RS 12n
Prevent metaslabs from being unloaded.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBmetaslab_fragmentation_factor_enabled\fR (int)
.ad
.RS 12n
Enable use of the fragmentation metric in computing metaslab weights.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_df_max_search\fR (int)
.ad
.RS 12n
Maximum distance to search forward from the last offset. Without this limit,
fragmented pools can see >100,000 iterations and metaslab_block_picker()
becomes the performance limiting factor on high-performance storage.

With the default setting of 16MB, we typically see less than 500 iterations,
even with very fragmented, ashift=9 pools. The maximum number of iterations
possible is: \fBmetaslab_df_max_search / (2 * (1<<ashift))\fR.
With the default setting of 16MB this is 16*1024 (with ashift=9) or 2048
(with ashift=12).
.sp
Default value: \fB16,777,216\fR (16MB).
.RE

.sp
.ne 2
.na
\fBmetaslab_df_use_largest_segment\fR (int)
.ad
.RS 12n
If we are not searching forward (due to metaslab_df_max_search,
metaslab_df_free_pct, or metaslab_df_alloc_threshold), this tunable controls
what segment is used. If it is set, we will use the largest free segment.
If it is not set, we will use a segment of exactly the requested size (or
larger).
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_max_size_cache_sec\fR (ulong)
.ad
.RS 12n
When we unload a metaslab, we cache the size of the largest free chunk. We use
that cached size to determine whether or not to load a metaslab for a given
allocation. As more frees accumulate in that metaslab while it's unloaded, the
cached max size becomes less and less accurate. After a number of seconds
controlled by this tunable, we stop considering the cached max size and start
considering only the histogram instead.
.sp
Default value: \fB3600 seconds\fR (one hour).
.RE

.sp
.ne 2
.na
\fBzfs_metaslab_mem_limit\fR (int)
.ad
.RS 12n
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees. If it is over a threshold, we attempt to unload
the least recently used metaslab to prevent the system from clogging all of
its memory with range trees. This tunable sets the percentage of total system
memory that is the threshold.
.sp
Default value: \fB25 percent\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_default_ms_count\fR (int)
.ad
.RS 12n
When a vdev is added, target this number of metaslabs per top-level vdev.
.sp
Default value: \fB200\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_default_ms_shift\fR (int)
.ad
.RS 12n
Default limit for metaslab size.
.sp
Default value: \fB29\fR [meaning (1 << 29) = 512MB].
.RE

.sp
.ne 2
.na
\fBzfs_vdev_min_ms_count\fR (int)
.ad
.RS 12n
Minimum number of metaslabs to create in a top-level vdev.
.sp
Default value: \fB16\fR.
.RE

.sp
.ne 2
.na
\fBvdev_validate_skip\fR (int)
.ad
.RS 12n
Skip label validation steps during pool import. Changing this is not
recommended unless you know what you are doing and are recovering a damaged
label.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_ms_count_limit\fR (int)
.ad
.RS 12n
Practical upper limit of total metaslabs per top-level vdev.
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_preload_enabled\fR (int)
.ad
.RS 12n
Enable metaslab group preloading.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_lba_weighting_enabled\fR (int)
.ad
.RS 12n
Give more weight to metaslabs with lower LBAs, assuming they have
greater bandwidth as is typically the case on a modern constant
angular velocity disk drive.
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBmetaslab_unload_delay\fR (int)
.ad
.RS 12n
After a metaslab is used, we keep it loaded for this many txgs, to attempt to
reduce unnecessary reloading. Note that both this many txgs and
\fBmetaslab_unload_delay_ms\fR milliseconds must pass before unloading will
occur.
.sp
Default value: \fB32\fR.
.RE

.sp
.ne 2
.na
\fBmetaslab_unload_delay_ms\fR (int)
.ad
.RS 12n
After a metaslab is used, we keep it loaded for this many milliseconds, to
attempt to reduce unnecessary reloading. Note that both this many
milliseconds and \fBmetaslab_unload_delay\fR txgs must pass before unloading
will occur.
.sp
Default value: \fB600000\fR (ten minutes).
.RE

.sp
.ne 2
.na
\fBsend_holes_without_birth_time\fR (int)
.ad
.RS 12n
When set, the hole_birth optimization will not be used, and all holes will
always be sent on zfs send. This is useful if you suspect your datasets are
affected by a bug in hole_birth.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBspa_config_path\fR (charp)
.ad
.RS 12n
SPA config file.
.sp
Default value: \fB/etc/zfs/zpool.cache\fR.
.RE

.sp
.ne 2
.na
\fBspa_asize_inflation\fR (int)
.ad
.RS 12n
Multiplication factor used to estimate actual disk consumption from the
size of data being written. The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its
configuration. Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor, particularly if
they operate close to quota or capacity limits.
.sp
Default value: \fB24\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_print_vdev_tree\fR (int)
.ad
.RS 12n
Whether to print the vdev tree in the debugging message buffer during pool
import. Use 0 to disable and 1 to enable.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_data\fR (int)
.ad
.RS 12n
Whether to traverse data blocks during an "extreme rewind" (\fB-X\fR)
import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal skips non-metadata blocks. It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_metadata\fR (int)
.ad
.RS 12n
Whether to traverse blocks during an "extreme rewind" (\fB-X\fR)
pool import. Use 0 to disable and 1 to enable.

An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification. If this parameter is set to 0,
the traversal is not performed. It can be toggled once the import has
started to stop or start the traversal.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBspa_load_verify_shift\fR (int)
.ad
.RS 12n
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.sp
Default value: \fB4\fR.
.RE

.sp
.ne 2
.na
\fBspa_slop_shift\fR (int)
.ad
.RS 12n
Normally, we don't allow the last 3.2% (1/(2^spa_slop_shift)) of space
in the pool to be consumed. This ensures that we don't run the pool
completely out of space, due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space. If we have
less than this amount of free space, most ZPL operations (e.g. write,
create) will return ENOSPC.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBvdev_removal_max_span\fR (int)
.ad
.RS 12n
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space (in bytes)
which will be included as "unnecessary" data in a chunk of copied data.

The default value here was chosen to align with
\fBzfs_vdev_read_gap_limit\fR, which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzap_iterate_prefetch\fR (int)
.ad
.RS 12n
If this is set, when we start iterating over a ZAP object, zfs will prefetch
the entire object (all leaf blocks). However, this is limited by
\fBdmu_prefetch_max\fR.
.sp
Use \fB1\fR for on (default) and \fB0\fR for off.
.RE

.sp
.ne 2
.na
\fBzfetch_array_rd_sz\fR (ulong)
.ad
.RS 12n
If prefetching is enabled, disable prefetching for reads larger than this size.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_distance\fR (uint)
.ad
.RS 12n
Max bytes to prefetch per stream (default 8MB).
.sp
Default value: \fB8,388,608\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_max_streams\fR (uint)
.ad
.RS 12n
Max number of streams per zfetch (prefetch streams per file).
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfetch_min_sec_reap\fR (uint)
.ad
.RS 12n
Min time before an active prefetch stream can be reclaimed.
.sp
Default value: \fB2\fR.
.RE

.sp
.ne 2
.na
\fBzfs_abd_scatter_enabled\fR (int)
.ad
.RS 12n
Enables the use of scatter/gather lists for ARC buffers. When disabled, all
allocations are forced to be linear in kernel memory. Disabling can improve
performance in some code paths at the expense of fragmented kernel memory.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_abd_scatter_max_order\fR (uint)
.ad
.RS 12n
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists. Default value is specified by the kernel itself.
.sp
Default value: \fB10\fR at the time of this writing.
.RE

.sp
.ne 2
.na
\fBzfs_abd_scatter_min_size\fR (uint)
.ad
.RS 12n
This is the minimum allocation size that will use scatter (page-based)
ABDs. Smaller allocations will use linear ABDs.
.sp
Default value: \fB1536\fR (512B and 1KB allocations will be linear).
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit\fR (ulong)
.ad
.RS 12n
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata. This
value acts as a ceiling on the amount of dnode metadata, and defaults to 0,
which indicates that a percentage of the ARC meta buffers, based on
\fBzfs_arc_dnode_limit_percent\fR, may be used for dnodes.

See also \fBzfs_arc_meta_prune\fR which serves a similar purpose but is used
when the amount of metadata in the ARC exceeds \fBzfs_arc_meta_limit\fR rather
than in response to overall demand for non-metadata.

.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage that can be consumed by dnodes of ARC meta buffers.
.sp
See also \fBzfs_arc_dnode_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_dnode_reduce_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds \fBzfs_arc_dnode_limit\fR.

.sp
Default value: \fB10\fR% of the number of dnodes in the ARC.
.RE

.sp
.ne 2
.na
\fBzfs_arc_average_blocksize\fR (int)
.ad
.RS 12n
The ARC's buffer hash table is sized based on the assumption of an average
block size of \fBzfs_arc_average_blocksize\fR (default 8K). This works out
to roughly 1MB of hash table per 1GB of physical memory with 8-byte pointers.
For configurations with a known larger average block size this value can be
increased to reduce the memory footprint.

.sp
Default value: \fB8192\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_evict_batch_limit\fR (int)
.ad
.RS 12n
Number of ARC headers to evict per sub-list before proceeding to another
sub-list. This batch-style operation prevents entire sub-lists from being
evicted at once but comes at a cost of additional unlocking and locking.
.sp
Default value: \fB10\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_grow_retry\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will replace the arc_grow_retry value with
this value. The arc_grow_retry value (default 5) is the number of seconds the
ARC will wait before trying to resume growth after a memory pressure event.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_lotsfree_percent\fR (int)
.ad
.RS 12n
Throttle I/O when free system memory drops below this percentage of total
system memory. Setting this value to 0 will disable the throttle.
.sp
Default value: \fB10\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_max\fR (ulong)
.ad
.RS 12n
Max size of ARC in bytes. If set to 0 then the max size of ARC is determined
by the amount of system memory installed. For Linux, 1/2 of system memory will
be used as the limit. For FreeBSD, the larger of all system memory minus 1GB
or 5/8 of system memory will be used as the limit. This value must be at least
67108864 (64 megabytes).
.sp
This value can be changed dynamically with some caveats. It cannot be set back
to 0 while running and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_adjust_restarts\fR (ulong)
.ad
.RS 12n
The number of restart passes to make while scanning the ARC attempting
to free buffers in order to stay below the \fBzfs_arc_meta_limit\fR.
This value should not need to be tuned but is available to facilitate
performance analysis.
.sp
Default value: \fB4096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit\fR (ulong)
.ad
.RS 12n
The maximum allowed size in bytes that meta data buffers are allowed to
consume in the ARC. When this limit is reached meta data buffers will
be reclaimed even if the overall arc_c_max has not been reached. This
value defaults to 0 which indicates that a percent which is based on
\fBzfs_arc_meta_limit_percent\fR of the ARC may be used for meta data.
.sp
This value may be changed dynamically, except that it cannot be set back to 0
for a specific percent of the ARC; it must be set to an explicit value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_limit_percent\fR (ulong)
.ad
.RS 12n
Percentage of ARC buffers that can be used for meta data.

See also \fBzfs_arc_meta_limit\fR which serves a similar purpose but has a
higher priority if set to a nonzero value.

.sp
Default value: \fB75\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_min\fR (ulong)
.ad
.RS 12n
The minimum allowed size in bytes that meta data buffers may consume in
the ARC. This value defaults to 0 which disables a floor on the amount
of the ARC devoted to meta data.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_prune\fR (int)
.ad
.RS 12n
The number of dentries and inodes to be scanned looking for entries
which can be dropped. This may be required when the ARC reaches the
\fBzfs_arc_meta_limit\fR because dentries and inodes can pin buffers
in the ARC. Increasing this value will cause the dentry and inode caches
to be pruned more aggressively. Setting this value to 0 will disable
pruning the inode and dentry caches.
.sp
Default value: \fB10,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_meta_strategy\fR (int)
.ad
.RS 12n
Define the strategy for ARC meta data buffer eviction (meta reclaim strategy).
A value of 0 (META_ONLY) will evict only the ARC meta data buffers.
A value of 1 (BALANCED) indicates that additional data buffers may be evicted
if that is required in order to evict the required number of meta data
buffers.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min\fR (ulong)
.ad
.RS 12n
Minimum ARC size in bytes. If set to 0 then arc_c_min will default to
consuming the larger of 32M or 1/32 of total system memory.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time prefetched blocks are locked in the ARC, specified in ms.
A value of \fB0\fR will default to 1000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_min_prescient_prefetch_ms\fR (int)
.ad
.RS 12n
Minimum time "prescient prefetched" blocks are locked in the ARC, specified
in ms. These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them. A value of \fB0\fR will default to 6000 ms.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_max_missing_tvds\fR (int)
.ad
.RS 12n
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_multilist_num_sublists\fR (int)
.ad
.RS 12n
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and meta data objects. Locking is performed at
the level of these "sub-lists". This parameter controls the number of
sub-lists per ARC state, and also applies to other uses of the
multilist data structure.
.sp
Default value: \fB4\fR or the number of online CPUs, whichever is greater.
.RE

.sp
.ne 2
.na
\fBzfs_arc_overflow_shift\fR (int)
.ad
.RS 12n
The ARC size is considered to be overflowing if it exceeds the current
ARC target size (arc_c) by a threshold determined by this parameter.
The threshold is calculated as a fraction of arc_c using the formula
"arc_c >> \fBzfs_arc_overflow_shift\fR".

The default value of 8 causes the ARC to be considered to be overflowing
if it exceeds the target size by 1/256th (0.3%) of the target size.

When the ARC is overflowing, new buffer allocations are stalled until
the reclaim thread catches up and the overflow condition no longer exists.
.sp
Default value: \fB8\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_min_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_p_min_shift (default 4)
with the new value.
arc_p_min_shift is used as a shift of arc_c when calculating both the
minimum and maximum arc_p.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_p_dampener_disable\fR (int)
.ad
.RS 12n
Disable arc_p adapt dampener.
.sp
Use \fB1\fR for yes (default) and \fB0\fR to disable.
.RE

.sp
.ne 2
.na
\fBzfs_arc_shrink_shift\fR (int)
.ad
.RS 12n
If set to a non-zero value, this will update arc_shrink_shift (default 7)
with the new value.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_arc_pc_percent\fR (uint)
.ad
.RS 12n
Percent of pagecache to reclaim ARC to.

This tunable allows the ZFS ARC to play more nicely with the kernel's LRU
pagecache. It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
zfs_arc_min if necessary. This value is specified as percent of pagecache
size (as measured by NR_FILE_PAGES) where that percent may exceed 100. This
only operates during memory pressure/reclaim.
.sp
Default value: \fB0\fR% (disabled).
.RE

.sp
.ne 2
.na
\fBzfs_arc_sys_free\fR (ulong)
.ad
.RS 12n
The target number of bytes the ARC should leave as free memory on the system.
Defaults to the larger of 1/64 of physical memory or 512K. Setting this
option to a non-zero value will override the default.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_autoimport_disable\fR (int)
.ad
.RS 12n
Disable pool import at module load by ignoring the cache file (typically
\fB/etc/zfs/zpool.cache\fR).
.sp
Use \fB1\fR for yes (default) and \fB0\fR for no.
.RE

.sp
.ne 2
.na
\fBzfs_checksum_events_per_second\fR (uint)
.ad
.RS 12n
Rate limit checksum events to this many per second. Note that this should
not be set below the zed thresholds (currently 10 checksums over 10 sec)
or else zed may not trigger any action.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_commit_timeout_pct\fR (int)
.ad
.RS 12n
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage. The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.sp
Default value: \fB5\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_condense_indirect_commit_entry_delay_ms\fR (int)
.ad
.RS 12n
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation. Intended for use with the test suite
to throttle vdev removal speed.
.sp
Default value: \fB0\fR (no throttle).
.RE

.sp
.ne 2
.na
\fBzfs_condense_indirect_vdevs_enable\fR (int)
.ad
.RS 12n
Enable condensing indirect vdev mappings. When set to a non-zero value,
attempt to condense indirect vdev mappings if the mapping uses more than
\fBzfs_condense_min_mapping_bytes\fR bytes of memory and if the obsolete
space map object uses more than \fBzfs_condense_max_obsolete_bytes\fR
bytes on-disk. The condensing process is an attempt to save memory by
removing obsolete mappings.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_max_obsolete_bytes\fR (ulong)
.ad
.RS 12n
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
(see \fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB1,073,741,824\fR.
.RE

.sp
.ne 2
.na
\fBzfs_condense_min_mapping_bytes\fR (ulong)
.ad
.RS 12n
Minimum size vdev mapping to attempt to condense (see
\fBzfs_condense_indirect_vdevs_enable\fR).
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_enable\fR (int)
.ad
.RS 12n
Internally ZFS keeps a small log to facilitate debugging. By default the log
is disabled, to enable it set this option to 1. The contents of the log can
be accessed by reading the /proc/spl/kstat/zfs/dbgmsg file. Writing 0 to
this proc file clears the log.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbgmsg_maxsize\fR (int)
.ad
.RS 12n
The maximum size in bytes of the internal ZFS debug log.
.sp
Default value: \fB4M\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dbuf_state_index\fR (int)
.ad
.RS 12n
This feature is currently unused. It is normally used for controlling what
reporting is available under /proc/spl/kstat/zfs.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_enabled\fR (int)
.ad
.RS 12n
When a pool sync operation takes longer than \fBzfs_deadman_synctime_ms\fR
milliseconds, or when an individual I/O takes longer than
\fBzfs_deadman_ziotime_ms\fR milliseconds, then the operation is considered to
be "hung". If \fBzfs_deadman_enabled\fR is set then the deadman behavior is
invoked as described by the \fBzfs_deadman_failmode\fR module option.
By default the deadman is enabled and configured to \fBwait\fR which results
in "hung" I/Os only being logged. The deadman is automatically disabled
when a pool gets suspended.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_failmode\fR (charp)
.ad
.RS 12n
Controls the failure behavior when the deadman detects a "hung" I/O. Valid
values are \fBwait\fR, \fBcontinue\fR, and \fBpanic\fR.
.sp
\fBwait\fR - Wait for a "hung" I/O to complete. For each "hung" I/O a
"deadman" event will be posted describing that I/O.
.sp
\fBcontinue\fR - Attempt to recover from a "hung" I/O by re-dispatching it
to the I/O pipeline if possible.
.sp
\fBpanic\fR - Panic the system. This can be used to facilitate an automatic
fail-over to a properly configured fail-over partner.
.sp
Default value: \fBwait\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_checktime_ms\fR (int)
.ad
.RS 12n
Check time in milliseconds. This defines the frequency at which we check
for hung I/O and potentially invoke the \fBzfs_deadman_failmode\fR behavior.
.sp
Default value: \fB60,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_synctime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
\fBzfs_deadman_checktime_ms\fR milliseconds until the pool sync completes.
.sp
Default value: \fB600,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_deadman_ziotime_ms\fR (ulong)
.ad
.RS 12n
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung". As long as the I/O
remains "hung" the deadman will be invoked every \fBzfs_deadman_checktime_ms\fR
milliseconds until the I/O completes.
.sp
Default value: \fB300,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dedup_prefetch\fR (int)
.ad
.RS 12n
Enable prefetching dedup-ed blocks.
.sp
Use \fB1\fR for yes and \fB0\fR to disable (default).
.RE

.sp
.ne 2
.na
\fBzfs_delay_min_dirty_percent\fR (int)
.ad
.RS 12n
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of \fBzfs_dirty_data_max\fR.
This value should be >= zfs_vdev_async_write_active_max_dirty_percent.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB60\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_delay_scale\fR (int)
.ad
.RS 12n
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.sp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second. This will smoothly
handle between 10x and 1/10th this number.
.sp
See the section "ZFS TRANSACTION DELAY".
.sp
Note: \fBzfs_delay_scale\fR * \fBzfs_dirty_data_max\fR must be < 2^64.
.sp
Default value: \fB500,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_disable_ivset_guid_check\fR (int)
.ad
.RS 12n
Disables requirement for IVset guids to be present and match when doing a raw
receive of encrypted datasets. Intended for users whose pools were created with
ZFS on Linux pre-release versions and now have compatibility issues.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_key_max_salt_uses\fR (ulong)
.ad
.RS 12n
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets. The default value is also the maximum that will be
accepted.
.sp
Default value: \fB400,000,000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_object_mutex_size\fR (uint)
.ad
.RS 12n
Size of the znode hashtable used for holds.

Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.sp
Default value: \fB64\fR.
.RE

.sp
.ne 2
.na
\fBzfs_slow_io_events_per_second\fR (int)
.ad
.RS 12n
Rate limit delay zevents (which report slow I/Os) to this many per second.
.sp
Default value: \fB20\fR.
.RE

.sp
.ne 2
.na
\fBzfs_unflushed_max_mem_amt\fR (ulong)
.ad
.RS 12n
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory (in bytes).
.sp
Default value: \fB1,073,741,824\fR (1GB).
.RE

.sp
.ne 2
.na
\fBzfs_unflushed_max_mem_ppm\fR (ulong)
.ad
.RS 12n
Percentage of the overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap.
(value is calculated over 1000000 for finer granularity).
.sp
Default value: \fB1000\fR (which is divided by 1000000, resulting in
the limit to be \fB0.1\fR% of memory).
.RE

.sp
.ne 2
.na
\fBzfs_unflushed_log_block_max\fR (ulong)
.ad
.RS 12n
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value of 262144 means that the space in all the log spacemaps
can add up to no more than 262144 blocks (which means 32GB of logical
space before compression and ditto blocks, assuming that blocksize is
128k).
.sp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/Os for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing occurs, destroying log blocks
more quickly as they become obsolete, which leaves fewer blocks to be read
during import time after a crash.
.sp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.sp
Default value: \fB262144\fR (256K).
.RE

.sp
.ne 2
.na
\fBzfs_unflushed_log_block_min\fR (ulong)
.ad
.RS 12n
If the number of metaslabs is small and our incoming rate is high, we
could get into a situation that we are flushing all our metaslabs every
TXG.
Thus we always allow at least this many log blocks.
.sp
Default value: \fB1000\fR.
.RE

.sp
.ne 2
.na
\fBzfs_unflushed_log_block_pct\fR (ulong)
.ad
.RS 12n
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
metaslabs in the pool.
.sp
Default value: \fB400\fR (read as \fB400\fR% - meaning that the number
of log spacemap blocks are capped at 4 times the number of
metaslabs in the pool).
.RE

.sp
.ne 2
.na
\fBzfs_unlink_suspend_progress\fR (uint)
.ad
.RS 12n
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked. Once this option has been
disabled and the dataset is remounted, the pending unlinks will be processed
and the freed space returned to the pool.
This option is used by the test suite to facilitate testing.
.sp
Uses \fB0\fR (default) to allow progress and \fB1\fR to pause progress.
.RE

.sp
.ne 2
.na
\fBzfs_delete_blocks\fR (ulong)
.ad
.RS 12n
This is used to define a large file for the purposes of deletion. Files
containing more than \fBzfs_delete_blocks\fR blocks will be deleted
asynchronously while smaller files are deleted synchronously. Decreasing this
value will reduce the time spent in an unlink(2) system call at the expense of
a longer delay before the freed space is available.
.sp
Default value: \fB20,480\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max\fR (int)
.ad
.RS 12n
Determines the dirty space limit in bytes. Once this limit is exceeded, new
writes are halted until space frees up. This parameter takes precedence
over \fBzfs_dirty_data_max_percent\fR.
See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR% of physical RAM, capped at \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed in bytes.
This limit is only enforced at module load time, and will be ignored if
\fBzfs_dirty_data_max\fR is later changed. This parameter takes
precedence over \fBzfs_dirty_data_max_max_percent\fR. See the section
"ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR% of physical RAM.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_max_percent\fR (int)
.ad
.RS 12n
Maximum allowable value of \fBzfs_dirty_data_max\fR, expressed as a
percentage of physical RAM. This limit is only enforced at module load
time, and will be ignored if \fBzfs_dirty_data_max\fR is later changed.
The parameter \fBzfs_dirty_data_max_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB25\fR%.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_max_percent\fR (int)
.ad
.RS 12n
Determines the dirty space limit, expressed as a percentage of all
memory. Once this limit is exceeded, new writes are halted until space frees
up. The parameter \fBzfs_dirty_data_max\fR takes precedence over this
one. See the section "ZFS TRANSACTION DELAY".
.sp
Default value: \fB10\fR%, subject to \fBzfs_dirty_data_max_max\fR.
.RE

.sp
.ne 2
.na
\fBzfs_dirty_data_sync_percent\fR (int)
.ad
.RS 12n
Start syncing out a transaction group if there's at least this much dirty data
as a percentage of \fBzfs_dirty_data_max\fR. This should be less than
\fBzfs_vdev_async_write_active_min_dirty_percent\fR.
.sp
Default value: \fB20\fR% of \fBzfs_dirty_data_max\fR.
.RE

1eeb4562 JX |
1608 | .sp |
1609 | .ne 2 | |
1610 | .na | |
1611 | \fBzfs_fletcher_4_impl\fR (string) | |
1612 | .ad | |
1613 | .RS 12n | |
1614 | Select a fletcher 4 implementation. | |
1615 | .sp | |
35a76a03 | 1616 | Supported selectors are: \fBfastest\fR, \fBscalar\fR, \fBsse2\fR, \fBssse3\fR, |
0b2a6423 | 1617 | \fBavx2\fR, \fBavx512f\fR, \fBavx512bw\fR, and \fBaarch64_neon\fR. |
70b258fc GN |
1618 | All of the selectors except \fBfastest\fR and \fBscalar\fR require instruction |
1619 | set extensions to be available and will only appear if ZFS detects that they are | |
1620 | present at runtime. If multiple implementations of fletcher 4 are available, | |
1621 | the \fBfastest\fR will be chosen using a micro-benchmark. Selecting \fBscalar\fR | |
1622 | results in the original CPU-based calculation being used. Selecting any option | |
1623 | other than \fBfastest\fR and \fBscalar\fR results in vector instructions from | |
1624 | the respective CPU instruction set being used. | |
1eeb4562 JX |
1625 | .sp |
1626 | Default value: \fBfastest\fR. | |
1627 | .RE | |
1628 | ||
ba5ad9a4 GW |
1629 | .sp |
1630 | .ne 2 | |
1631 | .na | |
1632 | \fBzfs_free_bpobj_enabled\fR (int) | |
1633 | .ad | |
1634 | .RS 12n | |
1635 | Enable/disable the processing of the free_bpobj object. | |
1636 | .sp | |
1637 | Default value: \fB1\fR. | |
1638 | .RE | |
1639 | ||
36283ca2 MG |
1640 | .sp |
1641 | .ne 2 | |
1642 | .na | |
a1d477c2 | 1643 | \fBzfs_async_block_max_blocks\fR (ulong) |
36283ca2 MG |
1644 | .ad |
1645 | .RS 12n | |
1646 | Maximum number of blocks freed in a single txg. | |
1647 | .sp | |
4fe3a842 MA |
1648 | Default value: \fBULONG_MAX\fR (unlimited). |
1649 | .RE | |
1650 | ||
1651 | .sp | |
1652 | .ne 2 | |
1653 | .na | |
1654 | \fBzfs_max_async_dedup_frees\fR (ulong) | |
1655 | .ad | |
1656 | .RS 12n | |
1657 | Maximum number of dedup blocks freed in a single txg. | |
1658 | .sp | |
36283ca2 MG |
1659 | Default value: \fB100,000\fR. |
1660 | .RE | |
1661 | ||
ca0845d5 PD |
1662 | .sp |
1663 | .ne 2 | |
1664 | .na | |
1665 | \fBzfs_override_estimate_recordsize\fR (ulong) | |
1666 | .ad | |
1667 | .RS 12n | |
1668 | Record size calculation override for zfs send estimates. | |
1669 | .sp | |
1670 | Default value: \fB0\fR. | |
1671 | .RE | |
1672 | ||
e8b96c60 MA |
1673 | .sp |
1674 | .ne 2 | |
1675 | .na | |
1676 | \fBzfs_vdev_async_read_max_active\fR (int) | |
1677 | .ad | |
1678 | .RS 12n | |
83426735 | 1679 | Maximum asynchronous read I/Os active to each device. |
e8b96c60 MA |
1680 | See the section "ZFS I/O SCHEDULER". |
1681 | .sp | |
1682 | Default value: \fB3\fR. | |
1683 | .RE | |
1684 | ||
1685 | .sp | |
1686 | .ne 2 | |
1687 | .na | |
1688 | \fBzfs_vdev_async_read_min_active\fR (int) | |
1689 | .ad | |
1690 | .RS 12n | |
1691 | Minimum asynchronous read I/Os active to each device. | |
1692 | See the section "ZFS I/O SCHEDULER". | |
1693 | .sp | |
1694 | Default value: \fB1\fR. | |
1695 | .RE | |
1696 | ||
1697 | .sp | |
1698 | .ne 2 | |
1699 | .na | |
1700 | \fBzfs_vdev_async_write_active_max_dirty_percent\fR (int) | |
1701 | .ad | |
1702 | .RS 12n | |
1703 | When the pool has more than | |
1704 | \fBzfs_vdev_async_write_active_max_dirty_percent\fR dirty data, use | |
1705 | \fBzfs_vdev_async_write_max_active\fR to limit active async writes. If | |
1706 | the dirty data is between min and max, the active I/O limit is linearly | |
1707 | interpolated. See the section "ZFS I/O SCHEDULER". | |
1708 | .sp | |
be54a13c | 1709 | Default value: \fB60\fR%. |
e8b96c60 MA |
1710 | .RE |
1711 | ||
1712 | .sp | |
1713 | .ne 2 | |
1714 | .na | |
1715 | \fBzfs_vdev_async_write_active_min_dirty_percent\fR (int) | |
1716 | .ad | |
1717 | .RS 12n | |
1718 | When the pool has less than | |
1719 | \fBzfs_vdev_async_write_active_min_dirty_percent\fR dirty data, use | |
1720 | \fBzfs_vdev_async_write_min_active\fR to limit active async writes. If | |
1721 | the dirty data is between min and max, the active I/O limit is linearly | |
1722 | interpolated. See the section "ZFS I/O SCHEDULER". | |
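.sp
As a worked example of the interpolation using the default values
(\fBzfs_vdev_async_write_min_active\fR=2, \fBzfs_vdev_async_write_max_active\fR=10,
min dirty 30%, max dirty 60%), a pool that is 45% dirty would be allowed
approximately:
.sp
.nf
2 + (45 - 30) / (60 - 30) * (10 - 2) = 6 active async write I/Os
.fi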
1723 | .sp | |
be54a13c | 1724 | Default value: \fB30\fR%. |
e8b96c60 MA |
1725 | .RE |
1726 | ||
1727 | .sp | |
1728 | .ne 2 | |
1729 | .na | |
1730 | \fBzfs_vdev_async_write_max_active\fR (int) | |
1731 | .ad | |
1732 | .RS 12n | |
83426735 | 1733 | Maximum asynchronous write I/Os active to each device. |
e8b96c60 MA |
1734 | See the section "ZFS I/O SCHEDULER". |
1735 | .sp | |
1736 | Default value: \fB10\fR. | |
1737 | .RE | |
1738 | ||
1739 | .sp | |
1740 | .ne 2 | |
1741 | .na | |
1742 | \fBzfs_vdev_async_write_min_active\fR (int) | |
1743 | .ad | |
1744 | .RS 12n | |
1745 | Minimum asynchronous write I/Os active to each device. | |
1746 | See the section "ZFS I/O SCHEDULER". | |
1747 | .sp | |
06226b59 D |
1748 | Lower values are associated with better latency on rotational media but poorer |
1749 | resilver performance. The default value of 2 was chosen as a compromise. A | |
1750 | value of 3 has been shown to improve resilver performance further at a cost of | |
1751 | further increasing latency. | |
1752 | .sp | |
1753 | Default value: \fB2\fR. | |
e8b96c60 MA |
1754 | .RE |
1755 | ||
619f0976 GW |
1756 | .sp |
1757 | .ne 2 | |
1758 | .na | |
1759 | \fBzfs_vdev_initializing_max_active\fR (int) | |
1760 | .ad | |
1761 | .RS 12n | |
1762 | Maximum initializing I/Os active to each device. | |
1763 | See the section "ZFS I/O SCHEDULER". | |
1764 | .sp | |
1765 | Default value: \fB1\fR. | |
1766 | .RE | |
1767 | ||
1768 | .sp | |
1769 | .ne 2 | |
1770 | .na | |
1771 | \fBzfs_vdev_initializing_min_active\fR (int) | |
1772 | .ad | |
1773 | .RS 12n | |
1774 | Minimum initializing I/Os active to each device. | |
1775 | See the section "ZFS I/O SCHEDULER". | |
1776 | .sp | |
1777 | Default value: \fB1\fR. | |
1778 | .RE | |
1779 | ||
e8b96c60 MA |
1780 | .sp |
1781 | .ne 2 | |
1782 | .na | |
1783 | \fBzfs_vdev_max_active\fR (int) | |
1784 | .ad | |
1785 | .RS 12n | |
1786 | The maximum number of I/Os active to each device. Ideally, this will be >= | |
1787 | the sum of each queue's max_active. It must be at least the sum of each | |
1788 | queue's min_active. See the section "ZFS I/O SCHEDULER". | |
1789 | .sp | |
1790 | Default value: \fB1,000\fR. | |
1791 | .RE | |
1792 | ||
619f0976 GW |
1793 | .sp |
1794 | .ne 2 | |
1795 | .na | |
1796 | \fBzfs_vdev_removal_max_active\fR (int) | |
1797 | .ad | |
1798 | .RS 12n | |
1799 | Maximum removal I/Os active to each device. | |
1800 | See the section "ZFS I/O SCHEDULER". | |
1801 | .sp | |
1802 | Default value: \fB2\fR. | |
1803 | .RE | |
1804 | ||
1805 | .sp | |
1806 | .ne 2 | |
1807 | .na | |
1808 | \fBzfs_vdev_removal_min_active\fR (int) | |
1809 | .ad | |
1810 | .RS 12n | |
1811 | Minimum removal I/Os active to each device. | |
1812 | See the section "ZFS I/O SCHEDULER". | |
1813 | .sp | |
1814 | Default value: \fB1\fR. | |
1815 | .RE | |
1816 | ||
e8b96c60 MA |
1817 | .sp |
1818 | .ne 2 | |
1819 | .na | |
1820 | \fBzfs_vdev_scrub_max_active\fR (int) | |
1821 | .ad | |
1822 | .RS 12n | |
83426735 | 1823 | Maximum scrub I/Os active to each device. |
e8b96c60 MA |
1824 | See the section "ZFS I/O SCHEDULER". |
1825 | .sp | |
1826 | Default value: \fB2\fR. | |
1827 | .RE | |
1828 | ||
1829 | .sp | |
1830 | .ne 2 | |
1831 | .na | |
1832 | \fBzfs_vdev_scrub_min_active\fR (int) | |
1833 | .ad | |
1834 | .RS 12n | |
1835 | Minimum scrub I/Os active to each device. | |
1836 | See the section "ZFS I/O SCHEDULER". | |
1837 | .sp | |
1838 | Default value: \fB1\fR. | |
1839 | .RE | |
1840 | ||
1841 | .sp | |
1842 | .ne 2 | |
1843 | .na | |
1844 | \fBzfs_vdev_sync_read_max_active\fR (int) | |
1845 | .ad | |
1846 | .RS 12n | |
83426735 | 1847 | Maximum synchronous read I/Os active to each device. |
e8b96c60 MA |
1848 | See the section "ZFS I/O SCHEDULER". |
1849 | .sp | |
1850 | Default value: \fB10\fR. | |
1851 | .RE | |
1852 | ||
1853 | .sp | |
1854 | .ne 2 | |
1855 | .na | |
1856 | \fBzfs_vdev_sync_read_min_active\fR (int) | |
1857 | .ad | |
1858 | .RS 12n | |
1859 | Minimum synchronous read I/Os active to each device. | |
1860 | See the section "ZFS I/O SCHEDULER". | |
1861 | .sp | |
1862 | Default value: \fB10\fR. | |
1863 | .RE | |
1864 | ||
1865 | .sp | |
1866 | .ne 2 | |
1867 | .na | |
1868 | \fBzfs_vdev_sync_write_max_active\fR (int) | |
1869 | .ad | |
1870 | .RS 12n | |
83426735 | 1871 | Maximum synchronous write I/Os active to each device. |
e8b96c60 MA |
1872 | See the section "ZFS I/O SCHEDULER". |
1873 | .sp | |
1874 | Default value: \fB10\fR. | |
1875 | .RE | |
1876 | ||
1877 | .sp | |
1878 | .ne 2 | |
1879 | .na | |
1880 | \fBzfs_vdev_sync_write_min_active\fR (int) | |
1881 | .ad | |
1882 | .RS 12n | |
1883 | Minimum synchronous write I/Os active to each device. | |
1884 | See the section "ZFS I/O SCHEDULER". | |
1885 | .sp | |
1886 | Default value: \fB10\fR. | |
1887 | .RE | |
1888 | ||
1b939560 BB |
1889 | .sp |
1890 | .ne 2 | |
1891 | .na | |
1892 | \fBzfs_vdev_trim_max_active\fR (int) | |
1893 | .ad | |
1894 | .RS 12n | |
1895 | Maximum trim/discard I/Os active to each device. | |
1896 | See the section "ZFS I/O SCHEDULER". | |
1897 | .sp | |
1898 | Default value: \fB2\fR. | |
1899 | .RE | |
1900 | ||
1901 | .sp | |
1902 | .ne 2 | |
1903 | .na | |
1904 | \fBzfs_vdev_trim_min_active\fR (int) | |
1905 | .ad | |
1906 | .RS 12n | |
1907 | Minimum trim/discard I/Os active to each device. | |
1908 | See the section "ZFS I/O SCHEDULER". | |
1909 | .sp | |
1910 | Default value: \fB1\fR. | |
1911 | .RE | |
1912 | ||
3dfb57a3 DB |
1913 | .sp |
1914 | .ne 2 | |
1915 | .na | |
1916 | \fBzfs_vdev_queue_depth_pct\fR (int) | |
1917 | .ad | |
1918 | .RS 12n | |
e815485f TC |
1919 | Maximum number of queued allocations per top-level vdev, expressed as | |
1920 | a percentage of \fBzfs_vdev_async_write_max_active\fR. This allows the | |
1921 | system to detect devices that are more capable of handling allocations | |
1922 | and to allocate more blocks to those devices. It allows for dynamic | |
1923 | allocation distribution when devices are imbalanced, as fuller devices | |
1924 | will tend to be slower than empty devices. | |
1925 | ||
1926 | See also \fBzio_dva_throttle_enabled\fR. | |
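.sp
As a worked example with the defaults, each top-level vdev may queue up to
1000% of \fBzfs_vdev_async_write_max_active\fR, i.e. 10 * 10 = 100 allocations.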
3dfb57a3 | 1927 | .sp |
be54a13c | 1928 | Default value: \fB1000\fR%. |
3dfb57a3 DB |
1929 | .RE |
1930 | ||
29714574 TF |
1931 | .sp |
1932 | .ne 2 | |
1933 | .na | |
1934 | \fBzfs_expire_snapshot\fR (int) | |
1935 | .ad | |
1936 | .RS 12n | |
1937 | Seconds before an automounted snapshot under .zfs/snapshot expires and is unmounted. | |
1938 | .sp | |
1939 | Default value: \fB300\fR. | |
1940 | .RE | |
1941 | ||
0500e835 BB |
1942 | .sp |
1943 | .ne 2 | |
1944 | .na | |
1945 | \fBzfs_admin_snapshot\fR (int) | |
1946 | .ad | |
1947 | .RS 12n | |
1948 | Allow the creation, removal, or renaming of entries in the .zfs/snapshot | |
1949 | directory to cause the creation, destruction, or renaming of snapshots. | |
1950 | When enabled this functionality works both locally and over NFS exports | |
1951 | which have the 'no_root_squash' option set. This functionality is disabled | |
1952 | by default. | |
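.sp
For example, when enabled, a snapshot can be created by making a directory
under the hidden snapshot directory (\fBtank/fs\fR is a hypothetical dataset
mounted at \fB/tank/fs\fR):
.sp
.nf
# echo 1 > /sys/module/zfs/parameters/zfs_admin_snapshot
# mkdir /tank/fs/.zfs/snapshot/mysnap
.fi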
1953 | .sp | |
1954 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
1955 | .RE | |
1956 | ||
29714574 TF |
1957 | .sp |
1958 | .ne 2 | |
1959 | .na | |
1960 | \fBzfs_flags\fR (int) | |
1961 | .ad | |
1962 | .RS 12n | |
33b6dbbc NB |
1963 | Set additional debugging flags. The following flags may be bitwise-or'd | |
1964 | together; an example follows the table. | |
1965 | .sp | |
1966 | .TS | |
1967 | box; | |
1968 | rB lB | |
1969 | lB lB | |
1970 | r l. | |
1971 | Value Symbolic Name | |
1972 | Description | |
1973 | _ | |
1974 | 1 ZFS_DEBUG_DPRINTF | |
1975 | Enable dprintf entries in the debug log. | |
1976 | _ | |
1977 | 2 ZFS_DEBUG_DBUF_VERIFY * | |
1978 | Enable extra dbuf verifications. | |
1979 | _ | |
1980 | 4 ZFS_DEBUG_DNODE_VERIFY * | |
1981 | Enable extra dnode verifications. | |
1982 | _ | |
1983 | 8 ZFS_DEBUG_SNAPNAMES | |
1984 | Enable snapshot name verification. | |
1985 | _ | |
1986 | 16 ZFS_DEBUG_MODIFY | |
1987 | Check for illegally modified ARC buffers. | |
1988 | _ | |
33b6dbbc NB |
1989 | 64 ZFS_DEBUG_ZIO_FREE |
1990 | Enable verification of block frees. | |
1991 | _ | |
1992 | 128 ZFS_DEBUG_HISTOGRAM_VERIFY | |
1993 | Enable extra spacemap histogram verifications. | |
8740cf4a NB |
1994 | _ |
1995 | 256 ZFS_DEBUG_METASLAB_VERIFY | |
1996 | Verify space accounting on disk matches in-core range_trees. | |
1997 | _ | |
1998 | 512 ZFS_DEBUG_SET_ERROR | |
1999 | Enable SET_ERROR and dprintf entries in the debug log. | |
1b939560 BB |
2000 | _ |
2001 | 1024 ZFS_DEBUG_INDIRECT_REMAP | |
2002 | Verify split blocks created by device removal. | |
2003 | _ | |
2004 | 2048 ZFS_DEBUG_TRIM | |
2005 | Verify TRIM ranges are always within the allocatable range tree. | |
93e28d66 SD |
2006 | _ |
2007 | 4096 ZFS_DEBUG_LOG_SPACEMAP | |
2008 | Verify that the log summary is consistent with the spacemap log | |
2009 | and enable zfs_dbgmsgs for metaslab loading and flushing. | |
33b6dbbc NB |
2010 | .TE |
2011 | .sp | |
2012 | * Requires debug build. | |
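.sp
For example, to enable both dprintf entries and SET_ERROR logging, OR the two
flag values together (1 | 512 = 513) and write the result to the module
parameter (standard Linux location shown):
.sp
.nf
# echo 513 > /sys/module/zfs/parameters/zfs_flags
.fi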
29714574 | 2013 | .sp |
33b6dbbc | 2014 | Default value: \fB0\fR. |
29714574 TF |
2015 | .RE |
2016 | ||
fbeddd60 MA |
2017 | .sp |
2018 | .ne 2 | |
2019 | .na | |
2020 | \fBzfs_free_leak_on_eio\fR (int) | |
2021 | .ad | |
2022 | .RS 12n | |
2023 | If destroy encounters an EIO while reading metadata (e.g. indirect | |
2024 | blocks), space referenced by the missing metadata cannot be freed. | |
2025 | Normally this causes the background destroy to become "stalled", as | |
2026 | it is unable to make forward progress. While in this stalled state, | |
2027 | all remaining space to free from the error-encountering filesystem is | |
2028 | "temporarily leaked". Set this flag to cause it to ignore the EIO, | |
2029 | permanently leak the space from indirect blocks that cannot be read, | |
2030 | and continue to free everything else that it can. | |
2031 | ||
2032 | The default, "stalling" behavior is useful if the storage partially | |
2033 | fails (i.e. some but not all i/os fail), and then later recovers. In | |
2034 | this case, we will be able to continue pool operations while it is | |
2035 | partially failed, and when it recovers, we can continue to free the | |
2036 | space, with no leaks. However, note that this case is actually | |
2037 | fairly rare. | |
2038 | ||
2039 | Typically pools either (a) fail completely (but perhaps temporarily, | |
2040 | e.g. a top-level vdev going offline), or (b) have localized, | |
2041 | permanent errors (e.g. disk returns the wrong data due to bit flip or | |
2042 | firmware bug). In case (a), this setting does not matter because the | |
2043 | pool will be suspended and the sync thread will not be able to make | |
2044 | forward progress regardless. In case (b), because the error is | |
2045 | permanent, the best we can do is leak the minimum amount of space, | |
2046 | which is what setting this flag will do. Therefore, it is reasonable | |
2047 | for this flag to normally be set, but we chose the more conservative | |
2048 | approach of not setting it, so that there is no possibility of | |
2049 | leaking space in the "partial temporary" failure case. | |
2050 | .sp | |
2051 | Default value: \fB0\fR. | |
2052 | .RE | |
2053 | ||
29714574 TF |
2054 | .sp |
2055 | .ne 2 | |
2056 | .na | |
2057 | \fBzfs_free_min_time_ms\fR (int) | |
2058 | .ad | |
2059 | .RS 12n | |
6146e17e | 2060 | During a \fBzfs destroy\fR operation using \fBfeature@async_destroy\fR a minimum |
83426735 | 2061 | of this much time will be spent working on freeing blocks per txg. |
29714574 TF |
2062 | .sp |
2063 | Default value: \fB1,000\fR. | |
2064 | .RE | |
2065 | ||
67709516 D |
2066 | .sp |
2067 | .ne 2 | |
2068 | .na | |
2069 | \fBzfs_obsolete_min_time_ms\fR (int) | |
2070 | .ad | |
2071 | .RS 12n | |
2072 | Similar to \fBzfs_free_min_time_ms\fR but for cleanup of old indirection records | |
2073 | for removed vdevs. | |
2074 | .sp | |
2075 | Default value: \fB500\fR. | |
2076 | .RE | |
2077 | ||
29714574 TF |
2078 | .sp |
2079 | .ne 2 | |
2080 | .na | |
2081 | \fBzfs_immediate_write_sz\fR (long) | |
2082 | .ad | |
2083 | .RS 12n | |
83426735 | 2084 | Largest data block to write to zil. Larger blocks will be treated as if the |
6146e17e | 2085 | dataset being written to had the property setting \fBlogbias=throughput\fR. |
29714574 TF |
2086 | .sp |
2087 | Default value: \fB32,768\fR. | |
2088 | .RE | |
2089 | ||
619f0976 GW |
2090 | .sp |
2091 | .ne 2 | |
2092 | .na | |
2093 | \fBzfs_initialize_value\fR (ulong) | |
2094 | .ad | |
2095 | .RS 12n | |
2096 | Pattern written to vdev free space by \fBzpool initialize\fR. | |
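.sp
Since the pattern is a module parameter, a custom value is typically set at
module load time, for example through a modprobe configuration file (a
sketch; the file name is arbitrary):
.sp
.nf
# /etc/modprobe.d/zfs.conf
options zfs zfs_initialize_value=0x0123456789abcdef
.fi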
2097 | .sp | |
2098 | Default value: \fB16,045,690,984,833,335,022\fR (0xdeadbeefdeadbeee). | |
2099 | .RE | |
2100 | ||
e60e158e JG |
2101 | .sp |
2102 | .ne 2 | |
2103 | .na | |
2104 | \fBzfs_initialize_chunk_size\fR (ulong) | |
2105 | .ad | |
2106 | .RS 12n | |
2107 | Size of writes used by \fBzpool initialize\fR. | |
2108 | This option is used by the test suite to facilitate testing. | |
2109 | .sp | |
2110 | Default value: \fB1,048,576\fR | |
2111 | .RE | |
2112 | ||
37f03da8 SH |
2113 | .sp |
2114 | .ne 2 | |
2115 | .na | |
2116 | \fBzfs_livelist_max_entries\fR (ulong) | |
2117 | .ad | |
2118 | .RS 12n | |
2119 | The threshold size (in block pointers) at which we create a new sub-livelist. | |
2120 | Larger sublists are more costly from a memory perspective but the fewer | |
2121 | sublists there are, the lower the cost of insertion. | |
2122 | .sp | |
2123 | Default value: \fB500,000\fR. | |
2124 | .RE | |
2125 | ||
2126 | .sp | |
2127 | .ne 2 | |
2128 | .na | |
2129 | \fBzfs_livelist_min_percent_shared\fR (int) | |
2130 | .ad | |
2131 | .RS 12n | |
2132 | If the amount of shared space between a snapshot and its clone drops below | |
2133 | this threshold, the clone turns off the livelist and reverts to the old deletion | |
2134 | method. This is in place because once a clone has been overwritten enough, | |
2135 | livelists no longer give us a benefit. | |
2136 | .sp | |
2137 | Default value: \fB75\fR. | |
2138 | .RE | |
2139 | ||
2140 | .sp | |
2141 | .ne 2 | |
2142 | .na | |
2143 | \fBzfs_livelist_condense_new_alloc\fR (int) | |
2144 | .ad | |
2145 | .RS 12n | |
2146 | Incremented each time an extra ALLOC blkptr is added to a livelist entry while | |
2147 | it is being condensed. | |
2148 | This option is used by the test suite to track race conditions. | |
2149 | .sp | |
2150 | Default value: \fB0\fR. | |
2151 | .RE | |
2152 | ||
2153 | .sp | |
2154 | .ne 2 | |
2155 | .na | |
2156 | \fBzfs_livelist_condense_sync_cancel\fR (int) | |
2157 | .ad | |
2158 | .RS 12n | |
2159 | Incremented each time livelist condensing is canceled while in | |
2160 | spa_livelist_condense_sync. | |
2161 | This option is used by the test suite to track race conditions. | |
2162 | .sp | |
2163 | Default value: \fB0\fR. | |
2164 | .RE | |
2165 | ||
2166 | .sp | |
2167 | .ne 2 | |
2168 | .na | |
2169 | \fBzfs_livelist_condense_sync_pause\fR (int) | |
2170 | .ad | |
2171 | .RS 12n | |
2172 | When set, the livelist condense process pauses indefinitely before | |
2173 | executing the synctask - spa_livelist_condense_sync. | |
2174 | This option is used by the test suite to trigger race conditions. | |
2175 | .sp | |
2176 | Default value: \fB0\fR. | |
2177 | .RE | |
2178 | ||
2179 | .sp | |
2180 | .ne 2 | |
2181 | .na | |
2182 | \fBzfs_livelist_condense_zthr_cancel\fR (int) | |
2183 | .ad | |
2184 | .RS 12n | |
2185 | Incremented each time livelist condensing is canceled while in | |
2186 | spa_livelist_condense_cb. | |
2187 | This option is used by the test suite to track race conditions. | |
2188 | .sp | |
2189 | Default value: \fB0\fR. | |
2190 | .RE | |
2191 | ||
2192 | .sp | |
2193 | .ne 2 | |
2194 | .na | |
2195 | \fBzfs_livelist_condense_zthr_pause\fR (int) | |
2196 | .ad | |
2197 | .RS 12n | |
2198 | When set, the livelist condense process pauses indefinitely before | |
2199 | executing the open context condensing work in spa_livelist_condense_cb. | |
2200 | This option is used by the test suite to trigger race conditions. | |
2201 | .sp | |
2202 | Default value: \fB0\fR. | |
2203 | .RE | |
2204 | ||
917f475f JG |
2205 | .sp |
2206 | .ne 2 | |
2207 | .na | |
2208 | \fBzfs_lua_max_instrlimit\fR (ulong) | |
2209 | .ad | |
2210 | .RS 12n | |
2211 | The maximum execution time limit that can be set for a ZFS channel program, | |
2212 | specified as a number of Lua instructions. | |
2213 | .sp | |
2214 | Default value: \fB100,000,000\fR. | |
2215 | .RE | |
2216 | ||
2217 | .sp | |
2218 | .ne 2 | |
2219 | .na | |
2220 | \fBzfs_lua_max_memlimit\fR (ulong) | |
2221 | .ad | |
2222 | .RS 12n | |
2223 | The maximum memory limit that can be set for a ZFS channel program, specified | |
2224 | in bytes. | |
2225 | .sp | |
2226 | Default value: \fB104,857,600\fR. | |
2227 | .RE | |
2228 | ||
a7ed98d8 SD |
2229 | .sp |
2230 | .ne 2 | |
2231 | .na | |
2232 | \fBzfs_max_dataset_nesting\fR (int) | |
2233 | .ad | |
2234 | .RS 12n | |
2235 | The maximum depth of nested datasets. This value can be tuned temporarily to | |
2236 | fix existing datasets that exceed the predefined limit. | |
2237 | .sp | |
2238 | Default value: \fB50\fR. | |
2239 | .RE | |
2240 | ||
93e28d66 SD |
2241 | .sp |
2242 | .ne 2 | |
2243 | .na | |
2244 | \fBzfs_max_log_walking\fR (ulong) | |
2245 | .ad | |
2246 | .RS 12n | |
2247 | The number of past TXGs that the flushing algorithm of the log spacemap | |
2248 | feature uses to estimate incoming log blocks. | |
2249 | .sp | |
2250 | Default value: \fB5\fR. | |
2251 | .RE | |
2252 | ||
2253 | .sp | |
2254 | .ne 2 | |
2255 | .na | |
2256 | \fBzfs_max_logsm_summary_length\fR (ulong) | |
2257 | .ad | |
2258 | .RS 12n | |
2259 | Maximum number of rows allowed in the summary of the spacemap log. | |
2260 | .sp | |
2261 | Default value: \fB10\fR. | |
2262 | .RE | |
2263 | ||
f1512ee6 MA |
2264 | .sp |
2265 | .ne 2 | |
2266 | .na | |
2267 | \fBzfs_max_recordsize\fR (int) | |
2268 | .ad | |
2269 | .RS 12n | |
2270 | We currently support block sizes from 512 bytes to 16MB. The benefits of | |
ad796b8a | 2271 | larger blocks, and thus larger I/O, need to be weighed against the cost of |
f1512ee6 MA |
2272 | COWing a giant block to modify one byte. Additionally, very large blocks |
2273 | can have an impact on i/o latency, and also potentially on the memory | |
2274 | allocator. Therefore, we do not allow the recordsize to be set larger than | |
2275 | zfs_max_recordsize (default 1MB). Larger blocks can be created by changing | |
2276 | this tunable, and pools with larger blocks can always be imported and used, | |
2277 | regardless of this setting. | |
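.sp
For example, to experiment with 4 MB records on a hypothetical dataset
\fBtank/fs\fR, raise the tunable and then set the property:
.sp
.nf
# echo 4194304 > /sys/module/zfs/parameters/zfs_max_recordsize
# zfs set recordsize=4M tank/fs
.fi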
2278 | .sp | |
2279 | Default value: \fB1,048,576\fR. | |
2280 | .RE | |
2281 | ||
30af21b0 PD |
2282 | .sp |
2283 | .ne 2 | |
2284 | .na | |
2285 | \fBzfs_allow_redacted_dataset_mount\fR (int) | |
2286 | .ad | |
2287 | .RS 12n | |
2288 | Allow datasets received with redacted send/receive to be mounted. Normally | |
2289 | disabled because these datasets may be missing key data. | |
2290 | .sp | |
2291 | Default value: \fB0\fR. | |
2292 | .RE | |
2293 | ||
93e28d66 SD |
2294 | .sp |
2295 | .ne 2 | |
2296 | .na | |
2297 | \fBzfs_min_metaslabs_to_flush\fR (ulong) | |
2298 | .ad | |
2299 | .RS 12n | |
2300 | Minimum number of metaslabs to flush per dirty TXG | |
2301 | .sp | |
2302 | Default value: \fB1\fR. | |
2303 | .RE | |
2304 | ||
f3a7f661 GW |
2305 | .sp |
2306 | .ne 2 | |
2307 | .na | |
2308 | \fBzfs_metaslab_fragmentation_threshold\fR (int) | |
2309 | .ad | |
2310 | .RS 12n | |
2311 | Allow metaslabs to keep their active state as long as their fragmentation | |
2312 | percentage is less than or equal to this value. An active metaslab that | |
2313 | exceeds this threshold will no longer keep its active status allowing | |
2314 | better metaslabs to be selected. | |
2315 | .sp | |
2316 | Default value: \fB70\fR. | |
2317 | .RE | |
2318 | ||
2319 | .sp | |
2320 | .ne 2 | |
2321 | .na | |
2322 | \fBzfs_mg_fragmentation_threshold\fR (int) | |
2323 | .ad | |
2324 | .RS 12n | |
2325 | Metaslab groups are considered eligible for allocations if their | |
83426735 | 2326 | fragmentation metric (measured as a percentage) is less than or equal to |
f3a7f661 GW |
2327 | this value. If a metaslab group exceeds this threshold then it will be |
2328 | skipped unless all metaslab groups within the metaslab class have also | |
2329 | crossed this threshold. | |
2330 | .sp | |
cb020f0d | 2331 | Default value: \fB95\fR. |
f3a7f661 GW |
2332 | .RE |
2333 | ||
f4a4046b TC |
2334 | .sp |
2335 | .ne 2 | |
2336 | .na | |
2337 | \fBzfs_mg_noalloc_threshold\fR (int) | |
2338 | .ad | |
2339 | .RS 12n | |
2340 | Defines a threshold at which metaslab groups should be eligible for | |
2341 | allocations. The value is expressed as a percentage of free space | |
2342 | beyond which a metaslab group is always eligible for allocations. | |
2343 | If a metaslab group's free space is less than or equal to the | |
6b4e21c6 | 2344 | threshold, the allocator will avoid allocating to that group |
f4a4046b TC |
2345 | unless all groups in the pool have reached the threshold. Once all |
2346 | groups have reached the threshold, all groups are allowed to accept | |
2347 | allocations. The default value of 0 disables the feature and causes | |
2348 | all metaslab groups to be eligible for allocations. | |
2349 | ||
b58237e7 | 2350 | This parameter allows one to deal with pools having heavily imbalanced |
f4a4046b TC |
2351 | vdevs such as would be the case when a new vdev has been added. |
2352 | Setting the threshold to a non-zero percentage will stop allocations | |
2353 | from being made to vdevs that aren't filled to the specified percentage | |
2354 | and allow lesser filled vdevs to acquire more allocations than they | |
2355 | otherwise would under the old \fBzfs_mg_alloc_failures\fR facility. | |
2356 | .sp | |
2357 | Default value: \fB0\fR. | |
2358 | .RE | |
2359 | ||
cc99f275 DB |
2360 | .sp |
2361 | .ne 2 | |
2362 | .na | |
2363 | \fBzfs_ddt_data_is_special\fR (int) | |
2364 | .ad | |
2365 | .RS 12n | |
2366 | If enabled, ZFS will place DDT data into the special allocation class. | |
2367 | .sp | |
2368 | Default value: \fB1\fR. | |
2369 | .RE | |
2370 | ||
2371 | .sp | |
2372 | .ne 2 | |
2373 | .na | |
2374 | \fBzfs_user_indirect_is_special\fR (int) | |
2375 | .ad | |
2376 | .RS 12n | |
2377 | If enabled, ZFS will place user data (both file and zvol) indirect blocks | |
2378 | into the special allocation class. | |
2379 | .sp | |
2380 | Default value: \fB1\fR. | |
2381 | .RE | |
2382 | ||
379ca9cf OF |
2383 | .sp |
2384 | .ne 2 | |
2385 | .na | |
2386 | \fBzfs_multihost_history\fR (int) | |
2387 | .ad | |
2388 | .RS 12n | |
2389 | Historical statistics for the last N multihost updates will be available in | |
2390 | \fB/proc/spl/kstat/zfs/<pool>/multihost\fR | |
2391 | .sp | |
2392 | Default value: \fB0\fR. | |
2393 | .RE | |
2394 | ||
2395 | .sp | |
2396 | .ne 2 | |
2397 | .na | |
2398 | \fBzfs_multihost_interval\fR (ulong) | |
2399 | .ad | |
2400 | .RS 12n | |
2401 | Used to control the frequency of multihost writes which are performed when the | |
060f0226 OF |
2402 | \fBmultihost\fR pool property is on. This is one factor used to determine the |
2403 | length of the activity check during import. | |
379ca9cf | 2404 | .sp |
060f0226 OF |
2405 | The multihost write period is \fBzfs_multihost_interval / leaf-vdevs\fR |
2406 | milliseconds. On average a multihost write will be issued for each leaf vdev | |
2407 | every \fBzfs_multihost_interval\fR milliseconds. In practice, the observed | |
2408 | period can vary with the I/O load and this observed value is the delay which is | |
2409 | stored in the uberblock. | |
379ca9cf OF |
2410 | .sp |
2411 | Default value: \fB1000\fR. | |
2412 | .RE | |
2413 | ||
2414 | .sp | |
2415 | .ne 2 | |
2416 | .na | |
2417 | \fBzfs_multihost_import_intervals\fR (uint) | |
2418 | .ad | |
2419 | .RS 12n | |
2420 | Used to control the duration of the activity test on import. Smaller values of | |
2421 | \fBzfs_multihost_import_intervals\fR will reduce the import time but increase | |
2422 | the risk of failing to detect an active pool. The total activity check time is | |
060f0226 OF |
2423 | never allowed to drop below one second. |
2424 | .sp | |
2425 | On import the activity check waits a minimum amount of time determined by | |
2426 | \fBzfs_multihost_interval * zfs_multihost_import_intervals\fR, or the same | |
2427 | product computed on the host which last had the pool imported (whichever is | |
2428 | greater). The activity check time may be further extended if the value of mmp | |
2429 | delay found in the best uberblock indicates actual multihost updates happened | |
2430 | at longer intervals than \fBzfs_multihost_interval\fR. A minimum value of | |
2431 | \fB100ms\fR is enforced. | |
2432 | .sp | |
2433 | A value of 0 is ignored and treated as if it was set to 1. | |
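.sp
As a worked example, with the default settings the minimum activity check
performed on import lasts:
.sp
.nf
zfs_multihost_interval * zfs_multihost_import_intervals
= 1000 ms * 20 = 20 seconds
.fi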
379ca9cf | 2434 | .sp |
db2af93d | 2435 | Default value: \fB20\fR. |
379ca9cf OF |
2436 | .RE |
2437 | ||
2438 | .sp | |
2439 | .ne 2 | |
2440 | .na | |
2441 | \fBzfs_multihost_fail_intervals\fR (uint) | |
2442 | .ad | |
2443 | .RS 12n | |
060f0226 OF |
2444 | Controls the behavior of the pool when multihost write failures or delays are |
2445 | detected. | |
379ca9cf | 2446 | .sp |
060f0226 OF |
2447 | When \fBzfs_multihost_fail_intervals = 0\fR, multihost write failures or delays |
2448 | are ignored. The failures will still be reported to the ZED which depending on | |
2449 | its configuration may take action such as suspending the pool or offlining a | |
2450 | device. | |
2451 | ||
379ca9cf | 2452 | .sp |
060f0226 OF |
2453 | When \fBzfs_multihost_fail_intervals > 0\fR, the pool will be suspended if |
2454 | \fBzfs_multihost_fail_intervals * zfs_multihost_interval\fR milliseconds pass | |
2455 | without a successful mmp write. This guarantees the activity test will see | |
2456 | mmp writes if the pool is imported. A value of 1 is ignored and treated as | |
2457 | if it was set to 2. This is necessary to prevent the pool from being suspended | |
2458 | due to normal, small I/O latency variations. | |
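.sp
With the default values this means the pool will be suspended if roughly
10 * 1000 ms = 10 seconds pass without a successful mmp write.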
2459 | ||
379ca9cf | 2460 | .sp |
db2af93d | 2461 | Default value: \fB10\fR. |
379ca9cf OF |
2462 | .RE |
2463 | ||
29714574 TF |
2464 | .sp |
2465 | .ne 2 | |
2466 | .na | |
2467 | \fBzfs_no_scrub_io\fR (int) | |
2468 | .ad | |
2469 | .RS 12n | |
83426735 D |
2470 | Set for no scrub I/O. This results in scrubs not actually scrubbing data and |
2471 | simply doing a metadata crawl of the pool instead. | |
29714574 TF |
2472 | .sp |
2473 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2474 | .RE | |
2475 | ||
2476 | .sp | |
2477 | .ne 2 | |
2478 | .na | |
2479 | \fBzfs_no_scrub_prefetch\fR (int) | |
2480 | .ad | |
2481 | .RS 12n | |
83426735 | 2482 | Set to disable block prefetching for scrubs. |
29714574 TF |
2483 | .sp |
2484 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2485 | .RE | |
2486 | ||
29714574 TF |
2487 | .sp |
2488 | .ne 2 | |
2489 | .na | |
2490 | \fBzfs_nocacheflush\fR (int) | |
2491 | .ad | |
2492 | .RS 12n | |
53b1f5ea PS |
2493 | Disable cache flush operations on disks when writing. Setting this will |
2494 | cause pool corruption on power loss if a volatile out-of-order write cache | |
2495 | is enabled. | |
29714574 TF |
2496 | .sp |
2497 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2498 | .RE | |
2499 | ||
2500 | .sp | |
2501 | .ne 2 | |
2502 | .na | |
2503 | \fBzfs_nopwrite_enabled\fR (int) | |
2504 | .ad | |
2505 | .RS 12n | |
2506 | Enable NOP writes | |
2507 | .sp | |
2508 | Use \fB1\fR for yes (default) and \fB0\fR to disable. | |
2509 | .RE | |
2510 | ||
66aca247 DB |
2511 | .sp |
2512 | .ne 2 | |
2513 | .na | |
2514 | \fBzfs_dmu_offset_next_sync\fR (int) | |
2515 | .ad | |
2516 | .RS 12n | |
2517 | Enable forcing txg sync to find holes. When enabled, this forces ZFS to act | |
2518 | as prior versions did when the SEEK_HOLE or SEEK_DATA flags are used: if | |
2519 | a dnode is dirty, the txg is synced so that the hole information can be | |
2520 | found. | |
2521 | .sp | |
2522 | Use \fB1\fR for yes and \fB0\fR to disable (default). | |
2523 | .RE | |
2524 | ||
29714574 TF |
2525 | .sp |
2526 | .ne 2 | |
2527 | .na | |
b738bc5a | 2528 | \fBzfs_pd_bytes_max\fR (int) |
29714574 TF |
2529 | .ad |
2530 | .RS 12n | |
83426735 | 2531 | The number of bytes which should be prefetched during a pool traversal |
6146e17e | 2532 | (eg: \fBzfs send\fR or other data crawling operations) |
29714574 | 2533 | .sp |
74aa2ba2 | 2534 | Default value: \fB52,428,800\fR. |
29714574 TF |
2535 | .RE |
2536 | ||
bef78122 DQ |
2537 | .sp |
2538 | .ne 2 | |
2539 | .na | |
2540 | \fBzfs_per_txg_dirty_frees_percent \fR (ulong) | |
2541 | .ad | |
2542 | .RS 12n | |
65282ee9 AP |
2543 | Tunable to control percentage of dirtied indirect blocks from frees allowed |
2544 | into one TXG. After this threshold is crossed, additional frees will wait until | |
2545 | the next TXG. | |
bef78122 DQ |
2546 | A value of zero will disable this throttle. |
2547 | .sp | |
65282ee9 | 2548 | Default value: \fB5\fR, set to \fB0\fR to disable. |
bef78122 DQ |
2549 | .RE |
2550 | ||
29714574 TF |
2551 | .sp |
2552 | .ne 2 | |
2553 | .na | |
2554 | \fBzfs_prefetch_disable\fR (int) | |
2555 | .ad | |
2556 | .RS 12n | |
7f60329a MA |
2557 | This tunable disables predictive prefetch. Note that it leaves "prescient" |
2558 | prefetch (e.g. prefetch for zfs send) intact. Unlike predictive prefetch, | |
2559 | prescient prefetch never issues i/os that end up not being needed, so it | |
2560 | can't hurt performance. | |
29714574 TF |
2561 | .sp |
2562 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2563 | .RE | |
2564 | ||
5090f727 CZ |
2565 | .sp |
2566 | .ne 2 | |
2567 | .na | |
2568 | \fBzfs_qat_checksum_disable\fR (int) | |
2569 | .ad | |
2570 | .RS 12n | |
2571 | This tunable disables qat hardware acceleration for sha256 checksums. It | |
2572 | may be set after the zfs modules have been loaded to initialize the qat | |
2573 | hardware as long as support is compiled in and the qat driver is present. | |
2574 | .sp | |
2575 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2576 | .RE | |
2577 | ||
2578 | .sp | |
2579 | .ne 2 | |
2580 | .na | |
2581 | \fBzfs_qat_compress_disable\fR (int) | |
2582 | .ad | |
2583 | .RS 12n | |
2584 | This tunable disables qat hardware acceleration for gzip compression. It | |
2585 | may be set after the zfs modules have been loaded to initialize the qat | |
2586 | hardware as long as support is compiled in and the qat driver is present. | |
2587 | .sp | |
2588 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2589 | .RE | |
2590 | ||
2591 | .sp | |
2592 | .ne 2 | |
2593 | .na | |
2594 | \fBzfs_qat_encrypt_disable\fR (int) | |
2595 | .ad | |
2596 | .RS 12n | |
2597 | This tunable disables qat hardware acceleration for AES-GCM encryption. It | |
2598 | may be set after the zfs modules have been loaded to initialize the qat | |
2599 | hardware as long as support is compiled in and the qat driver is present. | |
2600 | .sp | |
2601 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2602 | .RE | |
2603 | ||
29714574 TF |
2604 | .sp |
2605 | .ne 2 | |
2606 | .na | |
2607 | \fBzfs_read_chunk_size\fR (long) | |
2608 | .ad | |
2609 | .RS 12n | |
2610 | Bytes to read per chunk | |
2611 | .sp | |
2612 | Default value: \fB1,048,576\fR. | |
2613 | .RE | |
2614 | ||
2615 | .sp | |
2616 | .ne 2 | |
2617 | .na | |
2618 | \fBzfs_read_history\fR (int) | |
2619 | .ad | |
2620 | .RS 12n | |
379ca9cf OF |
2621 | Historical statistics for the last N reads will be available in |
2622 | \fB/proc/spl/kstat/zfs/<pool>/reads\fR | |
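.sp
For example, to record the last 100 reads of a hypothetical pool named
\fBtank\fR and then inspect them:
.sp
.nf
# echo 100 > /sys/module/zfs/parameters/zfs_read_history
# cat /proc/spl/kstat/zfs/tank/reads
.fi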
29714574 | 2623 | .sp |
83426735 | 2624 | Default value: \fB0\fR (no data is kept). |
29714574 TF |
2625 | .RE |
2626 | ||
2627 | .sp | |
2628 | .ne 2 | |
2629 | .na | |
2630 | \fBzfs_read_history_hits\fR (int) | |
2631 | .ad | |
2632 | .RS 12n | |
2633 | Include cache hits in read history | |
2634 | .sp | |
2635 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2636 | .RE | |
2637 | ||
9e052db4 MA |
2638 | .sp |
2639 | .ne 2 | |
2640 | .na | |
4589f3ae BB |
2641 | \fBzfs_reconstruct_indirect_combinations_max\fR (int) |
2642 | .ad | |
2643 | .RS 12n | |
2644 | If an indirect split block contains more than this many possible unique | |
2645 | combinations when being reconstructed, consider it too computationally | |
2646 | expensive to check them all. Instead, try at most | |
2647 | \fBzfs_reconstruct_indirect_combinations_max\fR randomly-selected | |
2648 | combinations each time the block is accessed. This allows all segment | |
2649 | copies to participate fairly in the reconstruction when all combinations | |
2650 | cannot be checked and prevents repeated use of one bad copy. | |
2651 | .sp | |
64bdf63f | 2652 | Default value: \fB4096\fR. |
9e052db4 MA |
2653 | .RE |
2654 | ||
29714574 TF |
2655 | .sp |
2656 | .ne 2 | |
2657 | .na | |
2658 | \fBzfs_recover\fR (int) | |
2659 | .ad | |
2660 | .RS 12n | |
2661 | Set to attempt to recover from fatal errors. This should only be used as a | |
2662 | last resort, as it typically results in leaked space, or worse. | |
2663 | .sp | |
2664 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2665 | .RE | |
2666 | ||
7c9a4292 BB |
2667 | .sp |
2668 | .ne 2 | |
2669 | .na | |
2670 | \fBzfs_removal_ignore_errors\fR (int) | |
2671 | .ad | |
2672 | .RS 12n | |
2673 | .sp | |
2674 | Ignore hard IO errors during device removal. When set, if a device encounters | |
2675 | a hard IO error during the removal process, the removal will not be cancelled. | |
2676 | This can result in a normally recoverable block becoming permanently damaged | |
2677 | and is not recommended. This should only be used as a last resort when the | |
2678 | pool cannot be returned to a healthy state prior to removing the device. | |
2679 | .sp | |
2680 | Default value: \fB0\fR. | |
2681 | .RE | |
2682 | ||
53dce5ac MA |
2683 | .sp |
2684 | .ne 2 | |
2685 | .na | |
2686 | \fBzfs_removal_suspend_progress\fR (int) | |
2687 | .ad | |
2688 | .RS 12n | |
2689 | .sp | |
2690 | This is used by the test suite so that it can ensure that certain actions | |
2691 | happen while in the middle of a removal. | |
2692 | .sp | |
2693 | Default value: \fB0\fR. | |
2694 | .RE | |
2695 | ||
2696 | .sp | |
2697 | .ne 2 | |
2698 | .na | |
2699 | \fBzfs_remove_max_segment\fR (int) | |
2700 | .ad | |
2701 | .RS 12n | |
2702 | .sp | |
2703 | The largest contiguous segment that we will attempt to allocate when removing | |
2704 | a device. This can be no larger than 16MB. If there is a performance | |
2705 | problem with attempting to allocate large blocks, consider decreasing this. | |
2706 | .sp | |
2707 | Default value: \fB16,777,216\fR (16MB). | |
2708 | .RE | |
2709 | ||
67709516 D |
2710 | .sp |
2711 | .ne 2 | |
2712 | .na | |
2713 | \fBzfs_resilver_disable_defer\fR (int) | |
2714 | .ad | |
2715 | .RS 12n | |
2716 | Disables the \fBresilver_defer\fR feature, causing an operation that would | |
2717 | start a resilver to restart one in progress immediately. | |
2718 | .sp | |
2719 | Default value: \fB0\fR (feature enabled). | |
2720 | .RE | |
2721 | ||
29714574 TF |
2722 | .sp |
2723 | .ne 2 | |
2724 | .na | |
d4a72f23 | 2725 | \fBzfs_resilver_min_time_ms\fR (int) |
29714574 TF |
2726 | .ad |
2727 | .RS 12n | |
d4a72f23 TC |
2728 | Resilvers are processed by the sync thread. While resilvering it will spend |
2729 | at least this much time working on a resilver between txg flushes. | |
29714574 | 2730 | .sp |
d4a72f23 | 2731 | Default value: \fB3,000\fR. |
29714574 TF |
2732 | .RE |
2733 | ||
02638a30 TC |
2734 | .sp |
2735 | .ne 2 | |
2736 | .na | |
2737 | \fBzfs_scan_ignore_errors\fR (int) | |
2738 | .ad | |
2739 | .RS 12n | |
2740 | If set to a nonzero value, remove the DTL (dirty time list) upon | |
2741 | completion of a pool scan (scrub) even if there were unrepairable | |
2742 | errors. It is intended to be used during pool repair or recovery to | |
2743 | stop resilvering when the pool is next imported. | |
2744 | .sp | |
2745 | Default value: \fB0\fR. | |
2746 | .RE | |
2747 | ||
29714574 TF |
2748 | .sp |
2749 | .ne 2 | |
2750 | .na | |
d4a72f23 | 2751 | \fBzfs_scrub_min_time_ms\fR (int) |
29714574 TF |
2752 | .ad |
2753 | .RS 12n | |
d4a72f23 TC |
2754 | Scrubs are processed by the sync thread. While scrubbing it will spend |
2755 | at least this much time working on a scrub between txg flushes. | |
29714574 | 2756 | .sp |
d4a72f23 | 2757 | Default value: \fB1,000\fR. |
29714574 TF |
2758 | .RE |
2759 | ||
2760 | .sp | |
2761 | .ne 2 | |
2762 | .na | |
d4a72f23 | 2763 | \fBzfs_scan_checkpoint_intval\fR (int) |
29714574 TF |
2764 | .ad |
2765 | .RS 12n | |
d4a72f23 TC |
2766 | To preserve progress across reboots, the sequential scan algorithm periodically | |
2767 | needs to stop metadata scanning and issue all the verification I/Os to disk. | |
2768 | The frequency of this flushing is determined by the | |
a8577bdb | 2769 | \fBzfs_scan_checkpoint_intval\fR tunable. |
29714574 | 2770 | .sp |
d4a72f23 | 2771 | Default value: \fB7200\fR seconds (every 2 hours). |
29714574 TF |
2772 | .RE |
2773 | ||
2774 | .sp | |
2775 | .ne 2 | |
2776 | .na | |
d4a72f23 | 2777 | \fBzfs_scan_fill_weight\fR (int) |
29714574 TF |
2778 | .ad |
2779 | .RS 12n | |
d4a72f23 TC |
2780 | This tunable affects how scrub and resilver I/O segments are ordered. A higher |
2781 | number indicates that we care more about how filled in a segment is, while a | |
2782 | lower number indicates we care more about the size of the extent without | |
2783 | considering the gaps within a segment. This value is only tunable upon module | |
2784 | insertion. Changing the value afterwards will have no affect on scrub or | |
2785 | resilver performance. | |
29714574 | 2786 | .sp |
d4a72f23 | 2787 | Default value: \fB3\fR. |
29714574 TF |
2788 | .RE |
2789 | ||
2790 | .sp | |
2791 | .ne 2 | |
2792 | .na | |
d4a72f23 | 2793 | \fBzfs_scan_issue_strategy\fR (int) |
29714574 TF |
2794 | .ad |
2795 | .RS 12n | |
d4a72f23 TC |
2796 | Determines the order that data will be verified while scrubbing or resilvering. |
2797 | If set to \fB1\fR, data will be verified as sequentially as possible, given the | |
2798 | amount of memory reserved for scrubbing (see \fBzfs_scan_mem_lim_fact\fR). This | |
2799 | may improve scrub performance if the pool's data is very fragmented. If set to | |
2800 | \fB2\fR, the largest mostly-contiguous chunk of found data will be verified | |
2801 | first. By deferring scrubbing of small segments, we may later find adjacent data | |
2802 | to coalesce and increase the segment size. If set to \fB0\fR, zfs will use | |
2803 | strategy \fB1\fR during normal verification and strategy \fB2\fR while taking a | |
2804 | checkpoint. | |
29714574 | 2805 | .sp |
d4a72f23 TC |
2806 | Default value: \fB0\fR. |
2807 | .RE | |
2808 | ||
2809 | .sp | |
2810 | .ne 2 | |
2811 | .na | |
2812 | \fBzfs_scan_legacy\fR (int) | |
2813 | .ad | |
2814 | .RS 12n | |
2815 | A value of 0 indicates that scrubs and resilvers will gather metadata in | |
2816 | memory before issuing sequential I/O. A value of 1 indicates that the legacy | |
2817 | algorithm will be used where I/O is initiated as soon as it is discovered. | |
2818 | Changing this value to 0 will not affect scrubs or resilvers that are already | |
2819 | in progress. | |
2820 | .sp | |
2821 | Default value: \fB0\fR. | |
2822 | .RE | |
2823 | ||
2824 | .sp | |
2825 | .ne 2 | |
2826 | .na | |
2827 | \fBzfs_scan_max_ext_gap\fR (int) | |
2828 | .ad | |
2829 | .RS 12n | |
2830 | Indicates the largest gap in bytes between scrub / resilver I/Os that will still | |
2831 | be considered sequential for sorting purposes. Changing this value will not | |
2832 | affect scrubs or resilvers that are already in progress. | |
2833 | .sp | |
2834 | Default value: \fB2097152 (2 MB)\fR. | |
2835 | .RE | |
2836 | ||
2837 | .sp | |
2838 | .ne 2 | |
2839 | .na | |
2840 | \fBzfs_scan_mem_lim_fact\fR (int) | |
2841 | .ad | |
2842 | .RS 12n | |
2843 | Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. | |
2844 | This tunable determines the hard limit for I/O sorting memory usage. | |
2845 | When the hard limit is reached we stop scanning metadata and start issuing | |
2846 | data verification I/O. This is done until we get below the soft limit. | |
2847 | .sp | |
2848 | Default value: \fB20\fR which is 5% of RAM (1/20). | |
2849 | .RE | |
2850 | ||
2851 | .sp | |
2852 | .ne 2 | |
2853 | .na | |
2854 | \fBzfs_scan_mem_lim_soft_fact\fR (int) | |
2855 | .ad | |
2856 | .RS 12n | |
2857 | The fraction of the hard limit used to determine the soft limit for I/O sorting | |
ac3d4d0c | 2858 | by the sequential scan algorithm. When we cross this limit from below no action |
d4a72f23 TC |
2859 | is taken. When we cross this limit from above it is because we are issuing |
2860 | verification I/O. In this case (unless the metadata scan is done) we stop | |
2861 | issuing verification I/O and start scanning metadata again until we get to the | |
2862 | hard limit. | |
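.sp
As a worked example with the defaults on a system with 16 GiB of RAM:
.sp
.nf
hard limit = RAM / zfs_scan_mem_lim_fact       = 16 GiB / 20 ~= 819 MiB
soft limit = hard / zfs_scan_mem_lim_soft_fact = 819 MiB / 20 ~= 41 MiB
.fi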
2863 | .sp | |
2864 | Default value: \fB20\fR which is 5% of the hard limit (1/20). | |
2865 | .RE | |
2866 | ||
67709516 D |
2867 | .sp |
2868 | .ne 2 | |
2869 | .na | |
2870 | \fBzfs_scan_strict_mem_lim\fR (int) | |
2871 | .ad | |
2872 | .RS 12n | |
2873 | Enforces tight memory limits on pool scans when a sequential scan is in | |
2874 | progress. When disabled the memory limit may be exceeded by fast disks. | |
2875 | .sp | |
2876 | Default value: \fB0\fR. | |
2877 | .RE | |
2878 | ||
2879 | .sp | |
2880 | .ne 2 | |
2881 | .na | |
2882 | \fBzfs_scan_suspend_progress\fR (int) | |
2883 | .ad | |
2884 | .RS 12n | |
2885 | Freezes a scrub/resilver in progress without actually pausing it. Intended for | |
2886 | testing/debugging. | |
2887 | .sp | |
2888 | Default value: \fB0\fR. | |
2889 | .RE | |
2890 | ||
2891 | ||
d4a72f23 TC |
2892 | .sp |
2893 | .ne 2 | |
2894 | .na | |
2895 | \fBzfs_scan_vdev_limit\fR (int) | |
2896 | .ad | |
2897 | .RS 12n | |
2898 | Maximum amount of data that can be concurrently issued at once for scrubs and | |
2899 | resilvers per leaf device, given in bytes. | |
2900 | .sp | |
2901 | Default value: \fB41943040\fR. | |
29714574 TF |
2902 | .RE |
2903 | ||
fd8febbd TF |
2904 | .sp |
2905 | .ne 2 | |
2906 | .na | |
2907 | \fBzfs_send_corrupt_data\fR (int) | |
2908 | .ad | |
2909 | .RS 12n | |
83426735 | 2910 | Allow sending of corrupt data (ignore read/checksum errors when sending data) |
fd8febbd TF |
2911 | .sp |
2912 | Use \fB1\fR for yes and \fB0\fR for no (default). | |
2913 | .RE | |
2914 | ||
caf9dd20 BB |
2915 | .sp |
2916 | .ne 2 | |
2917 | .na | |
2918 | \fBzfs_send_unmodified_spill_blocks\fR (int) | |
2919 | .ad | |
2920 | .RS 12n | |
2921 | Include unmodified spill blocks in the send stream. Under certain circumstances | |
2922 | previous versions of ZFS could incorrectly remove the spill block from an | |
2923 | existing object. Including unmodified copies of the spill blocks creates a | |
2924 | backwards compatible stream which will recreate a spill block if it was | |
2925 | incorrectly removed. | |
2926 | .sp | |
2927 | Use \fB1\fR for yes (default) and \fB0\fR for no. | |
2928 | .RE | |
2929 | ||
30af21b0 PD |
2930 | .sp |
2931 | .ne 2 | |
2932 | .na | |
2933 | \fBzfs_send_no_prefetch_queue_ff\fR (int) | |
2934 | .ad | |
2935 | .RS 12n | |
2936 | The fill fraction of the \fBzfs send\fR internal queues. The fill fraction | |
2937 | controls the timing with which internal threads are woken up. | |
2938 | .sp | |
2939 | Default value: \fB20\fR. | |
2940 | .RE | |
2941 | ||
2942 | .sp | |
2943 | .ne 2 | |
2944 | .na | |
2945 | \fBzfs_send_no_prefetch_queue_length\fR (int) | |
2946 | .ad | |
2947 | .RS 12n | |
2948 | The maximum number of bytes allowed in \fBzfs send\fR's internal queues. | |
2949 | .sp | |
2950 | Default value: \fB1,048,576\fR. | |
2951 | .RE | |
2952 | ||
2953 | .sp | |
2954 | .ne 2 | |
2955 | .na | |
2956 | \fBzfs_send_queue_ff\fR (int) | |
2957 | .ad | |
2958 | .RS 12n | |
2959 | The fill fraction of the \fBzfs send\fR prefetch queue. The fill fraction | |
2960 | controls the timing with which internal threads are woken up. | |
2961 | .sp | |
2962 | Default value: \fB20\fR. | |
2963 | .RE | |
2964 | ||
3b0d9928 BB |
2965 | .sp |
2966 | .ne 2 | |
2967 | .na | |
2968 | \fBzfs_send_queue_length\fR (int) | |
2969 | .ad | |
2970 | .RS 12n | |
30af21b0 PD |
2971 | The maximum number of bytes that will be prefetched by \fBzfs send\fR. | |
2972 | This value must be at least twice the maximum block size in use. | |
3b0d9928 BB |
2973 | .sp |
2974 | Default value: \fB16,777,216\fR. | |
2975 | .RE | |
2976 | ||
30af21b0 PD |
2977 | .sp |
2978 | .ne 2 | |
2979 | .na | |
2980 | \fBzfs_recv_queue_ff\fR (int) | |
2981 | .ad | |
2982 | .RS 12n | |
2983 | The fill fraction of the \fBzfs receive\fR queue. The fill fraction | |
2984 | controls the timing with which internal threads are woken up. | |
2985 | .sp | |
2986 | Default value: \fB20\fR. | |
2987 | .RE | |
2988 | ||
3b0d9928 BB |
2989 | .sp |
2990 | .ne 2 | |
2991 | .na | |
2992 | \fBzfs_recv_queue_length\fR (int) | |
2993 | .ad | |
2994 | .RS 12n | |
3b0d9928 BB |
2995 | The maximum number of bytes allowed in the \fBzfs receive\fR queue. This value |
2996 | must be at least twice the maximum block size in use. | |
2997 | .sp | |
2998 | Default value: \fB16,777,216\fR. | |
2999 | .RE | |
3000 | ||
7261fc2e MA |
3001 | .sp |
3002 | .ne 2 | |
3003 | .na | |
3004 | \fBzfs_recv_write_batch_size\fR (int) | |
3005 | .ad | |
3006 | .RS 12n | |
3007 | The maximum amount of data (in bytes) that \fBzfs receive\fR will write in | |
3008 | one DMU transaction. This is the uncompressed size, even when receiving a | |
3009 | compressed send stream. This setting will not reduce the write size below | |
3010 | a single block. Capped at a maximum of 32MB. | |
3011 | .sp | |
3012 | Default value: \fB1MB\fR. | |
3013 | .RE | |
3014 | ||
30af21b0 PD |
3015 | .sp |
3016 | .ne 2 | |
3017 | .na | |
3018 | \fBzfs_override_estimate_recordsize\fR (ulong) | |
3019 | .ad | |
3020 | .RS 12n | |
3021 | Setting this variable overrides the default logic for estimating block | |
3022 | sizes when doing a zfs send. The default heuristic is that the average | |
3023 | block size will be the current recordsize. Override this value if most data | |
3024 | in your dataset is not of that size and you require accurate zfs send size | |
3025 | estimates. | |
3026 | .sp | |
3027 | Default value: \fB0\fR. | |
3028 | .RE | |
3029 | ||
29714574 TF |
3030 | .sp |
3031 | .ne 2 | |
3032 | .na | |
3033 | \fBzfs_sync_pass_deferred_free\fR (int) | |
3034 | .ad | |
3035 | .RS 12n | |
83426735 | 3036 | Flushing of data to disk is done in passes. Defer frees starting in this pass |
29714574 TF |
3037 | .sp |
3038 | Default value: \fB2\fR. | |
3039 | .RE | |
3040 | ||
d2734cce SD |
3041 | .sp |
3042 | .ne 2 | |
3043 | .na | |
3044 | \fBzfs_spa_discard_memory_limit\fR (int) | |
3045 | .ad | |
3046 | .RS 12n | |
3047 | Maximum memory used for prefetching a checkpoint's space map on each | |
3048 | vdev while discarding the checkpoint. | |
3049 | .sp | |
3050 | Default value: \fB16,777,216\fR. | |
3051 | .RE | |
3052 | ||
1f02ecc5 D |
3053 | .sp |
3054 | .ne 2 | |
3055 | .na | |
3056 | \fBzfs_special_class_metadata_reserve_pct\fR (int) | |
3057 | .ad | |
3058 | .RS 12n | |
3059 | Only allow small data blocks to be allocated on the special and dedup vdev | |
3060 | types when the available free space percentage on these vdevs exceeds this | |
3061 | value. This ensures reserved space is available for pool meta data as the | |
3062 | special vdevs approach capacity. | |
3063 | .sp | |
3064 | Default value: \fB25\fR. | |
3065 | .RE | |
3066 | ||
29714574 TF |
3067 | .sp |
3068 | .ne 2 | |
3069 | .na | |
3070 | \fBzfs_sync_pass_dont_compress\fR (int) | |
3071 | .ad | |
3072 | .RS 12n | |
be89734a MA |
3073 | Starting in this sync pass, we disable compression (including of metadata). |
3074 | With the default setting, in practice, we don't have this many sync passes, | |
3075 | so this has no effect. | |
3076 | .sp | |
3077 | The original intent was that disabling compression would help the sync passes | |
3078 | to converge. However, in practice disabling compression increases the average | |
3079 | number of sync passes, because when we turn compression off, many blocks' | |
3080 | sizes will change and thus we have to re-allocate (not overwrite) them. It | |
3081 | also increases the number of 128KB allocations (e.g. for indirect blocks and | |
3082 | spacemaps) because these will not be compressed. The 128K allocations are | |
3083 | especially detrimental to performance on highly fragmented systems, which may | |
3084 | have very few free segments of this size, and may need to load new metaslabs | |
3085 | to satisfy 128K allocations. | |
29714574 | 3086 | .sp |
be89734a | 3087 | Default value: \fB8\fR. |
29714574 TF |
3088 | .RE |
3089 | ||
3090 | .sp | |
3091 | .ne 2 | |
3092 | .na | |
3093 | \fBzfs_sync_pass_rewrite\fR (int) | |
3094 | .ad | |
3095 | .RS 12n | |
83426735 | 3096 | Rewrite new block pointers starting in this pass |
29714574 TF |
3097 | .sp |
3098 | Default value: \fB2\fR. | |
3099 | .RE | |
3100 | ||
a032ac4b BB |
3101 | .sp |
3102 | .ne 2 | |
3103 | .na | |
3104 | \fBzfs_sync_taskq_batch_pct\fR (int) | |
3105 | .ad | |
3106 | .RS 12n | |
3107 | This controls the number of threads used by the dp_sync_taskq. The default | |
3108 | value of 75% will create a maximum of one thread per CPU. | |
3109 | .sp | |
be54a13c | 3110 | Default value: \fB75\fR%. |
a032ac4b BB |
3111 | .RE |
3112 | ||
1b939560 BB |
3113 | .sp |
3114 | .ne 2 | |
3115 | .na | |
67709516 | 3116 | \fBzfs_trim_extent_bytes_max\fR (uint) |
1b939560 BB |
3117 | .ad |
3118 | .RS 12n | |
3119 | Maximum size of a TRIM command. Ranges larger than this will be split into | |
3120 | chunks no larger than \fBzfs_trim_extent_bytes_max\fR bytes before being | |
3121 | issued to the device. | |
3122 | .sp | |
3123 | Default value: \fB134,217,728\fR. | |
3124 | .RE | |
3125 | ||
3126 | .sp | |
3127 | .ne 2 | |
3128 | .na | |
67709516 | 3129 | \fBzfs_trim_extent_bytes_min\fR (uint) |
1b939560 BB |
3130 | .ad |
3131 | .RS 12n | |
3132 | Minimum size of TRIM commands. TRIM ranges smaller than this will be skipped | |
3133 | unless they're part of a larger range which was broken into chunks. This is | |
3134 | done because it's common for these small TRIMs to negatively impact overall | |
3135 | performance. This value can be set to 0 to TRIM all unallocated space. | |
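.sp
For example, to TRIM even the smallest free ranges during a one-off manual
TRIM of a hypothetical pool \fBtank\fR:
.sp
.nf
# echo 0 > /sys/module/zfs/parameters/zfs_trim_extent_bytes_min
# zpool trim tank
.fi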
3136 | .sp | |
3137 | Default value: \fB32,768\fR. | |
3138 | .RE | |
3139 | ||
3140 | .sp | |
3141 | .ne 2 | |
3142 | .na | |
67709516 | 3143 | \fBzfs_trim_metaslab_skip\fR (uint) |
1b939560 BB |
3144 | .ad |
3145 | .RS 12n | |
3146 | Skip uninitialized metaslabs during the TRIM process. This option is useful | |
3147 | for pools constructed from large thinly-provisioned devices where TRIM | |
3148 | operations are slow. As a pool ages, an increasing fraction of the pool's | |
3149 | metaslabs will be initialized, progressively degrading the usefulness of | |
3150 | this option. This setting is stored when starting a manual TRIM and will | |
3151 | persist for the duration of the requested TRIM. | |
3152 | .sp | |
3153 | Default value: \fB0\fR. | |
3154 | .RE | |
3155 | ||
3156 | .sp | |
3157 | .ne 2 | |
3158 | .na | |
67709516 | 3159 | \fBzfs_trim_queue_limit\fR (uint) |
1b939560 BB |
3160 | .ad |
3161 | .RS 12n | |
3162 | Maximum number of queued TRIMs outstanding per leaf vdev. The number of | |
3163 | concurrent TRIM commands issued to the device is controlled by the | |
3164 | \fBzfs_vdev_trim_min_active\fR and \fBzfs_vdev_trim_max_active\fR module | |
3165 | options. | |
3166 | .sp | |
3167 | Default value: \fB10\fR. | |
3168 | .RE | |
3169 | ||
3170 | .sp | |
3171 | .ne 2 | |
3172 | .na | |
67709516 | 3173 | \fBzfs_trim_txg_batch\fR (uint) |
1b939560 BB |
3174 | .ad |
3175 | .RS 12n | |
3176 | The number of transaction groups worth of frees which should be aggregated | |
3177 | before TRIM operations are issued to the device. This setting represents a | |
3178 | trade-off between issuing larger, more efficient TRIM operations and the | |
3179 | delay before the recently trimmed space is available for use by the device. | |
3180 | .sp | |
3181 | Increasing this value will allow frees to be aggregated for a longer time. | |
3182 | This will result is larger TRIM operations and potentially increased memory | |
3183 | usage. Decreasing this value will have the opposite effect. The default | |
3184 | value of 32 was determined to be a reasonable compromise. | |
3185 | .sp | |
3186 | Default value: \fB32\fR. | |
3187 | .RE | |
3188 | ||
29714574 TF |
3189 | .sp |
3190 | .ne 2 | |
3191 | .na | |
3192 | \fBzfs_txg_history\fR (int) | |
3193 | .ad | |
3194 | .RS 12n | |
379ca9cf OF |
3195 | Historical statistics for the last N txgs will be available in |
3196 | \fB/proc/spl/kstat/zfs/<pool>/txgs\fR | |
29714574 | 3197 | .sp |
ca85d690 | 3198 | Default value: \fB0\fR. |
29714574 TF |
3199 | .RE |
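.sp
A minimal sketch of enabling and reading txg history at runtime (the pool
name \fBtank\fR is hypothetical):
.nf

  # Retain statistics for the last 100 txgs.
  echo 100 > /sys/module/zfs/parameters/zfs_txg_history

  # Inspect per-txg statistics for the pool.
  cat /proc/spl/kstat/zfs/tank/txgs

.fi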

.sp
.ne 2
.na
\fBzfs_txg_timeout\fR (int)
.ad
.RS 12n
Flush dirty data to disk at least every N seconds (maximum txg duration).
.sp
Default value: \fB5\fR.
.RE
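.sp
Like the other parameters in this page, this can also be set persistently
at module load time. A sketch, assuming the conventional \fBmodprobe.d\fR
mechanism and an illustrative value of 10 seconds:
.nf

  # /etc/modprobe.d/zfs.conf
  options zfs zfs_txg_timeout=10

.fi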

.sp
.ne 2
.na
\fBzfs_vdev_aggregate_trim\fR (int)
.ad
.RS 12n
Allow TRIM I/Os to be aggregated. This is normally not helpful because
the extents to be trimmed will already have been aggregated by the
metaslab. This option is provided for debugging and performance analysis.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_aggregation_limit\fR (int)
.ad
.RS 12n
Max vdev I/O aggregation size.
.sp
Default value: \fB1,048,576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_aggregation_limit_non_rotating\fR (int)
.ad
.RS 12n
Max vdev I/O aggregation size for non-rotating media.
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_cache_bshift\fR (int)
.ad
.RS 12n
Shift size to inflate reads to.
.sp
Default value: \fB16\fR (effectively 65536).
.RE

.sp
.ne 2
.na
\fBzfs_vdev_cache_max\fR (int)
.ad
.RS 12n
Inflate reads smaller than this value to meet the \fBzfs_vdev_cache_bshift\fR
size (default 64k).
.sp
Default value: \fB16384\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_cache_size\fR (int)
.ad
.RS 12n
Total size of the per-disk cache in bytes.
.sp
Currently this feature is disabled as it has been found to not be helpful
for performance and in some cases harmful.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_rotating_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation
when an I/O immediately follows its predecessor on rotational vdevs, for
the purpose of selecting the least busy mirror member.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_rotating_seek_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os
within this range that do not immediately follow the previous I/O are
incremented by half of this value.
.sp
Default value: \fB5\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_rotating_seek_offset\fR (int)
.ad
.RS 12n
The maximum distance for the last queued I/O in which the balancing algorithm
considers an I/O to have locality.
See the section "ZFS I/O SCHEDULER".
.sp
Default value: \fB1048576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_non_rotating_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member on non-rotational vdevs
when I/Os do not immediately follow one another.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_mirror_non_rotating_seek_inc\fR (int)
.ad
.RS 12n
A number by which the balancing algorithm increments the load calculation for
the purpose of selecting the least busy mirror member when an I/O lacks
locality as defined by \fBzfs_vdev_mirror_rotating_seek_offset\fR. I/Os
within this range that do not immediately follow the previous I/O are
incremented by half of this value.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_read_gap_limit\fR (int)
.ad
.RS 12n
Aggregate read I/O operations if the gap on-disk between them is within this
threshold.
.sp
Default value: \fB32,768\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_write_gap_limit\fR (int)
.ad
.RS 12n
Aggregate write I/O operations if the gap on-disk between them is within this
threshold.
.sp
Default value: \fB4,096\fR.
.RE

.sp
.ne 2
.na
\fBzfs_vdev_raidz_impl\fR (string)
.ad
.RS 12n
Parameter for selecting the raidz parity implementation to use.

Options marked (always) below may be selected on module load as they are
supported on all systems.
The remaining options may only be set after the module is loaded, as they
are available only if the implementations are compiled in and supported
on the running system.

Once the module is loaded, the content of
/sys/module/zfs/parameters/zfs_vdev_raidz_impl will show available options
with the currently selected one enclosed in [].
Possible options are:
  fastest  - (always) implementation selected using built-in benchmark
  original - (always) original raidz implementation
  scalar   - (always) scalar raidz implementation
  sse2     - implementation using SSE2 instruction set (64bit x86 only)
  ssse3    - implementation using SSSE3 instruction set (64bit x86 only)
  avx2     - implementation using AVX2 instruction set (64bit x86 only)
  avx512f  - implementation using AVX512F instruction set (64bit x86 only)
  avx512bw - implementation using AVX512F & AVX512BW instruction sets (64bit x86 only)
  aarch64_neon - implementation using NEON (Aarch64/64 bit ARMv8 only)
  aarch64_neonx2 - implementation using NEON with more unrolling (Aarch64/64 bit ARMv8 only)
  powerpc_altivec - implementation using Altivec (PowerPC only)
.sp
Default value: \fBfastest\fR.
.RE
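.sp
For example (a sketch; which options appear depends on the hardware and how
the module was built), the selection can be queried and changed at runtime:
.nf

  # Show available implementations; the active one is enclosed in [].
  cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl

  # Explicitly select the scalar implementation.
  echo scalar > /sys/module/zfs/parameters/zfs_vdev_raidz_impl

.fi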

.sp
.ne 2
.na
\fBzfs_vdev_scheduler\fR (charp)
.ad
.RS 12n
\fBDEPRECATED\fR: This option exists for compatibility with older user
configurations. It does nothing except print a warning to the kernel log if
set.
.sp
.RE

.sp
.ne 2
.na
\fBzfs_zevent_cols\fR (int)
.ad
.RS 12n
When zevents are logged to the console use this as the word wrap width.
.sp
Default value: \fB80\fR.
.RE

.sp
.ne 2
.na
\fBzfs_zevent_console\fR (int)
.ad
.RS 12n
Log events to the console.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzfs_zevent_len_max\fR (int)
.ad
.RS 12n
Max event queue length. A value of 0 will result in a calculated value which
increases with the number of CPUs in the system (minimum 64 events). Events
in the queue can be viewed with the \fBzpool events\fR command.
.sp
Default value: \fB0\fR.
.RE
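.sp
A brief usage sketch of inspecting the event queue:
.nf

  # List queued events; -v prints the full payload of each event.
  zpool events -v

.fi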

.sp
.ne 2
.na
\fBzfs_zil_clean_taskq_maxalloc\fR (int)
.ad
.RS 12n
The maximum number of taskq entries that are allowed to be cached. When this
limit is exceeded transaction records (itxs) will be cleaned synchronously.
.sp
Default value: \fB1048576\fR.
.RE

.sp
.ne 2
.na
\fBzfs_zil_clean_taskq_minalloc\fR (int)
.ad
.RS 12n
The number of taskq entries that are pre-populated when the taskq is first
created and are immediately available for use.
.sp
Default value: \fB1024\fR.
.RE

.sp
.ne 2
.na
\fBzfs_zil_clean_taskq_nthr_pct\fR (int)
.ad
.RS 12n
This controls the number of threads used by the dp_zil_clean_taskq. The default
value of 100% will create a maximum of one thread per cpu.
.sp
Default value: \fB100\fR%.
.RE

.sp
.ne 2
.na
\fBzil_maxblocksize\fR (int)
.ad
.RS 12n
This sets the maximum block size used by the ZIL. On very fragmented pools,
lowering this (typically to 36KB) can improve performance.
.sp
Default value: \fB131072\fR (128KB).
.RE

.sp
.ne 2
.na
\fBzil_nocacheflush\fR (int)
.ad
.RS 12n
Disable the cache flush commands that are normally sent to the disk(s) by
the ZIL after an LWB write has completed. Setting this will cause ZIL
corruption on power loss if a volatile out-of-order write cache is enabled.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzil_replay_disable\fR (int)
.ad
.RS 12n
Disable intent logging replay. Replay can be disabled for recovery from a
corrupted ZIL.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzil_slog_bulk\fR (ulong)
.ad
.RS 12n
Limit SLOG write size per commit executed with synchronous priority.
Any writes above that will be executed with lower (asynchronous) priority
to limit potential SLOG device abuse by a single active ZIL writer.
.sp
Default value: \fB786,432\fR.
.RE

.sp
.ne 2
.na
\fBzio_deadman_log_all\fR (int)
.ad
.RS 12n
If non-zero, the zio deadman will produce debugging messages (see
\fBzfs_dbgmsg_enable\fR) for all zios, rather than only for leaf
zios possessing a vdev. This is meant to be used by developers to gain
diagnostic information for hang conditions which don't involve a mutex
or other locking primitive; typically conditions in which a thread in
the zio pipeline is looping indefinitely.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzio_decompress_fail_fraction\fR (int)
.ad
.RS 12n
If non-zero, this value represents the denominator of the probability that zfs
should induce a decompression failure. For instance, for a 5% decompression
failure rate, this value should be set to 20.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzio_slow_io_ms\fR (int)
.ad
.RS 12n
An I/O operation taking more than \fBzio_slow_io_ms\fR milliseconds to
complete is marked as a slow I/O. Each slow I/O causes a delay zevent. Slow
I/O counters can be seen with "zpool status -s".
.sp
Default value: \fB30,000\fR.
.RE
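.sp
A brief usage sketch (the pool name \fBtank\fR is hypothetical):
.nf

  # Show per-vdev slow I/O counters alongside the usual status output.
  zpool status -s tank

.fi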

.sp
.ne 2
.na
\fBzio_dva_throttle_enabled\fR (int)
.ad
.RS 12n
Throttle block allocations in the I/O pipeline. This allows for
dynamic allocation distribution when devices are imbalanced.
When enabled, the maximum number of pending allocations per top-level vdev
is limited by \fBzfs_vdev_queue_depth_pct\fR.
.sp
Default value: \fB1\fR.
.RE

.sp
.ne 2
.na
\fBzio_requeue_io_start_cut_in_line\fR (int)
.ad
.RS 12n
Prioritize requeued I/O.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzio_taskq_batch_pct\fR (uint)
.ad
.RS 12n
Percentage of online CPUs (or CPU cores, etc) which will run a worker thread
for I/O. These workers are responsible for I/O work such as compression and
checksum calculations. A fractional number of CPUs will be rounded down.
.sp
The default value of 75 was chosen to avoid using all CPUs which can result in
latency issues and inconsistent application performance, especially when high
compression is enabled.
.sp
Default value: \fB75\fR.
.RE

.sp
.ne 2
.na
\fBzvol_inhibit_dev\fR (uint)
.ad
.RS 12n
Do not create zvol device nodes. This may slightly improve startup time on
systems with a very large number of zvols.
.sp
Use \fB1\fR for yes and \fB0\fR for no (default).
.RE

.sp
.ne 2
.na
\fBzvol_major\fR (uint)
.ad
.RS 12n
Major number for zvol block devices.
.sp
Default value: \fB230\fR.
.RE

.sp
.ne 2
.na
\fBzvol_max_discard_blocks\fR (ulong)
.ad
.RS 12n
Discard (aka TRIM) operations done on zvols will be done in batches of this
many blocks, where block size is determined by the \fBvolblocksize\fR property
of a zvol.
.sp
Default value: \fB16,384\fR.
.RE

.sp
.ne 2
.na
\fBzvol_prefetch_bytes\fR (uint)
.ad
.RS 12n
When adding a zvol to the system prefetch \fBzvol_prefetch_bytes\fR
from the start and end of the volume. Prefetching these regions
of the volume is desirable because they are likely to be accessed
immediately by \fBblkid(8)\fR or by the kernel scanning for a partition
table.
.sp
Default value: \fB131,072\fR.
.RE

.sp
.ne 2
.na
\fBzvol_request_sync\fR (uint)
.ad
.RS 12n
When processing I/O requests for a zvol submit them synchronously. This
effectively limits the queue depth to 1 for each I/O submitter. When set
to 0 requests are handled asynchronously by a thread pool. The number of
requests which can be handled concurrently is controlled by \fBzvol_threads\fR.
.sp
Default value: \fB0\fR.
.RE

.sp
.ne 2
.na
\fBzvol_threads\fR (uint)
.ad
.RS 12n
Max number of threads which can handle zvol I/O requests concurrently.
.sp
Default value: \fB32\fR.
.RE

.sp
.ne 2
.na
\fBzvol_volmode\fR (uint)
.ad
.RS 12n
Defines zvol block device behaviour when \fBvolmode\fR is set to \fBdefault\fR.
Valid values are \fB1\fR (full), \fB2\fR (dev) and \fB3\fR (none).
.sp
Default value: \fB1\fR.
.RE

.SH ZFS I/O SCHEDULER
ZFS issues I/O operations to leaf vdevs to satisfy and complete I/Os.
The I/O scheduler determines when and in what order those operations are
issued. The I/O scheduler divides operations into five I/O classes
prioritized in the following order: sync read, sync write, async read,
async write, and scrub/resilver. Each queue defines the minimum and
maximum number of concurrent operations that may be issued to the
device. In addition, the device has an aggregate maximum,
\fBzfs_vdev_max_active\fR. Note that the sum of the per-queue minimums
must not exceed the aggregate maximum. If the sum of the per-queue
maximums exceeds the aggregate maximum, then the number of active I/Os
may reach \fBzfs_vdev_max_active\fR, in which case no further I/Os will
be issued regardless of whether all per-queue minimums have been met.
.sp
For many physical devices, throughput increases with the number of
concurrent operations, but latency typically suffers. Further, physical
devices typically have a limit at which more concurrent operations have no
effect on throughput or can actually cause it to decrease.
.sp
The scheduler selects the next operation to issue by first looking for an
I/O class whose minimum has not been satisfied. Once all are satisfied and
the aggregate maximum has not been hit, the scheduler looks for classes
whose maximum has not been satisfied. Iteration through the I/O classes is
done in the order specified above. No further operations are issued if the
aggregate maximum number of concurrent operations has been hit or if there
are no operations queued for an I/O class that has not hit its maximum.
Every time an I/O is queued or an operation completes, the I/O scheduler
looks for new operations to issue.
.sp
In general, smaller max_active's will lead to lower latency of synchronous
operations. Larger max_active's may lead to higher overall throughput,
depending on underlying storage.
.sp
The ratio of the queues' max_actives determines the balance of performance
between reads, writes, and scrubs. E.g., increasing
\fBzfs_vdev_scrub_max_active\fR will cause the scrub or resilver to complete
more quickly, but reads and writes will have higher latency and lower
throughput.
.sp
All I/O classes have a fixed maximum number of outstanding operations
except for the async write class. Asynchronous writes represent the data
that is committed to stable storage during the syncing stage for
transaction groups. Transaction groups enter the syncing state
periodically so the number of queued async writes will quickly burst up
and then bleed down to zero. Rather than servicing them as quickly as
possible, the I/O scheduler changes the maximum number of active async
write I/Os according to the amount of dirty data in the pool. Since
both throughput and latency typically increase with the number of
concurrent operations issued to physical devices, reducing the
burstiness in the number of concurrent operations also stabilizes the
response time of operations from other -- and in particular synchronous
-- queues. In broad strokes, the I/O scheduler will issue more
concurrent operations from the async write queue as there's more dirty
data in the pool.
.sp
Async Writes
.sp
The number of concurrent operations issued for the async write I/O class
follows a piece-wise linear function defined by a few adjustable points.
.nf

       |              o---------| <-- zfs_vdev_async_write_max_active
  ^    |             /^         |
  |    |            / |         |
active |           /  |         |
 I/O   |          /   |         |
count  |         /    |         |
       |        /     |         |
       |-------o      |         | <-- zfs_vdev_async_write_min_active
      0|_______^______|_________|
       0%      |      |       100% of zfs_dirty_data_max
               |      |
               |      `-- zfs_vdev_async_write_active_max_dirty_percent
               `--------- zfs_vdev_async_write_active_min_dirty_percent

.fi
Until the amount of dirty data exceeds a minimum percentage of the dirty
data allowed in the pool, the I/O scheduler will limit the number of
concurrent operations to the minimum. As that threshold is crossed, the
number of concurrent operations issued increases linearly to the maximum at
the specified maximum percentage of the dirty data allowed in the pool.
.sp
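As a worked example with assumed values (hypothetical, chosen only to keep
the arithmetic simple): with \fBzfs_vdev_async_write_min_active\fR=1,
\fBzfs_vdev_async_write_max_active\fR=10, a minimum dirty percentage of 30%
and a maximum of 60%, a pool sitting at 45% of \fBzfs_dirty_data_max\fR is
halfway up the slope:
.nf

  active = min + (max - min) * (dirty% - min%) / (max% - min%)
         = 1 + (10 - 1) * (45 - 30) / (60 - 30)
         = 1 + 9 * 0.5 = 5.5, i.e. roughly 5 concurrent async writes

.fi
.sp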
Ideally, the amount of dirty data on a busy pool will stay in the sloped
part of the function between \fBzfs_vdev_async_write_active_min_dirty_percent\fR
and \fBzfs_vdev_async_write_active_max_dirty_percent\fR. If it exceeds the
maximum percentage, this indicates that the rate of incoming data is
greater than the rate that the backend storage can handle. In this case, we
must further throttle incoming writes, as described in the next section.

.SH ZFS TRANSACTION DELAY
We delay transactions when we've determined that the backend storage
isn't able to accommodate the rate of incoming writes.
.sp
If there is already a transaction waiting, we delay relative to when
that transaction will finish waiting. This way the calculated delay time
is independent of the number of threads concurrently executing
transactions.
.sp
If we are the only waiter, wait relative to when the transaction
started, rather than the current time. This credits the transaction for
"time already served", e.g. reading indirect blocks.
.sp
The minimum time for a transaction to take is calculated as:
.nf
  min_time = zfs_delay_scale * (dirty - min) / (max - dirty)
  min_time is then capped at 100 milliseconds.
.fi
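.sp
As a worked example (the pool-wide numbers are illustrative;
\fBzfs_delay_scale\fR is expressed in nanoseconds): with
\fBzfs_dirty_data_max\fR = 4GB, delays beginning at 60% dirty
(min = 2.4GB), and \fBzfs_delay_scale\fR = 500,000, a pool holding 3.2GB
of dirty data -- the midpoint of the delay range -- would see:
.nf

  min_time = 500000 * (3.2GB - 2.4GB) / (4GB - 3.2GB)
           = 500000 ns = 500us of delay per transaction

.fi
which matches the 500us midpoint shown in the curves below.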
.sp
The delay has two degrees of freedom that can be adjusted via tunables. The
percentage of dirty data at which we start to delay is defined by
\fBzfs_delay_min_dirty_percent\fR. This should typically be at or above
\fBzfs_vdev_async_write_active_max_dirty_percent\fR so that we only start to
delay after writing at full speed has failed to keep up with the incoming
write rate. The scale of the curve is defined by \fBzfs_delay_scale\fR.
Roughly speaking, this variable determines the amount of delay at the
midpoint of the curve.
.sp
.nf
delay
 10ms +-------------------------------------------------------------*+
      |                                                             *|
  9ms +                                                             *+
      |                                                             *|
  8ms +                                                             *+
      |                                                            * |
  7ms +                                                            * +
      |                                                            * |
  6ms +                                                            * +
      |                                                            * |
  5ms +                                                            * +
      |                                                            * |
  4ms +                                                            * +
      |                                                            * |
  3ms +                                                            * +
      |                                                            * |
  2ms +                                              (midpoint)    * +
      |                                                  |       **  |
  1ms +                                                  v   ***     +
      |             zfs_delay_scale ---------->      ********        |
    0 +-------------------------------------*********----------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note that since the delay is added to the outstanding time remaining on the
most recent transaction, the delay is effectively the inverse of IOPS.
Here the midpoint of 500us translates to 2000 IOPS. The shape of the curve
was chosen such that small changes in the amount of accumulated dirty data
in the first 3/4 of the curve yield relatively small differences in the
amount of delay.
.sp
The effects can be easier to understand when the amount of delay is
represented on a log scale:
.sp
.nf
delay
100ms +-------------------------------------------------------------++
      +                                                              +
      |                                                              |
      +                                                             *+
 10ms +                                                             *+
      +                                                           **  +
      |                                              (midpoint)  **   |
      +                                                  |     **     +
  1ms +                                                  v ****       +
      +             zfs_delay_scale ---------->      *****            +
      |                                          ****                 |
      +                                      ****                     +
100us +                                    **                         +
      +                                   *                           +
      |                                  *                            |
      +                                 *                             +
 10us +                                 *                             +
      +                                                               +
      |                                                               |
      +                                                               +
      +---------------------------------------------------------------+
      0%                    <- zfs_dirty_data_max ->               100%
.fi
.sp
Note here that only as the amount of dirty data approaches its limit does
the delay start to increase rapidly. The goal of a properly tuned system
should be to keep the amount of dirty data out of that range by first
ensuring that the appropriate limits are set for the I/O scheduler to reach
optimal throughput on the backend storage, and then by changing the value
of \fBzfs_delay_scale\fR to increase the steepness of the curve.
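.sp
A closing sketch of that tuning workflow (the values are illustrative only):
.nf

  # Inspect the current transaction delay tunables.
  cat /sys/module/zfs/parameters/zfs_delay_min_dirty_percent
  cat /sys/module/zfs/parameters/zfs_delay_scale

  # Double the midpoint delay, steepening the upper part of the curve.
  echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale

.fi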