.\"
.\" Copyright (c) 2013 by Turbo Fredriksson <turbo@bayour.com>. All rights reserved.
.\" Copyright (c) 2019, 2021 by Delphix. All rights reserved.
.\" Copyright (c) 2019 Datto Inc.
.\" The contents of this file are subject to the terms of the Common Development
.\" and Distribution License (the "License"). You may not use this file except
.\" in compliance with the License. You can obtain a copy of the license at
.\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
.\"
.\" See the License for the specific language governing permissions and
.\" limitations under the License. When distributing Covered Code, include this
.\" CDDL HEADER in each file and include the License file at
.\" usr/src/OPENSOLARIS.LICENSE. If applicable, add the following below this
.\" CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your
.\" own identifying information:
.\" Portions Copyright [yyyy] [name of copyright owner]
.\"
.Dd January 10, 2023
.Dt ZFS 4
.Os
.
.Sh NAME
.Nm zfs
.Nd tuning of the ZFS kernel module
.
.Sh DESCRIPTION
The ZFS module supports these parameters:
.Bl -tag -width Ds
.It Sy dbuf_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the dbuf cache.
The target size is the lesser of this value and
.No 1/2^ Ns Sy dbuf_cache_shift Pq 1/32nd
of the target ARC size.
The behavior of the dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
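.Pp
As a worked example (the ARC size is assumed and the kstat path is
Linux-specific): with a target ARC size of 8 GiB and the default
.Sy dbuf_cache_shift Ns = Ns Sy 5 ,
the dbuf cache target is 8 GiB/2^5 = 256 MiB.
The cache-related counters can be read from the kstat:
.Bd -literal -compact
# grep cache /proc/spl/kstat/zfs/dbufstats
.Ed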
.
.It Sy dbuf_metadata_cache_max_bytes Ns = Ns Sy UINT64_MAX Ns B Pq u64
Maximum size in bytes of the metadata dbuf cache.
The target size is the lesser of this value and
.No 1/2^ Ns Sy dbuf_metadata_cache_shift Pq 1/64th
of the target ARC size.
The behavior of the metadata dbuf cache and its associated settings
can be observed via the
.Pa /proc/spl/kstat/zfs/dbufstats
kstat.
.
.It Sy dbuf_cache_hiwater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage over
.Sy dbuf_cache_max_bytes
when dbufs must be evicted directly.
.
.It Sy dbuf_cache_lowater_pct Ns = Ns Sy 10 Ns % Pq uint
The percentage below
.Sy dbuf_cache_max_bytes
when the evict thread stops evicting dbufs.
.
.It Sy dbuf_cache_shift Ns = Ns Sy 5 Pq uint
Set the size of the dbuf cache
.Pq Sy dbuf_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dbuf_metadata_cache_shift Ns = Ns Sy 6 Pq uint
Set the size of the dbuf metadata cache
.Pq Sy dbuf_metadata_cache_max_bytes
to a log2 fraction of the target ARC size.
.
.It Sy dbuf_mutex_cache_shift Ns = Ns Sy 0 Pq uint
Set the size of the mutex array for the dbuf cache.
When set to
.Sy 0
the array is dynamically sized based on total system memory.
.
.It Sy dmu_object_alloc_chunk_shift Ns = Ns Sy 7 Po 128 Pc Pq uint
dnode slots allocated in a single operation as a power of 2.
The default value minimizes lock contention for the bulk operation performed.
.
.It Sy dmu_prefetch_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint
Limit the amount of data that can be prefetched with one call to this many bytes.
This helps to limit the amount of memory that can be used by prefetching.
.
.It Sy ignore_hole_birth Pq int
Alias for
.Sy send_holes_without_birth_time .
.
.It Sy l2arc_feed_again Ns = Ns Sy 1 Ns | Ns 0 Pq int
Turbo L2ARC warm-up.
When the L2ARC is cold, the fill interval will be set as short as possible.
.
.It Sy l2arc_feed_min_ms Ns = Ns Sy 200 Pq u64
Min feed interval in milliseconds.
Requires
.Sy l2arc_feed_again Ns = Ns Ar 1
and only applies while the fill interval is being shortened for a cold L2ARC.
.
.It Sy l2arc_feed_secs Ns = Ns Sy 1 Pq u64
Seconds between L2ARC writing.
.
.It Sy l2arc_headroom Ns = Ns Sy 2 Pq u64
How far through the ARC lists to search for L2ARC cacheable content,
expressed as a multiplier of
.Sy l2arc_write_max .
ARC persistence across reboots can be achieved with persistent L2ARC
by setting this parameter to
.Sy 0 ,
allowing the full length of ARC lists to be searched for cacheable content.
.
.It Sy l2arc_headroom_boost Ns = Ns Sy 200 Ns % Pq u64
Scales
.Sy l2arc_headroom
by this percentage when L2ARC contents are being successfully compressed
before writing.
A value of
.Sy 100
disables this feature.
.
.It Sy l2arc_exclude_special Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether buffers present on special vdevs are eligible for caching
into L2ARC.
If set to 1, exclude dbufs on special vdevs from being cached to L2ARC.
.
.It Sy l2arc_mfuonly Ns = Ns Sy 0 Ns | Ns 1 Pq int
Controls whether only MFU metadata and data are cached from ARC into L2ARC.
This may be desired to avoid wasting space on L2ARC when reading/writing large
amounts of data that are not expected to be accessed more than once.
.Pp
The default is off,
meaning both MRU and MFU data and metadata are cached.
When turning off this feature, some MRU buffers will still be present
in ARC and eventually cached on L2ARC.
.No If Sy l2arc_noprefetch Ns = Ns Sy 0 ,
some prefetched buffers will be cached to L2ARC, and those might later
transition to MRU, in which case the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
Regardless of
.Sy l2arc_noprefetch ,
some MFU buffers might be evicted from ARC,
accessed later on as prefetches and transition to MRU as prefetches.
If accessed again they are counted as MRU and the
.Sy l2arc_mru_asize No arcstat will not be Sy 0 .
.Pp
The ARC status of L2ARC buffers when they were first cached in
L2ARC can be seen in the
.Sy l2arc_mru_asize , Sy l2arc_mfu_asize , No and Sy l2arc_prefetch_asize
arcstats when importing the pool or onlining a cache
device if persistent L2ARC is enabled.
.Pp
The
.Sy evict_l2_eligible_mru
arcstat does not take into account if this option is enabled as the information
provided by the
.Sy evict_l2_eligible_m[rf]u
arcstats can be used to decide if toggling this option is appropriate
for the current workload.
.
.It Sy l2arc_meta_percent Ns = Ns Sy 33 Ns % Pq uint
Percent of ARC size allowed for L2ARC-only headers.
Since L2ARC buffers are not evicted on memory pressure,
too many headers on a system with an irrationally large L2ARC
can render it slow or unusable.
This parameter limits L2ARC writes and rebuilds to achieve the target.
.
.It Sy l2arc_trim_ahead Ns = Ns Sy 0 Ns % Pq u64
Trims ahead of the current write size
.Pq Sy l2arc_write_max
on L2ARC devices by this percentage of write size if we have filled the device.
If set to
.Sy 100
we TRIM twice the space required to accommodate upcoming writes.
A minimum of
.Sy 64 MiB
will be trimmed.
It also enables TRIM of the whole L2ARC device upon creation
or addition to an existing pool or if the header of the device is
invalid upon importing a pool or onlining a cache device.
A value of
.Sy 0
disables TRIM on L2ARC altogether and is the default as it can put significant
stress on the underlying storage devices.
This will vary depending on how well the specific device handles these commands.
.
.It Sy l2arc_noprefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
Do not write buffers to L2ARC if they were prefetched but not used by
applications.
In case there are prefetched buffers in L2ARC and this option
is later set, we do not read the prefetched buffers from L2ARC.
Unsetting this option is useful for caching sequential reads from the
disks to L2ARC and serving those reads from L2ARC later on.
This may be beneficial in case the L2ARC device is significantly faster
in sequential reads than the disks of the pool.
.Pp
Use
.Sy 1
to disable and
.Sy 0
to enable caching/reading prefetches to/from L2ARC.
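.Pp
For example, on Linux the option can be toggled at runtime through the
module parameter file (a sketch; persistent settings belong in a
.Pa modprobe.d
snippet instead):
.Bd -literal -compact
# echo 0 > /sys/module/zfs/parameters/l2arc_noprefetch
.Ed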
.
.It Sy l2arc_norw Ns = Ns Sy 0 Ns | Ns 1 Pq int
No reads during writes.
.
.It Sy l2arc_write_boost Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Cold L2ARC devices will have
.Sy l2arc_write_max
increased by this amount while they remain cold.
.
.It Sy l2arc_write_max Ns = Ns Sy 8388608 Ns B Po 8 MiB Pc Pq u64
Max write bytes per interval.
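.Pp
As an illustrative sketch (the values are examples, not recommendations),
the fill rate of a cache device can be raised at runtime on Linux by
increasing both limits:
.Bd -literal -compact
# echo 33554432 > /sys/module/zfs/parameters/l2arc_write_max
# echo 33554432 > /sys/module/zfs/parameters/l2arc_write_boost
.Ed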
.
.It Sy l2arc_rebuild_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Rebuild the L2ARC when importing a pool (persistent L2ARC).
This can be disabled if there are problems importing a pool
or attaching an L2ARC device (e.g. the L2ARC device is slow
in reading stored log metadata, or the metadata
has become somehow fragmented/unusable).
.
.It Sy l2arc_rebuild_blocks_min_l2size Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Minimum size of an L2ARC device required in order to write log blocks in it.
The log blocks are used upon importing the pool to rebuild the persistent L2ARC.
.Pp
For L2ARC devices less than 1 GiB, the amount of data
.Fn l2arc_evict
evicts is significant compared to the amount of restored L2ARC data.
In this case, do not write log blocks in L2ARC in order not to waste space.
.
.It Sy metaslab_aliquot Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
Metaslab granularity, in bytes.
This is roughly similar to what would be referred to as the "stripe size"
in traditional RAID arrays.
In normal operation, ZFS will try to write this amount of data to each disk
before moving on to the next top-level vdev.
.
.It Sy metaslab_bias_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group biasing based on their vdevs' over- or under-utilization
relative to the pool.
.
.It Sy metaslab_force_ganging Ns = Ns Sy 16777217 Ns B Po 16 MiB + 1 B Pc Pq u64
Make some blocks above a certain size be gang blocks.
This option is used by the test suite to facilitate testing.
.
.It Sy zfs_ddt_zap_default_bs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP data block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_ddt_zap_default_ibs Ns = Ns Sy 15 Po 32 KiB Pc Pq int
Default DDT ZAP indirect block size as a power of 2.
Note that changing this after creating a DDT on the pool will not affect
existing DDTs, only newly created ones.
.
.It Sy zfs_default_bs Ns = Ns Sy 9 Po 512 B Pc Pq int
Default dnode block size as a power of 2.
.
.It Sy zfs_default_ibs Ns = Ns Sy 17 Po 128 KiB Pc Pq int
Default dnode indirect block size as a power of 2.
.
.It Sy zfs_history_output_max Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64
When attempting to log an output nvlist of an ioctl in the on-disk history,
the output will not be stored if it is larger than this size (in bytes).
This must be less than
.Sy DMU_MAX_ACCESS Pq 64 MiB .
This applies primarily to
.Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
.
.It Sy zfs_keep_log_spacemaps_at_export Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent log spacemaps from being destroyed during pool exports and destroys.
.
.It Sy zfs_metaslab_segment_weight_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable/disable segment-based metaslab selection.
.
.It Sy zfs_metaslab_switch_threshold Ns = Ns Sy 2 Pq int
When using segment-based metaslab selection, continue allocating
from the active metaslab until this many buckets
have been exhausted.
.
.It Sy metaslab_debug_load Ns = Ns Sy 0 Ns | Ns 1 Pq int
Load all metaslabs during pool import.
.
.It Sy metaslab_debug_unload Ns = Ns Sy 0 Ns | Ns 1 Pq int
Prevent metaslabs from being unloaded.
.
.It Sy metaslab_fragmentation_factor_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable use of the fragmentation metric in computing metaslab weights.
.
.It Sy metaslab_df_max_search Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint
Maximum distance to search forward from the last offset.
Without this limit, fragmented pools can see
.Em >100`000
iterations and
.Fn metaslab_block_picker
becomes the performance limiting factor on high-performance storage.
.Pp
With the default setting of
.Sy 16 MiB ,
we typically see less than
.Em 500
iterations, even with very fragmented
.Sy ashift Ns = Ns Sy 9
pools.
The maximum number of iterations possible is
.Sy metaslab_df_max_search / 2^(ashift+1) .
With the default setting of
.Sy 16 MiB
this is
.Em 16*1024 Pq with Sy ashift Ns = Ns Sy 9
or
.Em 2*1024 Pq with Sy ashift Ns = Ns Sy 12 .
.
.It Sy metaslab_df_use_largest_segment Ns = Ns Sy 0 Ns | Ns 1 Pq int
If not searching forward (due to
.Sy metaslab_df_max_search , metaslab_df_free_pct ,
.No or Sy metaslab_df_alloc_threshold ) ,
this tunable controls which segment is used.
If set, we will use the largest free segment.
If unset, we will use a segment of at least the requested size.
.
.It Sy zfs_metaslab_max_size_cache_sec Ns = Ns Sy 3600 Ns s Po 1 hour Pc Pq u64
When we unload a metaslab, we cache the size of the largest free chunk.
We use that cached size to determine whether or not to load a metaslab
for a given allocation.
As more frees accumulate in that metaslab while it's unloaded,
the cached max size becomes less and less accurate.
After a number of seconds controlled by this tunable,
we stop considering the cached max size and start
considering only the histogram instead.
.
.It Sy zfs_metaslab_mem_limit Ns = Ns Sy 25 Ns % Pq uint
When we are loading a new metaslab, we check the amount of memory being used
to store metaslab range trees.
If it is over a threshold, we attempt to unload the least recently used metaslab
to prevent the system from clogging all of its memory with range trees.
This tunable sets the percentage of total system memory that is the threshold.
.
.It Sy zfs_metaslab_try_hard_before_gang Ns = Ns Sy 0 Ns | Ns 1 Pq int
.Bl -item -compact
.It
If unset, we will first try normal allocation.
.It
If that fails then we will do a gang allocation.
.It
If that fails then we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.Pp
.Bl -item -compact
.It
If set, we will first try normal allocation.
.It
If that fails then we will do a "try hard" allocation.
.It
If that fails we will do a gang allocation.
.It
If that fails we will do a "try hard" gang allocation.
.It
If that fails then we will have a multi-layer gang block.
.El
.
.It Sy zfs_metaslab_find_max_tries Ns = Ns Sy 100 Pq uint
When not trying hard, we only consider this number of the best metaslabs.
This improves performance, especially when there are many metaslabs per vdev
and the allocation can't actually be satisfied
(so we would otherwise iterate all metaslabs).
.
.It Sy zfs_vdev_default_ms_count Ns = Ns Sy 200 Pq uint
When a vdev is added, target this number of metaslabs per top-level vdev.
.
.It Sy zfs_vdev_default_ms_shift Ns = Ns Sy 29 Po 512 MiB Pc Pq uint
Default lower limit for metaslab size.
.
.It Sy zfs_vdev_max_ms_shift Ns = Ns Sy 34 Po 16 GiB Pc Pq uint
Default upper limit for metaslab size.
.
.It Sy zfs_vdev_max_auto_ashift Ns = Ns Sy 14 Pq uint
Maximum ashift used when optimizing for logical \[->] physical sector size on
new top-level vdevs.
May be increased up to
.Sy ASHIFT_MAX Po 16 Pc ,
but this may negatively impact pool space efficiency.
.
.It Sy zfs_vdev_min_auto_ashift Ns = Ns Sy ASHIFT_MIN Po 9 Pc Pq uint
Minimum ashift used when creating new top-level vdevs.
.
.It Sy zfs_vdev_min_ms_count Ns = Ns Sy 16 Pq uint
Minimum number of metaslabs to create in a top-level vdev.
.
.It Sy vdev_validate_skip Ns = Ns Sy 0 Ns | Ns 1 Pq int
Skip label validation steps during pool import.
Changing is not recommended unless you know what you're doing
and are recovering a damaged label.
.
.It Sy zfs_vdev_ms_count_limit Ns = Ns Sy 131072 Po 128k Pc Pq uint
Practical upper limit of total metaslabs per top-level vdev.
.
.It Sy metaslab_preload_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable metaslab group preloading.
.
.It Sy metaslab_lba_weighting_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Give more weight to metaslabs with lower LBAs,
assuming they have greater bandwidth,
as is typically the case on a modern constant angular velocity disk drive.
.
.It Sy metaslab_unload_delay Ns = Ns Sy 32 Pq uint
After a metaslab is used, we keep it loaded for this many TXGs, to attempt to
reduce unnecessary reloading.
Note that both this many TXGs and
.Sy metaslab_unload_delay_ms
milliseconds must pass before unloading will occur.
.
.It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
After a metaslab is used, we keep it loaded for this many milliseconds,
to attempt to reduce unnecessary reloading.
Note that both this many milliseconds and
.Sy metaslab_unload_delay
TXGs must pass before unloading will occur.
.
.It Sy reference_history Ns = Ns Sy 3 Pq uint
Maximum reference holders being tracked when
.Sy reference_tracking_enable
is active.
.
.It Sy reference_tracking_enable Ns = Ns Sy 0 Ns | Ns 1 Pq int
Track reference holders to
.Sy refcount_t
objects (debug builds only).
.
.It Sy send_holes_without_birth_time Ns = Ns Sy 1 Ns | Ns 0 Pq int
When set, the
.Sy hole_birth
optimization will not be used, and all holes will always be sent during a
.Nm zfs Cm send .
This is useful if you suspect your datasets are affected by a bug in
.Sy hole_birth .
.
.It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
SPA config file.
.
.It Sy spa_asize_inflation Ns = Ns Sy 24 Pq uint
Multiplication factor used to estimate actual disk consumption from the
size of data being written.
The default value is a worst case estimate,
but lower values may be valid for a given pool depending on its configuration.
Pool administrators who understand the factors involved
may wish to specify a more realistic inflation factor,
particularly if they operate close to quota or capacity limits.
.
.It Sy spa_load_print_vdev_tree Ns = Ns Sy 0 Ns | Ns 1 Pq int
Whether to print the vdev tree in the debugging message buffer during pool
import.
.
.It Sy spa_load_verify_data Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse data blocks during an "extreme rewind"
.Pq Fl X
import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal skips non-metadata blocks.
It can be toggled once the
import has started to stop or start the traversal of non-metadata blocks.
.
.It Sy spa_load_verify_metadata Ns = Ns Sy 1 Ns | Ns 0 Pq int
Whether to traverse blocks during an "extreme rewind"
.Pq Fl X
pool import.
.Pp
An extreme rewind import normally performs a full traversal of all
blocks in the pool for verification.
If this parameter is unset, the traversal is not performed.
It can be toggled once the import has started to stop or start the traversal.
.
.It Sy spa_load_verify_shift Ns = Ns Sy 4 Po 1/16th Pc Pq uint
Sets the maximum number of bytes to consume during pool import to the log2
fraction of the target ARC size.
.
.It Sy spa_slop_shift Ns = Ns Sy 5 Po 1/32nd Pc Pq int
Normally, we don't allow the last
.Sy 3.2% Pq Sy 1/2^spa_slop_shift
of space in the pool to be consumed.
This ensures that we don't run the pool completely out of space,
due to unaccounted changes (e.g. to the MOS).
It also limits the worst-case time to allocate space.
If we have less than this amount of free space,
most ZPL operations (e.g. write, create) will return
.Sy ENOSPC .
.
.It Sy spa_upgrade_errlog_limit Ns = Ns Sy 0 Pq uint
Limits the number of on-disk error log entries that will be converted to the
new format when enabling the
.Sy head_errlog
feature.
The default is to convert all log entries.
.
.It Sy vdev_removal_max_span Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint
During top-level vdev removal, chunks of data are copied from the vdev
which may include free space in order to trade bandwidth for IOPS.
This parameter determines the maximum span of free space, in bytes,
which will be included as "unnecessary" data in a chunk of copied data.
.Pp
The default value here was chosen to align with
.Sy zfs_vdev_read_gap_limit ,
which is a similar concept when doing
regular reads (but there's no reason it has to be the same).
.
.It Sy vdev_file_logical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Logical ashift for file-based devices.
.
.It Sy vdev_file_physical_ashift Ns = Ns Sy 9 Po 512 B Pc Pq u64
Physical ashift for file-based devices.
.
.It Sy zap_iterate_prefetch Ns = Ns Sy 1 Ns | Ns 0 Pq int
If set, when we start iterating over a ZAP object,
prefetch the entire object (all leaf blocks).
However, this is limited by
.Sy dmu_prefetch_max .
.
.It Sy zap_micro_max_size Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq int
Maximum micro ZAP size.
A micro ZAP is upgraded to a fat ZAP once it grows beyond the specified size.
.
.It Sy zfetch_min_distance Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Min bytes to prefetch per stream.
Prefetch distance starts from the demand access size and quickly grows to
this value, doubling on each hit.
After that it may grow further by 1/8 per hit, but only if some prefetches
since last time haven't completed in time to satisfy the demand request, i.e.
the prefetch depth didn't cover the read latency or the pool got saturated.
.
.It Sy zfetch_max_distance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch per stream.
.
.It Sy zfetch_max_idistance Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq uint
Max bytes to prefetch indirects for per stream.
.
.It Sy zfetch_max_streams Ns = Ns Sy 8 Pq uint
Max number of streams per zfetch (prefetch streams per file).
.
.It Sy zfetch_min_sec_reap Ns = Ns Sy 1 Pq uint
Min time before an inactive prefetch stream can be reclaimed.
.
.It Sy zfetch_max_sec_reap Ns = Ns Sy 2 Pq uint
Max time before an inactive prefetch stream can be deleted.
.
.It Sy zfs_abd_scatter_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enables the use of scatter/gather lists in the ARC; when disabled, all
allocations are forced to be linear in kernel memory.
Disabling can improve performance in some code paths
at the expense of fragmented kernel memory.
.
.It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
Maximum number of consecutive memory pages allocated in a single block for
scatter/gather lists.
.Pp
The value of
.Sy MAX_ORDER
depends on kernel configuration.
.
.It Sy zfs_abd_scatter_min_size Ns = Ns Sy 1536 Ns B Po 1.5 KiB Pc Pq uint
This is the minimum allocation size that will use scatter (page-based) ABDs.
Smaller allocations will use linear ABDs.
.
.It Sy zfs_arc_dnode_limit Ns = Ns Sy 0 Ns B Pq u64
When the number of bytes consumed by dnodes in the ARC exceeds this number of
bytes, try to unpin some of it in response to demand for non-metadata.
This value acts as a ceiling to the amount of dnode metadata, and defaults to
.Sy 0 ,
which indicates that a percentage of the ARC meta buffers,
determined by
.Sy zfs_arc_dnode_limit_percent ,
may be used for dnodes.
.
.It Sy zfs_arc_dnode_limit_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage that can be consumed by dnodes of ARC meta buffers.
.Pp
See also
.Sy zfs_arc_dnode_limit ,
which serves a similar purpose but has a higher priority if nonzero.
.
.It Sy zfs_arc_dnode_reduce_percent Ns = Ns Sy 10 Ns % Pq u64
Percentage of ARC dnodes to try to scan in response to demand for non-metadata
when the number of bytes consumed by dnodes exceeds
.Sy zfs_arc_dnode_limit .
.
.It Sy zfs_arc_average_blocksize Ns = Ns Sy 8192 Ns B Po 8 KiB Pc Pq uint
The ARC's buffer hash table is sized based on the assumption of an average
block size of this value.
This works out to roughly 1 MiB of hash table per 1 GiB of physical memory
with 8-byte pointers.
For configurations with a known larger average block size,
this value can be increased to reduce the memory footprint.
.
.It Sy zfs_arc_eviction_pct Ns = Ns Sy 200 Ns % Pq uint
When
.Fn arc_is_overflowing ,
.Fn arc_get_data_impl
waits for this percent of the requested amount of data to be evicted.
For example, by default, for every
.Em 2 KiB
that's evicted,
.Em 1 KiB
of it may be "reused" by a new allocation.
Since this is above
.Sy 100 Ns % ,
it ensures that progress is made towards getting
.Sy arc_size No under Sy arc_c .
Since this is finite, it ensures that allocations can still happen,
even during the potentially long time that
.Sy arc_size No is more than Sy arc_c .
.
.It Sy zfs_arc_evict_batch_limit Ns = Ns Sy 10 Pq uint
Number of ARC headers to evict per sub-list before proceeding to another
sub-list.
This batch-style operation prevents entire sub-lists from being evicted at once
but comes at a cost of additional unlocking and locking.
.
.It Sy zfs_arc_grow_retry Ns = Ns Sy 0 Ns s Pq uint
If set to a nonzero value, it will replace the
.Sy arc_grow_retry
value with this value.
The
.Sy arc_grow_retry
.No value Pq default Sy 5 Ns s
is the number of seconds the ARC will wait before
trying to resume growth after a memory pressure event.
.
.It Sy zfs_arc_lotsfree_percent Ns = Ns Sy 10 Ns % Pq int
Throttle I/O when free system memory drops below this percentage of total
system memory.
Setting this value to
.Sy 0
will disable the throttle.
.
.It Sy zfs_arc_max Ns = Ns Sy 0 Ns B Pq u64
Max size of ARC in bytes.
If
.Sy 0 ,
then the max size of ARC is determined by the amount of system memory installed.
Under Linux, half of system memory will be used as the limit.
Under
.Fx ,
the larger of
.Sy all_system_memory No \- Sy 1 GiB
and
.Sy 5/8 No \(mu Sy all_system_memory
will be used as the limit.
This value must be at least
.Sy 67108864 Ns B Pq 64 MiB .
.Pp
This value can be changed dynamically, with some caveats.
It cannot be set back to
.Sy 0
while running, and reducing it below the current ARC size will not cause
the ARC to shrink without memory pressure to induce shrinking.
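.Pp
For example (an illustrative sketch, not a recommendation): capping the ARC
at 4 GiB on Linux, either dynamically or persistently at module load:
.Bd -literal -compact
# echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max
# echo "options zfs zfs_arc_max=4294967296" > /etc/modprobe.d/zfs.conf
.Ed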
.
.It Sy zfs_arc_meta_balance Ns = Ns Sy 500 Pq uint
Balance between metadata and data on ghost hits.
Values above 100 increase metadata caching by proportionally reducing effect
of ghost data hits on target data/metadata rate.
.
.It Sy zfs_arc_min Ns = Ns Sy 0 Ns B Pq u64
Min size of ARC in bytes.
.No If set to Sy 0 , arc_c_min
will default to consuming the larger of
.Sy 32 MiB
and
.Sy all_system_memory No / Sy 32 .
.
.It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
Minimum time prefetched blocks are locked in the ARC.
.
.It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
Minimum time "prescient prefetched" blocks are locked in the ARC.
These blocks are meant to be prefetched fairly aggressively ahead of
the code that may use them.
.
.It Sy zfs_arc_prune_task_threads Ns = Ns Sy 1 Pq int
Number of arc_prune threads.
.Fx
does not need more than one.
Linux may theoretically use one per mount point up to the number of CPUs,
but that was not proven to be useful.
.
.It Sy zfs_max_missing_tvds Ns = Ns Sy 0 Pq int
Number of missing top-level vdevs which will be allowed during
pool import (only in read-only mode).
.
.It Sy zfs_max_nvlist_src_size Ns = Ns Sy 0 Pq u64
Maximum size in bytes allowed to be passed as
.Sy zc_nvlist_src_size
for ioctls on
.Pa /dev/zfs .
This prevents a user from causing the kernel to allocate
an excessive amount of memory.
When the limit is exceeded, the ioctl fails with
.Sy EINVAL
and a description of the error is sent to the
.Pa zfs-dbgmsg
log.
This parameter should not need to be touched under normal circumstances.
If
.Sy 0 ,
equivalent to a quarter of the user-wired memory limit under
.Fx
and to
.Sy 134217728 Ns B Pq 128 MiB
under Linux.
.
.It Sy zfs_multilist_num_sublists Ns = Ns Sy 0 Pq uint
To allow more fine-grained locking, each ARC state contains a series
of lists for both data and metadata objects.
Locking is performed at the level of these "sub-lists".
This parameter controls the number of sub-lists per ARC state,
and also applies to other uses of the multilist data structure.
.Pp
If
.Sy 0 ,
equivalent to the greater of the number of online CPUs and
.Sy 4 .
.
.It Sy zfs_arc_overflow_shift Ns = Ns Sy 8 Pq int
The ARC size is considered to be overflowing if it exceeds the current
ARC target size
.Pq Sy arc_c
by thresholds determined by this parameter.
Exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No / Sy 2
starts the ARC reclamation process.
If that appears insufficient, exceeding by
.Sy ( arc_c No >> Sy zfs_arc_overflow_shift ) No \(mu Sy 1.5
blocks new buffer allocation until the reclaim thread catches up.
Once started, the reclamation process continues until the ARC size returns
below the target size.
.Pp
The default value of
.Sy 8
causes the ARC to start reclamation if it exceeds the target size by
.Em 0.2%
of the target size, and block allocations by
.Em 0.6% .
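.Pp
As a worked example with an assumed target size of 16 GiB:
16 GiB >> 8 is 64 MiB, so reclamation starts once the ARC is
32 MiB over the target, and new allocations block once it is 96 MiB over.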
.
.It Sy zfs_arc_shrink_shift Ns = Ns Sy 0 Pq uint
If nonzero, this will update
.Sy arc_shrink_shift Pq default Sy 7
with the new value.
.
.It Sy zfs_arc_pc_percent Ns = Ns Sy 0 Ns % Po off Pc Pq uint
Percent of pagecache to reclaim ARC to.
.Pp
This tunable allows the ZFS ARC to play more nicely
with the kernel's LRU pagecache.
It can guarantee that the ARC size won't collapse under scanning
pressure on the pagecache, yet still allows the ARC to be reclaimed down to
.Sy zfs_arc_min
if necessary.
This value is specified as percent of pagecache size (as measured by
.Sy NR_FILE_PAGES ) ,
where that percent may exceed
.Sy 100 .
This only operates during memory pressure/reclaim.
.
.It Sy zfs_arc_shrinker_limit Ns = Ns Sy 10000 Pq int
This is a limit on how many pages the ARC shrinker makes available for
eviction in response to one page allocation attempt.
Note that in practice, the kernel's shrinker can ask us to evict
up to about four times this for one allocation attempt.
.Pp
The default limit of
.Sy 10000 Pq in practice, Em 160 MiB No per allocation attempt with 4 KiB pages
limits the amount of time spent attempting to reclaim ARC memory to
less than 100 ms per allocation attempt,
even with a small average compressed block size of ~8 KiB.
.Pp
The parameter can be set to 0 (zero) to disable the limit,
and only applies on Linux.
.
.It Sy zfs_arc_sys_free Ns = Ns Sy 0 Ns B Pq u64
The target number of bytes the ARC should leave as free memory on the system.
If zero, equivalent to the bigger of
.Sy 512 KiB No and Sy all_system_memory/64 .
.
.It Sy zfs_autoimport_disable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Disable pool import at module load by ignoring the cache file
.Pq Sy spa_config_path .
.
.It Sy zfs_checksum_events_per_second Ns = Ns Sy 20 Ns /s Pq uint
Rate limit checksum events to this many per second.
Note that this should not be set below the ZED thresholds
(currently 10 checksums over 10 seconds)
or else the daemon may not trigger any action.
.
.It Sy zfs_commit_timeout_pct Ns = Ns Sy 5 Ns % Pq uint
This controls the amount of time that a ZIL block (lwb) will remain "open"
when it isn't "full", and it has a thread waiting for it to be committed to
stable storage.
The timeout is scaled based on a percentage of the last lwb
latency to avoid significantly impacting the latency of each individual
transaction record (itx).
.
.It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
Vdev indirection layer (used for device removal) sleeps for this many
milliseconds during mapping generation.
Intended for use with the test suite to throttle vdev removal speed.
.
.It Sy zfs_condense_indirect_obsolete_pct Ns = Ns Sy 25 Ns % Pq uint
Minimum percent of obsolete bytes in vdev mapping required to attempt to
condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
Intended for use with the test suite
to facilitate triggering condensing as needed.
.
.It Sy zfs_condense_indirect_vdevs_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Enable condensing indirect vdev mappings.
When set, attempt to condense indirect vdev mappings
if the mapping uses more than
.Sy zfs_condense_min_mapping_bytes
bytes of memory and if the obsolete space map object uses more than
.Sy zfs_condense_max_obsolete_bytes
bytes on-disk.
The condensing process is an attempt to save memory by removing obsolete
mappings.
.
.It Sy zfs_condense_max_obsolete_bytes Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Only attempt to condense indirect vdev mappings if the on-disk size
of the obsolete space map object is greater than this number of bytes
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_condense_min_mapping_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq u64
Minimum size vdev mapping to attempt to condense
.Pq see Sy zfs_condense_indirect_vdevs_enable .
.
.It Sy zfs_dbgmsg_enable Ns = Ns Sy 1 Ns | Ns 0 Pq int
Internally ZFS keeps a small log to facilitate debugging.
The log is enabled by default, and can be disabled by unsetting this option.
The contents of the log can be accessed by reading
.Pa /proc/spl/kstat/zfs/dbgmsg .
Writing
.Sy 0
to the file clears the log.
.Pp
This setting does not influence debug prints due to
.Sy zfs_flags .
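.Pp
For example, the log can be inspected and then cleared like this:
.Bd -literal -compact
# cat /proc/spl/kstat/zfs/dbgmsg
# echo 0 > /proc/spl/kstat/zfs/dbgmsg
.Ed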
.
.It Sy zfs_dbgmsg_maxsize Ns = Ns Sy 4194304 Ns B Po 4 MiB Pc Pq uint
Maximum size of the internal ZFS debug log.
.
.It Sy zfs_dbuf_state_index Ns = Ns Sy 0 Pq int
Historically used for controlling what reporting was available under
.Pa /proc/spl/kstat/zfs .
No effect.
.
.It Sy zfs_deadman_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int
When a pool sync operation takes longer than
.Sy zfs_deadman_synctime_ms ,
or when an individual I/O operation takes longer than
.Sy zfs_deadman_ziotime_ms ,
then the operation is considered to be "hung".
If
.Sy zfs_deadman_enabled
is set, then the deadman behavior is invoked as described by
.Sy zfs_deadman_failmode .
By default, the deadman is enabled and set to
.Sy wait
which results in "hung" I/O operations only being logged.
The deadman is automatically disabled when a pool gets suspended.
.
.It Sy zfs_deadman_failmode Ns = Ns Sy wait Pq charp
Controls the failure behavior when the deadman detects a "hung" I/O operation.
Valid values are:
.Bl -tag -compact -offset 4n -width "continue"
.It Sy wait
Wait for a "hung" operation to complete.
For each "hung" operation a "deadman" event will be posted
describing that operation.
.It Sy continue
Attempt to recover from a "hung" operation by re-dispatching it
to the I/O pipeline if possible.
.It Sy panic
Panic the system.
This can be used to facilitate automatic fail-over
to a properly configured fail-over partner.
.El
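.Pp
For example (a sketch; the Linux module parameter path is assumed),
switching the behavior to re-dispatching hung operations:
.Bd -literal -compact
# echo continue > /sys/module/zfs/parameters/zfs_deadman_failmode
.Ed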
.
.It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
Check time in milliseconds.
This defines the frequency at which we check for hung I/O requests
and potentially invoke the
.Sy zfs_deadman_failmode
behavior.
.
.It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and also
the interval after which a pool sync operation is considered to be "hung".
Once this limit is exceeded the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the pool sync completes.
.
.It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
Interval in milliseconds after which the deadman is triggered and an
individual I/O operation is considered to be "hung".
As long as the operation remains "hung",
the deadman will be invoked every
.Sy zfs_deadman_checktime_ms
milliseconds until the operation completes.
.
.It Sy zfs_dedup_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int
Enable prefetching dedup-ed blocks which are going to be freed.
.
.It Sy zfs_delay_min_dirty_percent Ns = Ns Sy 60 Ns % Pq uint
Start to delay each transaction once there is this amount of dirty data,
expressed as a percentage of
.Sy zfs_dirty_data_max .
This value should be at least
.Sy zfs_vdev_async_write_active_max_dirty_percent .
.No See Sx ZFS TRANSACTION DELAY .
.
.It Sy zfs_delay_scale Ns = Ns Sy 500000 Pq int
This controls how quickly the transaction delay approaches infinity.
Larger values cause longer delays for a given amount of dirty data.
.Pp
For the smoothest delay, this value should be about 1 billion divided
by the maximum number of operations per second.
This will smoothly handle between ten times and a tenth of this number.
.No See Sx ZFS TRANSACTION DELAY .
.Pp
.Sy zfs_delay_scale No \(mu Sy zfs_dirty_data_max Em must No be smaller than Sy 2^64 .
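.Pp
As a worked example with assumed numbers: for a pool whose backing storage
sustains roughly 10000 operations per second, the smoothest delay would be
obtained around
.Sy zfs_delay_scale Ns = Ns Sy 100000 ,
i.e. one billion divided by 10000.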
.
.It Sy zfs_disable_ivset_guid_check Ns = Ns Sy 0 Ns | Ns 1 Pq int
Disables requirement for IVset GUIDs to be present and match when doing a raw
receive of encrypted datasets.
Intended for users whose pools were created with
OpenZFS pre-release versions and now have compatibility issues.
.
.It Sy zfs_key_max_salt_uses Ns = Ns Sy 400000000 Po 4*10^8 Pc Pq ulong
Maximum number of uses of a single salt value before generating a new one for
encrypted datasets.
The default value is also the maximum.
.
.It Sy zfs_object_mutex_size Ns = Ns Sy 64 Pq uint
Size of the znode hashtable used for holds.
.Pp
Due to the need to hold locks on objects that may not exist yet, kernel mutexes
are not created per-object and instead a hashtable is used where collisions
will result in objects waiting when there is not actually contention on the
same object.
.
.It Sy zfs_slow_io_events_per_second Ns = Ns Sy 20 Ns /s Pq int
Rate limit delay and deadman zevents (which report slow I/O operations) to this
many per second.
.
.It Sy zfs_unflushed_max_mem_amt Ns = Ns Sy 1073741824 Ns B Po 1 GiB Pc Pq u64
Upper-bound limit for unflushed metadata changes to be held by the
log spacemap in memory, in bytes.
.
.It Sy zfs_unflushed_max_mem_ppm Ns = Ns Sy 1000 Ns ppm Po 0.1% Pc Pq u64
Part of overall system memory that ZFS allows to be used
for unflushed metadata changes by the log spacemap, in millionths.
.
.It Sy zfs_unflushed_log_block_max Ns = Ns Sy 131072 Po 128k Pc Pq u64
Describes the maximum number of log spacemap blocks allowed for each pool.
The default value means that the space in all the log spacemaps
can add up to no more than
.Sy 131072
blocks (which means
.Em 16 GiB
of logical space before compression and ditto blocks,
assuming that blocksize is
.Em 128 KiB ) .
.Pp
This tunable is important because it involves a trade-off between import
time after an unclean export and the frequency of flushing metaslabs.
The higher this number is, the more log blocks we allow when the pool is
active which means that we flush metaslabs less often and thus decrease
the number of I/O operations for spacemap updates per TXG.
At the same time though, that means that in the event of an unclean export,
there will be more log spacemap blocks for us to read, inducing overhead
in the import time of the pool.
The lower the number, the more flushing we do, destroying log
blocks quicker as they become obsolete faster, which leaves fewer blocks
to be read during import time after a crash.
.Pp
Each log spacemap block existing during pool import leads to approximately
one extra logical I/O issued.
This is the reason why this tunable is exposed in terms of blocks rather
than space used.
.
.It Sy zfs_unflushed_log_block_min Ns = Ns Sy 1000 Pq u64
If the number of metaslabs is small and our incoming rate is high,
we could get into a situation that we are flushing all our metaslabs every TXG.
Thus we always allow at least this many log blocks.
.
.It Sy zfs_unflushed_log_block_pct Ns = Ns Sy 400 Ns % Pq u64
Tunable used to determine the number of blocks that can be used for
the spacemap log, expressed as a percentage of the total number of
unflushed metaslabs in the pool.
.
.It Sy zfs_unflushed_log_txg_max Ns = Ns Sy 1000 Pq u64
Tunable limiting maximum time in TXGs any metaslab may remain unflushed.
It effectively limits maximum number of unflushed per-TXG spacemap logs
that need to be read after unclean pool export.
.
.It Sy zfs_unlink_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint
When enabled, files will not be asynchronously removed from the list of pending
unlinks and the space they consume will be leaked.
Once this option has been disabled and the dataset is remounted,
the pending unlinks will be processed and the freed space returned to the pool.
This option is used by the test suite.
.
.It Sy zfs_delete_blocks Ns = Ns Sy 20480 Pq ulong
This is used to define a large file for the purposes of deletion.
Files containing more than
.Sy zfs_delete_blocks
will be deleted asynchronously, while smaller files are deleted synchronously.
Decreasing this value will reduce the time spent in an
.Xr unlink 2
system call, at the expense of a longer delay before the freed space is
available.
This only applies on Linux.
.
.It Sy zfs_dirty_data_max Ns = Pq int
Determines the dirty space limit in bytes.
Once this limit is exceeded, new writes are halted until space frees up.
This parameter takes precedence over
.Sy zfs_dirty_data_max_percent .
.No See Sx ZFS TRANSACTION DELAY .
.Pp
Defaults to
.Sy physical_ram/10 ,
capped at
.Sy zfs_dirty_data_max_max .
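.Pp
As a worked example with assumed numbers: on a 64-bit machine with 64 GiB
of RAM, physical_ram/10 is 6.4 GiB, but the default
.Sy zfs_dirty_data_max_max
of 4 GiB caps the effective limit at 4 GiB.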
1030 | . | |
1031 | .It Sy zfs_dirty_data_max_max Ns = Pq int | |
1032 | Maximum allowable value of | |
1033 | .Sy zfs_dirty_data_max , | |
1034 | expressed in bytes. | |
1035 | This limit is only enforced at module load time, and will be ignored if | |
1036 | .Sy zfs_dirty_data_max | |
1037 | is later changed. | |
1038 | This parameter takes precedence over | |
1039 | .Sy zfs_dirty_data_max_max_percent . | |
1040 | .No See Sx ZFS TRANSACTION DELAY . | |
1041 | .Pp | |
1042 | Defaults to | |
a379083d GM |
1043 | .Sy min(physical_ram/4, 4GiB) , |
1044 | or | |
1045 | .Sy min(physical_ram/4, 1GiB) | |
1046 | for 32-bit systems. | |
2d815d95 | 1047 | . |
fdc2d303 | 1048 | .It Sy zfs_dirty_data_max_max_percent Ns = Ns Sy 25 Ns % Pq uint |
2d815d95 AZ |
1049 | Maximum allowable value of |
1050 | .Sy zfs_dirty_data_max , | |
1051 | expressed as a percentage of physical RAM. | |
e8b96c60 | 1052 | This limit is only enforced at module load time, and will be ignored if |
2d815d95 AZ |
1053 | .Sy zfs_dirty_data_max |
1054 | is later changed. | |
1055 | The parameter | |
1056 | .Sy zfs_dirty_data_max_max | |
1057 | takes precedence over this one. | |
1058 | .No See Sx ZFS TRANSACTION DELAY . | |
1059 | . | |
fdc2d303 | 1060 | .It Sy zfs_dirty_data_max_percent Ns = Ns Sy 10 Ns % Pq uint |
2d815d95 AZ |
1061 | Determines the dirty space limit, expressed as a percentage of all memory. |
1062 | Once this limit is exceeded, new writes are halted until space frees up. | |
1063 | The parameter | |
1064 | .Sy zfs_dirty_data_max | |
1065 | takes precedence over this one. | |
1066 | .No See Sx ZFS TRANSACTION DELAY . | |
1067 | .Pp | |
1068 | Subject to | |
1069 | .Sy zfs_dirty_data_max_max . | |
1070 | . | |
fdc2d303 | 1071 | .It Sy zfs_dirty_data_sync_percent Ns = Ns Sy 20 Ns % Pq uint |
dfbe2675 | 1072 | Start syncing out a transaction group if there's at least this much dirty data |
2d815d95 AZ |
1073 | .Pq as a percentage of Sy zfs_dirty_data_max . |
1074 | This should be less than | |
1075 | .Sy zfs_vdev_async_write_active_min_dirty_percent . | |
1076 | . | |
a7bd20e3 KJ |
1077 | .It Sy zfs_wrlog_data_max Ns = Pq int |
1078 | The upper limit of write-transaction zil log data size in bytes. | |
84d0a03f AM |
1079 | Write operations are throttled when approaching the limit until log data is |
1080 | cleared out after transaction group sync. | |
1081 | Because of some overhead, it should be set at least 2 times the size of | |
a7bd20e3 | 1082 | .Sy zfs_dirty_data_max |
b46be903 | 1083 | .No to prevent harming normal write throughput . |
a7bd20e3 KJ |
1084 | It also should be smaller than the size of the slog device if slog is present. |
1085 | .Pp | |
1086 | Defaults to | |
1087 | .Sy zfs_dirty_data_max*2 . | |
1088 | . | |
2d815d95 | 1089 | .It Sy zfs_fallocate_reserve_percent Ns = Ns Sy 110 Ns % Pq uint |
f734301d AD |
1090 | Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be |
1091 | preallocated for a file in order to guarantee that later writes will not | |
2d815d95 AZ |
1092 | run out of space. |
1093 | Instead, | |
1094 | .Xr fallocate 2 | |
1095 | space preallocation only checks that sufficient space is currently available | |
1096 | in the pool or the user's project quota allocation, | |
1097 | and then creates a sparse file of the requested size. | |
1098 | The requested space is multiplied by | |
1099 | .Sy zfs_fallocate_reserve_percent | |
f734301d | 1100 | to allow additional space for indirect blocks and other internal metadata. |
2d815d95 AZ |
1101 | Setting this to |
1102 | .Sy 0 | |
1103 | disables support for | |
1104 | .Xr fallocate 2 | |
1105 | and causes it to return | |
1106 | .Sy EOPNOTSUPP . | |
1107 | . | |
1108 | .It Sy zfs_fletcher_4_impl Ns = Ns Sy fastest Pq string | |
1eeb4562 | 1109 | Select a fletcher 4 implementation. |
2d815d95 AZ |
1110 | .Pp |
1111 | Supported selectors are: | |
1112 | .Sy fastest , scalar , sse2 , ssse3 , avx2 , avx512f , avx512bw , | |
1113 | .No and Sy aarch64_neon . | |
1114 | All except | |
1115 | .Sy fastest No and Sy scalar | |
1116 | require instruction set extensions to be available, | |
1117 | and will only appear if ZFS detects that they are present at runtime. | |
1118 | If multiple implementations of fletcher 4 are available, the | |
1119 | .Sy fastest | |
1120 | will be chosen using a micro benchmark. | |
1121 | Selecting | |
1122 | .Sy scalar | |
1123 | results in the original CPU-based calculation being used. | |
1124 | Selecting any option other than | |
1125 | .Sy fastest No or Sy scalar | |
1126 | results in vector instructions | |
1127 | from the respective CPU instruction set being used. | |
1128 | . | |
eeca9d27 TR |
1129 | .It Sy zfs_blake3_impl Ns = Ns Sy fastest Pq string |
1130 | Select a BLAKE3 implementation. | |
1131 | .Pp | |
1132 | Supported selectors are: | |
1133 | .Sy cycle , fastest , generic , sse2 , sse41 , avx2 , avx512 . | |
1134 | All except | |
1135 | .Sy cycle , fastest No and Sy generic | |
1136 | require instruction set extensions to be available, | |
1137 | and will only appear if ZFS detects that they are present at runtime. | |
1138 | If multiple implementations of BLAKE3 are available, the | |
1139 | .Sy fastest No will be chosen using a micro benchmark. | |
1140 | You can see the benchmark results by reading this kstat file: | |
1141 | .Pa /proc/spl/kstat/zfs/chksum_bench . | |
1142 | . | |
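.Pp
For example, the micro benchmark results can be examined and an implementation
pinned explicitly
(the parameter path assumes the usual Linux module parameter location):
.Bd -literal -compact
# Show checksum micro benchmark results for the available implementations.
cat /proc/spl/kstat/zfs/chksum_bench
# Pin BLAKE3 to the generic implementation instead of "fastest".
echo generic > /sys/module/zfs/parameters/zfs_blake3_impl
.Ed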
2d815d95 | 1143 | .It Sy zfs_free_bpobj_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int |
ba5ad9a4 | 1144 | Enable/disable the processing of the free_bpobj object. |
2d815d95 | 1145 | . |
ab8d9c17 | 1146 | .It Sy zfs_async_block_max_blocks Ns = Ns Sy UINT64_MAX Po unlimited Pc Pq u64 |
2d815d95 AZ |
1147 | Maximum number of blocks freed in a single TXG. |
1148 | . | |
ab8d9c17 | 1149 | .It Sy zfs_max_async_dedup_frees Ns = Ns Sy 100000 Po 10^5 Pc Pq u64 |
2d815d95 AZ |
1150 | Maximum number of dedup blocks freed in a single TXG. |
1151 | . | |
fdc2d303 | 1152 | .It Sy zfs_vdev_async_read_max_active Ns = Ns Sy 3 Pq uint |
2d815d95 AZ |
1153 | Maximum asynchronous read I/O operations active to each device. |
1154 | .No See Sx ZFS I/O SCHEDULER . | |
1155 | . | |
fdc2d303 | 1156 | .It Sy zfs_vdev_async_read_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1157 | Minimum asynchronous read I/O operation active to each device. |
1158 | .No See Sx ZFS I/O SCHEDULER . | |
1159 | . | |
fdc2d303 | 1160 | .It Sy zfs_vdev_async_write_active_max_dirty_percent Ns = Ns Sy 60 Ns % Pq uint |
2d815d95 AZ |
1161 | When the pool has more than this much dirty data, use |
1162 | .Sy zfs_vdev_async_write_max_active | |
1163 | to limit active async writes. | |
1164 | If the dirty data is between the minimum and maximum, | |
1165 | the active I/O limit is linearly interpolated. | |
1166 | .No See Sx ZFS I/O SCHEDULER . | |
1167 | . | |
fdc2d303 | 1168 | .It Sy zfs_vdev_async_write_active_min_dirty_percent Ns = Ns Sy 30 Ns % Pq uint |
2d815d95 AZ |
1169 | When the pool has less than this much dirty data, use |
1170 | .Sy zfs_vdev_async_write_min_active | |
1171 | to limit active async writes. | |
1172 | If the dirty data is between the minimum and maximum, | |
1173 | the active I/O limit is linearly | |
1174 | interpolated. | |
1175 | .No See Sx ZFS I/O SCHEDULER . | |
1176 | . | |
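.Pp
As a worked example with the default values
(a range of 30% to 60% dirty data, and 2 to 10 active asynchronous writes),
a pool that is 45% dirty sits halfway between the two thresholds,
so roughly 2 + (10 \(mi 2)/2 = 6 asynchronous write operations
may be active per device.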
077fd55e | 1177 | .It Sy zfs_vdev_async_write_max_active Ns = Ns Sy 10 Pq uint |
2d815d95 AZ |
1178 | Maximum asynchronous write I/O operations active to each device. |
1179 | .No See Sx ZFS I/O SCHEDULER . | |
1180 | . | |
fdc2d303 | 1181 | .It Sy zfs_vdev_async_write_min_active Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1182 | Minimum asynchronous write I/O operations active to each device. |
1183 | .No See Sx ZFS I/O SCHEDULER . | |
1184 | .Pp | |
06226b59 | 1185 | Lower values are associated with better latency on rotational media but poorer |
2d815d95 AZ |
1186 | resilver performance. |
1187 | The default value of | |
1188 | .Sy 2 | |
1189 | was chosen as a compromise. | |
1190 | A value of | |
1191 | .Sy 3 | |
1192 | has been shown to improve resilver performance further at a cost of | |
06226b59 | 1193 | further increasing latency. |
2d815d95 | 1194 | . |
fdc2d303 | 1195 | .It Sy zfs_vdev_initializing_max_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1196 | Maximum initializing I/O operations active to each device. |
1197 | .No See Sx ZFS I/O SCHEDULER . | |
1198 | . | |
fdc2d303 | 1199 | .It Sy zfs_vdev_initializing_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1200 | Minimum initializing I/O operations active to each device. |
1201 | .No See Sx ZFS I/O SCHEDULER . | |
1202 | . | |
fdc2d303 | 1203 | .It Sy zfs_vdev_max_active Ns = Ns Sy 1000 Pq uint |
2d815d95 AZ |
1204 | The maximum number of I/O operations active to each device. |
1205 | Ideally, this will be at least the sum of each queue's | |
1206 | .Sy max_active . | |
1207 | .No See Sx ZFS I/O SCHEDULER . | |
1208 | . | |
f66ffe68 SD |
1209 | .It Sy zfs_vdev_open_timeout_ms Ns = Ns Sy 1000 Pq uint |
1210 | Timeout value to wait before determining a device is missing | |
1211 | during import. | |
1212 | This is helpful for transient missing paths due | |
1213 | to links being briefly removed and recreated in response to | |
1214 | udev events. | |
1215 | . | |
fdc2d303 | 1216 | .It Sy zfs_vdev_rebuild_max_active Ns = Ns Sy 3 Pq uint |
2d815d95 AZ |
1217 | Maximum sequential resilver I/O operations active to each device. |
1218 | .No See Sx ZFS I/O SCHEDULER . | |
1219 | . | |
fdc2d303 | 1220 | .It Sy zfs_vdev_rebuild_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1221 | Minimum sequential resilver I/O operations active to each device. |
1222 | .No See Sx ZFS I/O SCHEDULER . | |
1223 | . | |
fdc2d303 | 1224 | .It Sy zfs_vdev_removal_max_active Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1225 | Maximum removal I/O operations active to each device. |
1226 | .No See Sx ZFS I/O SCHEDULER . | |
1227 | . | |
fdc2d303 | 1228 | .It Sy zfs_vdev_removal_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1229 | Minimum removal I/O operations active to each device. |
1230 | .No See Sx ZFS I/O SCHEDULER . | |
1231 | . | |
fdc2d303 | 1232 | .It Sy zfs_vdev_scrub_max_active Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1233 | Maximum scrub I/O operations active to each device. |
1234 | .No See Sx ZFS I/O SCHEDULER . | |
1235 | . | |
fdc2d303 | 1236 | .It Sy zfs_vdev_scrub_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1237 | Minimum scrub I/O operations active to each device. |
1238 | .No See Sx ZFS I/O SCHEDULER . | |
1239 | . | |
fdc2d303 | 1240 | .It Sy zfs_vdev_sync_read_max_active Ns = Ns Sy 10 Pq uint |
2d815d95 AZ |
1241 | Maximum synchronous read I/O operations active to each device. |
1242 | .No See Sx ZFS I/O SCHEDULER . | |
1243 | . | |
fdc2d303 | 1244 | .It Sy zfs_vdev_sync_read_min_active Ns = Ns Sy 10 Pq uint |
2d815d95 AZ |
1245 | Minimum synchronous read I/O operations active to each device. |
1246 | .No See Sx ZFS I/O SCHEDULER . | |
1247 | . | |
fdc2d303 | 1248 | .It Sy zfs_vdev_sync_write_max_active Ns = Ns Sy 10 Pq uint |
2d815d95 AZ |
1249 | Maximum synchronous write I/O operations active to each device. |
1250 | .No See Sx ZFS I/O SCHEDULER . | |
1251 | . | |
fdc2d303 | 1252 | .It Sy zfs_vdev_sync_write_min_active Ns = Ns Sy 10 Pq uint |
2d815d95 AZ |
1253 | Minimum synchronous write I/O operations active to each device. |
1254 | .No See Sx ZFS I/O SCHEDULER . | |
1255 | . | |
fdc2d303 | 1256 | .It Sy zfs_vdev_trim_max_active Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1257 | Maximum trim/discard I/O operations active to each device. |
1258 | .No See Sx ZFS I/O SCHEDULER . | |
1259 | . | |
fdc2d303 | 1260 | .It Sy zfs_vdev_trim_min_active Ns = Ns Sy 1 Pq uint |
2d815d95 AZ |
1261 | Minimum trim/discard I/O operations active to each device. |
1262 | .No See Sx ZFS I/O SCHEDULER . | |
1263 | . | |
fdc2d303 | 1264 | .It Sy zfs_vdev_nia_delay Ns = Ns Sy 5 Pq uint |
6f5aac3c | 1265 | For non-interactive I/O (scrub, resilver, removal, initialize and rebuild), |
2d815d95 AZ |
1266 | the number of concurrently-active I/O operations is limited to |
1267 | .Sy zfs_*_min_active , | |
1268 | unless the vdev is "idle". | |
0175272f | 1269 | When there are no interactive I/O operations active (synchronous or otherwise), |
2d815d95 AZ |
1270 | and |
1271 | .Sy zfs_vdev_nia_delay | |
1272 | operations have completed since the last interactive operation, | |
1273 | then the vdev is considered to be "idle", | |
1274 | and the number of concurrently-active non-interactive operations is increased to | |
1275 | .Sy zfs_*_max_active . | |
1276 | .No See Sx ZFS I/O SCHEDULER . | |
1277 | . | |
fdc2d303 | 1278 | .It Sy zfs_vdev_nia_credit Ns = Ns Sy 5 Pq uint |
2d815d95 AZ |
1279 | Some HDDs tend to prioritize sequential I/O so strongly that concurrent | |
1280 | random I/O latency reaches several seconds. | |
1281 | On some HDDs this happens even if sequential I/O operations | |
1282 | are submitted one at a time, and so setting | |
1283 | .Sy zfs_*_max_active Ns = Sy 1 | |
1284 | does not help. | |
1285 | To prevent non-interactive I/O, like scrub, | |
1286 | from monopolizing the device, no more than | |
1287 | .Sy zfs_vdev_nia_credit No operations can be sent | |
1288 | while there are outstanding incomplete interactive operations. | |
1289 | This enforced wait ensures the HDD services the interactive I/O | |
6f5aac3c | 1290 | within a reasonable amount of time. |
2d815d95 AZ |
1291 | .No See Sx ZFS I/O SCHEDULER . |
1292 | . | |
fdc2d303 | 1293 | .It Sy zfs_vdev_queue_depth_pct Ns = Ns Sy 1000 Ns % Pq uint |
e815485f | 1294 | Maximum number of queued allocations per top-level vdev expressed as |
2d815d95 AZ |
1295 | a percentage of |
1296 | .Sy zfs_vdev_async_write_max_active , | |
1297 | which allows the system to detect devices that are more capable | |
1298 | of handling allocations and to allocate more blocks to those devices. | |
1299 | This allows for dynamic allocation distribution when devices are imbalanced, | |
1300 | as fuller devices will tend to be slower than empty devices. | |
1301 | .Pp | |
1302 | Also see | |
1303 | .Sy zio_dva_throttle_enabled . | |
1304 | . | |
ece7ab7e RN |
1305 | .It Sy zfs_vdev_def_queue_depth Ns = Ns Sy 32 Pq uint |
1306 | Default queue depth for each vdev I/O allocator. | |
1307 | Higher values allow for better coalescing of sequential writes before sending | |
1308 | them to the disk, but can increase transaction commit times. | |
1309 | . | |
16f0fdad MZ |
1310 | .It Sy zfs_vdev_failfast_mask Ns = Ns Sy 1 Pq uint |
1311 | Defines whether the driver should retire (fail fast, without retrying) on a given error type. | |
1312 | The following options may be bitwise-ored together: | |
1313 | .TS | |
1314 | box; | |
1315 | lbz r l l . | |
1316 | Value Name Description | |
1317 | _ | |
1318 | 1 Device No driver retries on device errors. |
1319 | 2 Transport No driver retries on transport errors. | |
1320 | 4 Driver No driver retries on driver errors. | |
1321 | .TE | |
1322 | . | |
2d815d95 AZ |
1323 | .It Sy zfs_expire_snapshot Ns = Ns Sy 300 Ns s Pq int |
1324 | Time before expiring | |
1325 | .Pa .zfs/snapshot . | |
1326 | . | |
1327 | .It Sy zfs_admin_snapshot Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1328 | Allow the creation, removal, or renaming of entries in the | |
1329 | .Sy .zfs/snapshot | |
0500e835 | 1330 | directory to cause the creation, destruction, or renaming of snapshots. |
2d815d95 AZ |
1331 | When enabled, this functionality works both locally and over NFS exports |
1332 | which have the | |
1333 | .Em no_root_squash | |
1334 | option set. | |
1335 | . | |
1336 | .It Sy zfs_flags Ns = Ns Sy 0 Pq int | |
1337 | Set additional debugging flags. | |
1338 | The following flags may be bitwise-ored together: | |
33b6dbbc NB |
1339 | .TS |
1340 | box; | |
2d815d95 | 1341 | lbz r l l . |
16f0fdad | 1342 | Value Name Description |
33b6dbbc | 1343 | _ |
2d815d95 AZ |
1344 | 1 ZFS_DEBUG_DPRINTF Enable dprintf entries in the debug log. |
1345 | * 2 ZFS_DEBUG_DBUF_VERIFY Enable extra dbuf verifications. | |
1346 | * 4 ZFS_DEBUG_DNODE_VERIFY Enable extra dnode verifications. | |
1347 | 8 ZFS_DEBUG_SNAPNAMES Enable snapshot name verification. | |
bacf366f | 1348 | * 16 ZFS_DEBUG_MODIFY Check for illegally modified ARC buffers. |
2d815d95 AZ |
1349 | 64 ZFS_DEBUG_ZIO_FREE Enable verification of block frees. |
1350 | 128 ZFS_DEBUG_HISTOGRAM_VERIFY Enable extra spacemap histogram verifications. | |
1351 | 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP. | |
1352 | 512 ZFS_DEBUG_SET_ERROR Enable \fBSET_ERROR\fP and dprintf entries in the debug log. | |
1353 | 1024 ZFS_DEBUG_INDIRECT_REMAP Verify split blocks created by device removal. | |
1354 | 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree. | |
1355 | 4096 ZFS_DEBUG_LOG_SPACEMAP Verify that the log summary is consistent with the spacemap log | |
1356 | and enable \fBzfs_dbgmsgs\fP for metaslab loading and flushing. | |
33b6dbbc | 1357 | .TE |
b46be903 | 1358 | .Sy \& * No Requires debug build . |
2d815d95 | 1359 | . |
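.Pp
For example, to enable both
.Sy ZFS_DEBUG_DPRINTF Pq 1
and
.Sy ZFS_DEBUG_SET_ERROR Pq 512 ,
set the parameter to their sum, 513; on Linux this would typically be:
.Bd -literal -compact
# Enable dprintf and SET_ERROR debug log entries (1 + 512 = 513).
echo 513 > /sys/module/zfs/parameters/zfs_flags
.Ed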
b24d1c77 RY |
1360 | .It Sy zfs_btree_verify_intensity Ns = Ns Sy 0 Pq uint |
1361 | Enables btree verification. | |
1362 | The following settings are cumulative: | |
1363 | .TS | |
1364 | box; | |
1365 | lbz r l l . | |
1366 | Value Description | |
1367 | ||
1368 | 1 Verify height. | |
1369 | 2 Verify pointers from children to parent. | |
1370 | 3 Verify element counts. | |
1371 | 4 Verify element order. (expensive) | |
1372 | * 5 Verify unused memory is poisoned. (expensive) | |
1373 | .TE | |
b46be903 | 1374 | .Sy \& * No Requires debug build . |
b24d1c77 | 1375 | . |
2d815d95 AZ |
1376 | .It Sy zfs_free_leak_on_eio Ns = Ns Sy 0 Ns | Ns 1 Pq int |
1377 | If destroy encounters an | |
1378 | .Sy EIO | |
1379 | while reading metadata (e.g. indirect blocks), | |
1380 | space referenced by the missing metadata cannot be freed. | |
1381 | Normally this causes the background destroy to become "stalled", | |
1382 | as it is unable to make forward progress. | |
1383 | While in this stalled state, all remaining space to free | |
1384 | from the error-encountering filesystem is "temporarily leaked". | |
1385 | Set this flag to cause it to ignore the | |
1386 | .Sy EIO , | |
fbeddd60 MA |
1387 | permanently leak the space from indirect blocks that cannot be read, |
1388 | and continue to free everything else that it can. | |
2d815d95 AZ |
1389 | .Pp |
1390 | The default "stalling" behavior is useful if the storage partially | |
1391 | fails (i.e. some but not all I/O operations fail), and then later recovers. | |
1392 | In this case, we will be able to continue pool operations while it is | |
fbeddd60 | 1393 | partially failed, and when it recovers, we can continue to free the |
2d815d95 AZ |
1394 | space, with no leaks. |
1395 | Note, however, that this case is actually fairly rare. | |
1396 | .Pp | |
1397 | Typically pools either | |
1398 | .Bl -enum -compact -offset 4n -width "1." | |
1399 | .It | |
1400 | fail completely (but perhaps temporarily, | |
1401 | e.g. due to a top-level vdev going offline), or | |
1402 | .It | |
1403 | have localized, permanent errors (e.g. disk returns the wrong data | |
1404 | due to bit flip or firmware bug). | |
1405 | .El | |
1406 | In the former case, this setting does not matter because the | |
fbeddd60 | 1407 | pool will be suspended and the sync thread will not be able to make |
2d815d95 AZ |
1408 | forward progress regardless. |
1409 | In the latter, because the error is permanent, the best we can do | |
1410 | is leak the minimum amount of space, | |
1411 | which is what setting this flag will do. | |
1412 | It is therefore reasonable for this flag to normally be set, | |
1413 | but we chose the more conservative approach of not setting it, | |
1414 | so that there is no possibility of | |
fbeddd60 | 1415 | leaking space in the "partial temporary" failure case. |
2d815d95 | 1416 | . |
fdc2d303 | 1417 | .It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint |
2d815d95 AZ |
1418 | During a |
1419 | .Nm zfs Cm destroy | |
1420 | operation using the | |
1421 | .Sy async_destroy | |
1422 | feature, | |
1423 | a minimum of this much time will be spent working on freeing blocks per TXG. | |
1424 | . | |
fdc2d303 | 1425 | .It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint |
2d815d95 AZ |
1426 | Similar to |
1427 | .Sy zfs_free_min_time_ms , | |
1428 | but for cleanup of old indirection records for removed vdevs. | |
1429 | . | |
ab8d9c17 | 1430 | .It Sy zfs_immediate_write_sz Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq s64 |
2d815d95 AZ |
1431 | Largest data block to write to the ZIL. |
1432 | Larger blocks will be treated as if the dataset being written to had the | |
1433 | .Sy logbias Ns = Ns Sy throughput | |
1434 | property set. | |
1435 | . | |
ab8d9c17 | 1436 | .It Sy zfs_initialize_value Ns = Ns Sy 16045690984833335022 Po 0xDEADBEEFDEADBEEE Pc Pq u64 |
2d815d95 AZ |
1437 | Pattern written to vdev free space by |
1438 | .Xr zpool-initialize 8 . | |
1439 | . | |
ab8d9c17 | 1440 | .It Sy zfs_initialize_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 |
2d815d95 AZ |
1441 | Size of writes used by |
1442 | .Xr zpool-initialize 8 . | |
1443 | This option is used by the test suite. | |
1444 | . | |
ab8d9c17 | 1445 | .It Sy zfs_livelist_max_entries Ns = Ns Sy 500000 Po 5*10^5 Pc Pq u64 |
37f03da8 SH |
1446 | The threshold size (in block pointers) at which we create a new sub-livelist. |
1447 | Larger sublists are more costly from a memory perspective but the fewer | |
1448 | sublists there are, the lower the cost of insertion. | |
2d815d95 AZ |
1449 | . |
1450 | .It Sy zfs_livelist_min_percent_shared Ns = Ns Sy 75 Ns % Pq int | |
37f03da8 | 1451 | If the amount of shared space between a snapshot and its clone drops below |
2d815d95 AZ |
1452 | this threshold, the clone turns off the livelist and reverts to the old |
1453 | deletion method. | |
1454 | This is in place because livelists no longer give us a benefit | |
1455 | once a clone has been overwritten enough. | |
1456 | . | |
1457 | .It Sy zfs_livelist_condense_new_alloc Ns = Ns Sy 0 Pq int | |
37f03da8 SH |
1458 | Incremented each time an extra ALLOC blkptr is added to a livelist entry while |
1459 | it is being condensed. | |
1460 | This option is used by the test suite to track race conditions. | |
2d815d95 AZ |
1461 | . |
1462 | .It Sy zfs_livelist_condense_sync_cancel Ns = Ns Sy 0 Pq int | |
37f03da8 | 1463 | Incremented each time livelist condensing is canceled while in |
2d815d95 | 1464 | .Fn spa_livelist_condense_sync . |
37f03da8 | 1465 | This option is used by the test suite to track race conditions. |
2d815d95 AZ |
1466 | . |
1467 | .It Sy zfs_livelist_condense_sync_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
37f03da8 | 1468 | When set, the livelist condense process pauses indefinitely before |
12bd322d | 1469 | executing the synctask \(em |
2d815d95 | 1470 | .Fn spa_livelist_condense_sync . |
37f03da8 | 1471 | This option is used by the test suite to trigger race conditions. |
2d815d95 AZ |
1472 | . |
1473 | .It Sy zfs_livelist_condense_zthr_cancel Ns = Ns Sy 0 Pq int | |
37f03da8 | 1474 | Incremented each time livelist condensing is canceled while in |
2d815d95 | 1475 | .Fn spa_livelist_condense_cb . |
37f03da8 | 1476 | This option is used by the test suite to track race conditions. |
2d815d95 AZ |
1477 | . |
1478 | .It Sy zfs_livelist_condense_zthr_pause Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
37f03da8 | 1479 | When set, the livelist condense process pauses indefinitely before |
2d815d95 AZ |
1480 | executing the open context condensing work in |
1481 | .Fn spa_livelist_condense_cb . | |
37f03da8 | 1482 | This option is used by the test suite to trigger race conditions. |
2d815d95 | 1483 | . |
ab8d9c17 | 1484 | .It Sy zfs_lua_max_instrlimit Ns = Ns Sy 100000000 Po 10^8 Pc Pq u64 |
917f475f JG |
1485 | The maximum execution time limit that can be set for a ZFS channel program, |
1486 | specified as a number of Lua instructions. | |
2d815d95 | 1487 | . |
ab8d9c17 | 1488 | .It Sy zfs_lua_max_memlimit Ns = Ns Sy 104857600 Po 100 MiB Pc Pq u64 |
917f475f JG |
1489 | The maximum memory limit that can be set for a ZFS channel program, specified |
1490 | in bytes. | |
2d815d95 AZ |
1491 | . |
1492 | .It Sy zfs_max_dataset_nesting Ns = Ns Sy 50 Pq int | |
1493 | The maximum depth of nested datasets. | |
1494 | This value can be tuned temporarily to | |
a7ed98d8 | 1495 | fix existing datasets that exceed the predefined limit. |
2d815d95 | 1496 | . |
ab8d9c17 | 1497 | .It Sy zfs_max_log_walking Ns = Ns Sy 5 Pq u64 |
93e28d66 SD |
1498 | The number of past TXGs that the flushing algorithm of the log spacemap |
1499 | feature uses to estimate incoming log blocks. | |
2d815d95 | 1500 | . |
ab8d9c17 | 1501 | .It Sy zfs_max_logsm_summary_length Ns = Ns Sy 10 Pq u64 |
93e28d66 | 1502 | Maximum number of rows allowed in the summary of the spacemap log. |
2d815d95 | 1503 | . |
fdc2d303 | 1504 | .It Sy zfs_max_recordsize Ns = Ns Sy 16777216 Po 16 MiB Pc Pq uint |
2d815d95 | 1505 | We currently support block sizes from |
a894ae75 | 1506 | .Em 512 Po 512 B Pc No to Em 16777216 Po 16 MiB Pc . |
2d815d95 AZ |
1507 | The benefits of larger blocks, and thus larger I/O, |
1508 | need to be weighed against the cost of COWing a giant block to modify one byte. | |
1509 | Additionally, very large blocks can have an impact on I/O latency, | |
1510 | and also potentially on the memory allocator. | |
f2330bd1 RE |
1511 | Therefore, we formerly forbade creating blocks larger than 1 MiB. |
1512 | Larger blocks could be created by changing this tunable, | |
2d815d95 | 1513 | and pools with larger blocks can always be imported and used, |
f1512ee6 | 1514 | regardless of this setting. |
2d815d95 AZ |
1515 | . |
1516 | .It Sy zfs_allow_redacted_dataset_mount Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1517 | Allow datasets received with redacted send/receive to be mounted. | |
1518 | Normally disabled because these datasets may be missing key data. | |
1519 | . | |
ab8d9c17 | 1520 | .It Sy zfs_min_metaslabs_to_flush Ns = Ns Sy 1 Pq u64 |
2d815d95 AZ |
1521 | Minimum number of metaslabs to flush per dirty TXG. |
1522 | . | |
fdc2d303 | 1523 | .It Sy zfs_metaslab_fragmentation_threshold Ns = Ns Sy 70 Ns % Pq uint |
f3a7f661 | 1524 | Allow metaslabs to keep their active state as long as their fragmentation |
2d815d95 AZ |
1525 | percentage is no more than this value. |
1526 | An active metaslab that exceeds this threshold | |
1527 | will no longer keep its active status allowing better metaslabs to be selected. | |
1528 | . | |
fdc2d303 | 1529 | .It Sy zfs_mg_fragmentation_threshold Ns = Ns Sy 95 Ns % Pq uint |
f3a7f661 | 1530 | Metaslab groups are considered eligible for allocations if their |
83426735 | 1531 | fragmentation metric (measured as a percentage) is less than or equal to |
2d815d95 AZ |
1532 | this value. |
1533 | If a metaslab group exceeds this threshold then it will be | |
f3a7f661 GW |
1534 | skipped unless all metaslab groups within the metaslab class have also |
1535 | crossed this threshold. | |
2d815d95 | 1536 | . |
fdc2d303 | 1537 | .It Sy zfs_mg_noalloc_threshold Ns = Ns Sy 0 Ns % Pq uint |
2d815d95 AZ |
1538 | Defines a threshold at which metaslab groups should be eligible for allocations. |
1539 | The value is expressed as a percentage of free space | |
f4a4046b TC |
1540 | beyond which a metaslab group is always eligible for allocations. |
1541 | If a metaslab group's free space is less than or equal to the | |
6b4e21c6 | 1542 | threshold, the allocator will avoid allocating to that group |
2d815d95 AZ |
1543 | unless all groups in the pool have reached the threshold. |
1544 | Once all groups have reached the threshold, all groups are allowed to accept | |
1545 | allocations. | |
1546 | The default value of | |
1547 | .Sy 0 | |
b46be903 DS |
1548 | disables the feature and causes all metaslab groups to be eligible for |
1549 | allocations. | |
2d815d95 | 1550 | .Pp |
b58237e7 | 1551 | This parameter allows one to deal with pools having heavily imbalanced |
f4a4046b TC |
1552 | vdevs such as would be the case when a new vdev has been added. |
1553 | Setting the threshold to a non-zero percentage will stop allocations | |
1554 | from being made to vdevs that aren't filled to the specified percentage | |
1555 | and allow lesser filled vdevs to acquire more allocations than they | |
2d815d95 AZ |
1556 | otherwise would under the old |
1557 | .Sy zfs_mg_alloc_failures | |
1558 | facility. | |
1559 | . | |
1560 | .It Sy zfs_ddt_data_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
cc99f275 | 1561 | If enabled, ZFS will place DDT data into the special allocation class. |
2d815d95 AZ |
1562 | . |
1563 | .It Sy zfs_user_indirect_is_special Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
1564 | If enabled, ZFS will place user data indirect blocks | |
cc99f275 | 1565 | into the special allocation class. |
2d815d95 | 1566 | . |
fdc2d303 | 1567 | .It Sy zfs_multihost_history Ns = Ns Sy 0 Pq uint |
b46be903 DS |
1568 | Historical statistics for this many latest multihost updates will be available |
1569 | in | |
2d815d95 AZ |
1570 | .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /multihost . |
1571 | . | |
ab8d9c17 | 1572 | .It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64 |
379ca9cf | 1573 | Used to control the frequency of multihost writes which are performed when the |
2d815d95 AZ |
1574 | .Sy multihost |
1575 | pool property is on. | |
1576 | This is one of the factors used to determine the | |
060f0226 | 1577 | length of the activity check during import. |
2d815d95 AZ |
1578 | .Pp |
1579 | The multihost write period is | |
12bd322d | 1580 | .Sy zfs_multihost_interval No / Sy leaf-vdevs . |
2d815d95 AZ |
1581 | On average a multihost write will be issued for each leaf vdev |
1582 | every | |
1583 | .Sy zfs_multihost_interval | |
1584 | milliseconds. | |
1585 | In practice, the observed period can vary with the I/O load | |
1586 | and this observed value is the delay which is stored in the uberblock. | |
1587 | . | |
1588 | .It Sy zfs_multihost_import_intervals Ns = Ns Sy 20 Pq uint | |
1589 | Used to control the duration of the activity test on import. | |
1590 | Smaller values of | |
1591 | .Sy zfs_multihost_import_intervals | |
1592 | will reduce the import time but increase | |
1593 | the risk of failing to detect an active pool. | |
1594 | The total activity check time is never allowed to drop below one second. | |
1595 | .Pp | |
060f0226 | 1596 | On import the activity check waits a minimum amount of time determined by |
12bd322d | 1597 | .Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals , |
2d815d95 AZ |
1598 | or the same product computed on the host which last had the pool imported, |
1599 | whichever is greater. | |
1600 | The activity check time may be further extended if the value of MMP | |
060f0226 | 1601 | delay found in the best uberblock indicates actual multihost updates happened |
2d815d95 AZ |
1602 | at longer intervals than |
1603 | .Sy zfs_multihost_interval . | |
1604 | A minimum of | |
a894ae75 | 1605 | .Em 100 ms |
2d815d95 AZ |
1606 | is enforced. |
1607 | .Pp | |
1608 | .Sy 0 No is equivalent to Sy 1 . | |
1609 | . | |
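.Pp
For example, with the default values the activity check on import waits at least
.Sy zfs_multihost_interval No \(mu Sy zfs_multihost_import_intervals ,
i.e. 1000 ms \(mu 20 = 20 s,
unless the product computed on the host which last imported the pool was larger.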
1610 | .It Sy zfs_multihost_fail_intervals Ns = Ns Sy 10 Pq uint | |
060f0226 OF |
1611 | Controls the behavior of the pool when multihost write failures or delays are |
1612 | detected. | |
2d815d95 AZ |
1613 | .Pp |
1614 | When | |
1615 | .Sy 0 , | |
1616 | multihost write failures or delays are ignored. | |
1617 | The failures will still be reported to the ZED which depending on | |
060f0226 OF |
1618 | its configuration may take action such as suspending the pool or offlining a |
1619 | device. | |
2d815d95 AZ |
1620 | .Pp |
1621 | Otherwise, the pool will be suspended if | |
12bd322d | 1622 | .Sy zfs_multihost_fail_intervals No \(mu Sy zfs_multihost_interval |
2d815d95 AZ |
1623 | milliseconds pass without a successful MMP write. |
1624 | This guarantees the activity test will see MMP writes if the pool is imported. | |
1625 | .Sy 1 No is equivalent to Sy 2 ; | |
1626 | this is necessary to prevent the pool from being suspended | |
060f0226 | 1627 | due to normal, small I/O latency variations. |
2d815d95 AZ |
1628 | . |
1629 | .It Sy zfs_no_scrub_io Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1630 | Set to disable scrub I/O. | |
1631 | This results in scrubs not actually scrubbing data and | |
83426735 | 1632 | simply doing a metadata crawl of the pool instead. |
2d815d95 AZ |
1633 | . |
1634 | .It Sy zfs_no_scrub_prefetch Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
83426735 | 1635 | Set to disable block prefetching for scrubs. |
2d815d95 AZ |
1636 | . |
1637 | .It Sy zfs_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1638 | Disable cache flush operations on disks when writing. | |
1639 | Setting this will cause pool corruption on power loss | |
1640 | if a volatile out-of-order write cache is enabled. | |
1641 | . | |
1642 | .It Sy zfs_nopwrite_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
1643 | Allow no-operation writes. | |
1644 | The occurrence of nopwrites will further depend on other pool properties | |
1645 | .Pq i.a. the checksumming and compression algorithms . | |
1646 | . | |
05b3eb6d | 1647 | .It Sy zfs_dmu_offset_next_sync Ns = Ns Sy 1 Ns | Ns 0 Pq int |
2d815d95 | 1648 | Enable forcing TXG sync to find holes. |
05b3eb6d | 1649 | When enabled, this forces ZFS to sync data when |
2d815d95 | 1650 | .Sy SEEK_HOLE No or Sy SEEK_DATA |
05b3eb6d BB |
1651 | flags are used, allowing holes in a file to be accurately reported. |
1652 | When disabled, holes will not be reported in recently dirtied files. |
2d815d95 | 1653 | . |
a894ae75 | 1654 | .It Sy zfs_pd_bytes_max Ns = Ns Sy 52428800 Ns B Po 50 MiB Pc Pq int |
2d815d95 AZ |
1655 | The number of bytes which should be prefetched during a pool traversal, like |
1656 | .Nm zfs Cm send | |
1657 | or other data crawling operations. | |
1658 | . | |
fdc2d303 | 1659 | .It Sy zfs_traverse_indirect_prefetch_limit Ns = Ns Sy 32 Pq uint |
2d815d95 AZ |
1660 | The number of blocks pointed to by an indirect (non-L0) block which should be |
1661 | prefetched during a pool traversal, like | |
1662 | .Nm zfs Cm send | |
1663 | or other data crawling operations. | |
1664 | . | |
ab8d9c17 | 1665 | .It Sy zfs_per_txg_dirty_frees_percent Ns = Ns Sy 30 Ns % Pq u64 |
2d815d95 AZ |
1666 | Control percentage of dirtied indirect blocks from frees allowed into one TXG. |
1667 | After this threshold is crossed, additional frees will wait until the next TXG. | |
b46be903 | 1668 | .Sy 0 No disables this throttle . |
2d815d95 AZ |
1669 | . |
1670 | .It Sy zfs_prefetch_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1671 | Disable predictive prefetch. | |
2d232ca8 AZ |
1672 | Note that it leaves "prescient" prefetch |
1673 | .Pq for, e.g., Nm zfs Cm send | |
2d815d95 AZ |
1674 | intact. |
1675 | Unlike predictive prefetch, prescient prefetch never issues I/O | |
1676 | that ends up not being needed, so it can't hurt performance. | |
1677 | . | |
1678 | .It Sy zfs_qat_checksum_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1679 | Disable QAT hardware acceleration for SHA256 checksums. | |
1680 | May be unset after the ZFS modules have been loaded to initialize the QAT | |
1681 | hardware as long as support is compiled in and the QAT driver is present. | |
1682 | . | |
1683 | .It Sy zfs_qat_compress_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1684 | Disable QAT hardware acceleration for gzip compression. | |
1685 | May be unset after the ZFS modules have been loaded to initialize the QAT | |
1686 | hardware as long as support is compiled in and the QAT driver is present. | |
1687 | . | |
1688 | .It Sy zfs_qat_encrypt_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1689 | Disable QAT hardware acceleration for AES-GCM encryption. | |
1690 | May be unset after the ZFS modules have been loaded to initialize the QAT | |
1691 | hardware as long as support is compiled in and the QAT driver is present. | |
1692 | . | |
ab8d9c17 | 1693 | .It Sy zfs_vnops_read_chunk_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 |
2d815d95 AZ |
1694 | Bytes to read per chunk. |
1695 | . | |
fdc2d303 | 1696 | .It Sy zfs_read_history Ns = Ns Sy 0 Pq uint |
2d815d95 AZ |
1697 | Historical statistics for this many latest reads will be available in |
1698 | .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /reads . | |
1699 | . | |
1700 | .It Sy zfs_read_history_hits Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
29714574 | 1701 | Include cache hits in read history. |
2d815d95 | 1702 | . |
ab8d9c17 | 1703 | .It Sy zfs_rebuild_max_segment Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq u64 |
9a49d3f3 BB |
1704 | Maximum read segment size to issue when sequentially resilvering a |
1705 | top-level vdev. | |
2d815d95 AZ |
1706 | . |
1707 | .It Sy zfs_rebuild_scrub_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
b2255edc BB |
1708 | Automatically start a pool scrub when the last active sequential resilver |
1709 | completes in order to verify the checksums of all blocks which have been | |
2d815d95 AZ |
1710 | resilvered. |
1711 | This is enabled by default and strongly recommended. | |
1712 | . | |
973934b9 | 1713 | .It Sy zfs_rebuild_vdev_limit Ns = Ns Sy 67108864 Ns B Po 64 MiB Pc Pq u64 |
2d815d95 | 1714 | Maximum amount of I/O that can be concurrently issued for a sequential |
b2255edc | 1715 | resilver per leaf device, given in bytes. |
2d815d95 AZ |
1716 | . |
1717 | .It Sy zfs_reconstruct_indirect_combinations_max Ns = Ns Sy 4096 Pq int | |
4589f3ae BB |
1718 | If an indirect split block contains more than this many possible unique |
1719 | combinations when being reconstructed, consider it too computationally | |
2d815d95 AZ |
1720 | expensive to check them all. |
1721 | Instead, try at most this many randomly selected | |
1722 | combinations each time the block is accessed. | |
1723 | This allows all segment copies to participate fairly | |
1724 | in the reconstruction when all combinations | |
4589f3ae | 1725 | cannot be checked and prevents repeated use of one bad copy. |
2d815d95 AZ |
1726 | . |
1727 | .It Sy zfs_recover Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1728 | Set to attempt to recover from fatal errors. | |
1729 | This should only be used as a last resort, | |
1730 | as it typically results in leaked space, or worse. | |
1731 | . | |
1732 | .It Sy zfs_removal_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
a737b415 AZ |
1733 | Ignore hard I/O errors during device removal. |
1734 | When set, if a device encounters a hard I/O error during the removal process | |
2d815d95 | 1735 | the removal will not be cancelled. |
7c9a4292 | 1736 | This can result in a normally recoverable block becoming permanently damaged |
2d815d95 AZ |
1737 | and is hence not recommended. |
1738 | This should only be used as a last resort when the | |
7c9a4292 | 1739 | pool cannot be returned to a healthy state prior to removing the device. |
2d815d95 | 1740 | . |
fdc2d303 | 1741 | .It Sy zfs_removal_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq uint |
53dce5ac MA |
1742 | This is used by the test suite so that it can ensure that certain actions |
1743 | happen while in the middle of a removal. | |
2d815d95 | 1744 | . |
fdc2d303 | 1745 | .It Sy zfs_remove_max_segment Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint |
53dce5ac | 1746 | The largest contiguous segment that we will attempt to allocate when removing |
2d815d95 AZ |
1747 | a device. |
1748 | If there is a performance problem with attempting to allocate large blocks, | |
1749 | consider decreasing this. | |
1750 | The default value is also the maximum. | |
1751 | . | |
1752 | .It Sy zfs_resilver_disable_defer Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1753 | Ignore the | |
1754 | .Sy resilver_defer | |
1755 | feature, causing an operation that would start a resilver to | |
1756 | immediately restart the one in progress. | |
1757 | . | |
fdc2d303 | 1758 | .It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint |
2d815d95 AZ |
1759 | Resilvers are processed by the sync thread. |
1760 | While resilvering, it will spend at least this much time | |
1761 | working on a resilver between TXG flushes. | |
1762 | . | |
1763 | .It Sy zfs_scan_ignore_errors Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1764 | If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub), | |
1765 | even if there were unrepairable errors. | |
1766 | Intended to be used during pool repair or recovery to | |
02638a30 | 1767 | stop resilvering when the pool is next imported. |
2d815d95 | 1768 | . |
fdc2d303 | 1769 | .It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint |
2d815d95 AZ |
1770 | Scrubs are processed by the sync thread. |
1771 | While scrubbing, it will spend at least this much time | |
1772 | working on a scrub between TXG flushes. | |
1773 | . | |
482eeef8 GA |
1774 | .It Sy zfs_scrub_error_blocks_per_txg Ns = Ns Sy 4096 Pq uint |
1775 | Error blocks to be scrubbed in one TXG. | |
1776 | . | |
fdc2d303 | 1777 | .It Sy zfs_scan_checkpoint_intval Ns = Ns Sy 7200 Ns s Po 2 hour Pc Pq uint |
2d815d95 AZ |
1778 | To preserve progress across reboots, the sequential scan algorithm periodically |
1779 | needs to stop metadata scanning and issue all the verification I/O to disk. | |
1780 | The frequency of this flushing is determined by this tunable. | |
1781 | . | |
fdc2d303 | 1782 | .It Sy zfs_scan_fill_weight Ns = Ns Sy 3 Pq uint |
2d815d95 AZ |
1783 | This tunable affects how scrub and resilver I/O segments are ordered. |
1784 | A higher number indicates that we care more about how filled in a segment is, | |
1785 | while a lower number indicates we care more about the size of the extent without | |
1786 | considering the gaps within a segment. | |
1787 | This value is only tunable upon module insertion. | |
b46be903 DS |
1788 | Changing the value afterwards will have no effect on scrub or resilver |
1789 | performance. | |
2d815d95 | 1790 | . |
fdc2d303 | 1791 | .It Sy zfs_scan_issue_strategy Ns = Ns Sy 0 Pq uint |
2d815d95 AZ |
1792 | Determines the order that data will be verified while scrubbing or resilvering: |
1793 | .Bl -tag -compact -offset 4n -width "a" | |
1794 | .It Sy 1 | |
1795 | Data will be verified as sequentially as possible, given the | |
1796 | amount of memory reserved for scrubbing | |
1797 | .Pq see Sy zfs_scan_mem_lim_fact . | |
1798 | This may improve scrub performance if the pool's data is very fragmented. | |
1799 | .It Sy 2 | |
1800 | The largest mostly-contiguous chunk of found data will be verified first. | |
1801 | By deferring scrubbing of small segments, we may later find adjacent data | |
1802 | to coalesce and increase the segment size. | |
1803 | .It Sy 0 | |
1804 | .No Use strategy Sy 1 No during normal verification | |
b46be903 | 1805 | .No and strategy Sy 2 No while taking a checkpoint . |
2d815d95 AZ |
1806 | .El |
1807 | . | |
1808 | .It Sy zfs_scan_legacy Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1809 | If unset, indicates that scrubs and resilvers will gather metadata in | |
1810 | memory before issuing sequential I/O. | |
1811 | Otherwise indicates that the legacy algorithm will be used, | |
1812 | where I/O is initiated as soon as it is discovered. | |
1813 | Unsetting will not affect scrubs or resilvers that are already in progress. | |
1814 | . | |
a894ae75 | 1815 | .It Sy zfs_scan_max_ext_gap Ns = Ns Sy 2097152 Ns B Po 2 MiB Pc Pq int |
2d815d95 AZ |
1816 | Sets the largest gap in bytes between scrub/resilver I/O operations |
1817 | that will still be considered sequential for sorting purposes. | |
1818 | Changing this value will not | |
d4a72f23 | 1819 | affect scrubs or resilvers that are already in progress. |
2d815d95 | 1820 | . |
fdc2d303 | 1821 | .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint |
d4a72f23 TC |
1822 | Maximum fraction of RAM used for I/O sorting by sequential scan algorithm. |
1823 | This tunable determines the hard limit for I/O sorting memory usage. | |
1824 | When the hard limit is reached we stop scanning metadata and start issuing | |
2d815d95 AZ |
1825 | data verification I/O. |
1826 | This is done until we get below the soft limit. | |
1827 | . | |
fdc2d303 | 1828 | .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint |
d4a72f23 | 1829 | The fraction of the hard limit used to determine the soft limit for I/O sorting |
2d815d95 AZ |
1830 | by the sequential scan algorithm. |
1831 | When we cross this limit from below no action is taken. | |
b46be903 DS |
1832 | When we cross this limit from above it is because we are issuing verification |
1833 | I/O. | |
2d815d95 AZ |
1834 | In this case (unless the metadata scan is done) we stop issuing verification I/O |
1835 | and start scanning metadata again until we get to the hard limit. | |
1836 | . | |
c85ac731 BB |
1837 | .It Sy zfs_scan_report_txgs Ns = Ns Sy 0 Ns | Ns 1 Pq uint |
1838 | When reporting resilver throughput and estimated completion time use the | |
1839 | performance observed over roughly the last | |
1840 | .Sy zfs_scan_report_txgs | |
1841 | TXGs. | |
1842 | When set to zero performance is calculated over the time between checkpoints. | |
1843 | . | |
2d815d95 AZ |
1844 | .It Sy zfs_scan_strict_mem_lim Ns = Ns Sy 0 Ns | Ns 1 Pq int |
1845 | Enforce tight memory limits on pool scans when a sequential scan is in progress. | |
1846 | When disabled, the memory limit may be exceeded by fast disks. | |
1847 | . | |
1848 | .It Sy zfs_scan_suspend_progress Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1849 | Freezes a scrub/resilver in progress without actually pausing it. | |
1850 | Intended for testing/debugging. | |
1851 | . | |
c0aea7cf | 1852 | .It Sy zfs_scan_vdev_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int |
d4a72f23 TC |
1853 | Maximum amount of data that can be concurrently issued at once for scrubs and |
1854 | resilvers per leaf device, given in bytes. | |
2d815d95 AZ |
1855 | . |
1856 | .It Sy zfs_send_corrupt_data Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
1857 | Allow sending of corrupt data (ignore read/checksum errors when sending). | |
1858 | . | |
1859 | .It Sy zfs_send_unmodified_spill_blocks Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
1860 | Include unmodified spill blocks in the send stream. | |
1861 | Under certain circumstances, previous versions of ZFS could incorrectly | |
1862 | remove the spill block from an existing object. | |
1863 | Including unmodified copies of the spill blocks creates a backwards-compatible | |
1864 | stream which will recreate a spill block if it was incorrectly removed. | |
1865 | . | |
fdc2d303 | 1866 | .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint |
2d815d95 AZ |
1867 | The fill fraction of the |
1868 | .Nm zfs Cm send | |
1869 | internal queues. | |
1870 | The fill fraction controls the timing with which internal threads are woken up. | |
1871 | . | |
fdc2d303 | 1872 | .It Sy zfs_send_no_prefetch_queue_length Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint |
2d815d95 AZ |
1873 | The maximum number of bytes allowed in |
1874 | .Nm zfs Cm send Ns 's | |
1875 | internal queues. | |
1876 | . | |
fdc2d303 | 1877 | .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint |
2d815d95 AZ |
1878 | The fill fraction of the |
1879 | .Nm zfs Cm send | |
1880 | prefetch queue. | |
1881 | The fill fraction controls the timing with which internal threads are woken up. | |
1882 | . | |
fdc2d303 | 1883 | .It Sy zfs_send_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint |
2d815d95 AZ |
1884 | The maximum number of bytes allowed that will be prefetched by |
1885 | .Nm zfs Cm send . | |
1886 | This value must be at least twice the maximum block size in use. | |
1887 | . | |
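.Pp
For example, if a dataset being sent uses
.Em 16 MiB
blocks, this tunable must be raised to at least
.Sy 33554432 Pq 32 MiB .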
fdc2d303 | 1888 | .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint |
2d815d95 AZ |
1889 | The fill fraction of the |
1890 | .Nm zfs Cm receive | |
1891 | queue. | |
1892 | The fill fraction controls the timing with which internal threads are woken up. | |
1893 | . | |
fdc2d303 | 1894 | .It Sy zfs_recv_queue_length Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq uint |
2d815d95 AZ |
1895 | The maximum number of bytes allowed in the |
1896 | .Nm zfs Cm receive | |
1897 | queue. | |
30af21b0 | 1898 | This value must be at least twice the maximum block size in use. |
2d815d95 | 1899 | . |
fdc2d303 | 1900 | .It Sy zfs_recv_write_batch_size Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint |
2d815d95 AZ |
1901 | The maximum amount of data, in bytes, that |
1902 | .Nm zfs Cm receive | |
1903 | will write in one DMU transaction. | |
1904 | This is the uncompressed size, even when receiving a compressed send stream. | |
1905 | This setting will not reduce the write size below a single block. | |
1906 | Capped at a maximum of | |
a894ae75 | 1907 | .Sy 32 MiB . |
2d815d95 | 1908 | . |
e8cf3a4f AP |
1909 | .It Sy zfs_recv_best_effort_corrective Ns = Ns Sy 0 Pq int |
1910 | When this variable is set to non-zero, a corrective receive: | |
1911 | .Bl -enum -compact -offset 4n -width "1." | |
1912 | .It | |
1913 | Does not enforce the restriction of source & destination snapshot GUIDs | |
1914 | matching. | |
1915 | .It | |
1916 | If there is an error during healing, the healing receive is not | |
1917 | terminated; instead, it moves on to the next record. | |
1918 | .El | |
1919 | . | |
fdc2d303 | 1920 | .It Sy zfs_override_estimate_recordsize Ns = Ns Sy 0 Ns | Ns 1 Pq uint |
30af21b0 | 1921 | Setting this variable overrides the default logic for estimating block |
2d815d95 AZ |
1922 | sizes when doing a |
1923 | .Nm zfs Cm send . | |
1924 | The default heuristic is that the average block size | |
1925 | will be the current recordsize. | |
1926 | Override this value if most data in your dataset is not of that size | |
1927 | and you require accurate zfs send size estimates. | |
1928 | . | |
fdc2d303 | 1929 | .It Sy zfs_sync_pass_deferred_free Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1930 | Flushing of data to disk is done in passes. |
1931 | Defer frees starting in this pass. | |
1932 | . | |
a894ae75 | 1933 | .It Sy zfs_spa_discard_memory_limit Ns = Ns Sy 16777216 Ns B Po 16 MiB Pc Pq int |
d2734cce SD |
1934 | Maximum memory used for prefetching a checkpoint's space map on each |
1935 | vdev while discarding the checkpoint. | |
2d815d95 | 1936 | . |
fdc2d303 | 1937 | .It Sy zfs_special_class_metadata_reserve_pct Ns = Ns Sy 25 Ns % Pq uint |
1f02ecc5 | 1938 | Only allow small data blocks to be allocated on the special and dedup vdev |
b46be903 DS |
1939 | types when the available free space percentage on these vdevs exceeds this |
1940 | value. | |
2d815d95 | 1941 | This ensures reserved space is available for pool metadata as the |
1f02ecc5 | 1942 | special vdevs approach capacity. |
2d815d95 | 1943 | . |
fdc2d303 | 1944 | .It Sy zfs_sync_pass_dont_compress Ns = Ns Sy 8 Pq uint |
2d815d95 | 1945 | Starting in this sync pass, disable compression (including of metadata). |
be89734a MA |
1946 | With the default setting, in practice, we don't have this many sync passes, |
1947 | so this has no effect. | |
2d815d95 | 1948 | .Pp |
be89734a | 1949 | The original intent was that disabling compression would help the sync passes |
2d815d95 AZ |
1950 | to converge. |
1951 | However, in practice, disabling compression increases | |
1952 | the average number of sync passes, because when we turn compression off, | |
1953 | many blocks' size will change, and thus we have to re-allocate | |
1954 | (not overwrite) them. | |
1955 | It also increases the number of | |
a894ae75 | 1956 | .Em 128 KiB |
2d815d95 AZ |
1957 | allocations (e.g. for indirect blocks and spacemaps) |
1958 | because these will not be compressed. | |
1959 | The | |
a894ae75 | 1960 | .Em 128 KiB |
2d815d95 | 1961 | allocations are especially detrimental to performance |
b46be903 DS |
1962 | on highly fragmented systems, which may have very few free segments of this |
1963 | size, | |
2d815d95 AZ |
1964 | and may need to load new metaslabs to satisfy these allocations. |
1965 | . | |
fdc2d303 | 1966 | .It Sy zfs_sync_pass_rewrite Ns = Ns Sy 2 Pq uint |
2d815d95 AZ |
1967 | Rewrite new block pointers starting in this pass. |
1968 | . | |
1969 | .It Sy zfs_sync_taskq_batch_pct Ns = Ns Sy 75 Ns % Pq int | |
1970 | This controls the number of threads used by | |
1971 | .Sy dp_sync_taskq . | |
1972 | The default value of | |
1973 | .Sy 75% | |
1974 | will create a maximum of one thread per CPU. | |
1975 | . | |
a894ae75 | 1976 | .It Sy zfs_trim_extent_bytes_max Ns = Ns Sy 134217728 Ns B Po 128 MiB Pc Pq uint |
2d815d95 | 1977 | Maximum size of TRIM command. |
b46be903 DS |
1978 | Larger ranges will be split into chunks no larger than this value before |
1979 | issuing. | |
2d815d95 | 1980 | . |
a894ae75 | 1981 | .It Sy zfs_trim_extent_bytes_min Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint |
2d815d95 AZ |
1982 | Minimum size of TRIM commands. |
1983 | TRIM ranges smaller than this will be skipped, | |
1984 | unless they're part of a larger range which was chunked. | |
1985 | This is done because it's common for these small TRIMs | |
1986 | to negatively impact overall performance. | |
1987 | . | |
1988 | .It Sy zfs_trim_metaslab_skip Ns = Ns Sy 0 Ns | Ns 1 Pq uint | |
1989 | Skip uninitialized metaslabs during the TRIM process. | |
b46be903 DS |
1990 | This option is useful for pools constructed from large thinly-provisioned |
1991 | devices | |
2d815d95 AZ |
1992 | where TRIM operations are slow. |
1993 | As a pool ages, an increasing fraction of the pool's metaslabs | |
1994 | will be initialized, progressively degrading the usefulness of this option. | |
1995 | This setting is stored when starting a manual TRIM and will | |
1b939560 | 1996 | persist for the duration of the requested TRIM. |
2d815d95 AZ |
1997 | . |
1998 | .It Sy zfs_trim_queue_limit Ns = Ns Sy 10 Pq uint | |
1999 | Maximum number of queued TRIMs outstanding per leaf vdev. | |
2000 | The number of concurrent TRIM commands issued to the device is controlled by | |
2001 | .Sy zfs_vdev_trim_min_active No and Sy zfs_vdev_trim_max_active . | |
2002 | . | |
2003 | .It Sy zfs_trim_txg_batch Ns = Ns Sy 32 Pq uint | |
2004 | The number of transaction groups' worth of frees which should be aggregated | |
2005 | before TRIM operations are issued to the device. | |
2006 | This setting represents a trade-off between issuing larger, | |
2007 | more efficient TRIM operations and the delay | |
2008 | before the recently trimmed space is available for use by the device. | |
2009 | .Pp | |
1b939560 | 2010 | Increasing this value will allow frees to be aggregated for a longer time. |
b46be903 DS |
2011 | This will result in larger TRIM operations and potentially increased memory | |
2012 | usage. | |
2d815d95 AZ |
2013 | Decreasing this value will have the opposite effect. |
2014 | The default of | |
2015 | .Sy 32 | |
2016 | was determined to be a reasonable compromise. | |
2017 | . | |
fdc2d303 | 2018 | .It Sy zfs_txg_history Ns = Ns Sy 0 Pq uint |
2d815d95 AZ |
2019 | Historical statistics for this many latest TXGs will be available in |
2020 | .Pa /proc/spl/kstat/zfs/ Ns Ao Ar pool Ac Ns Pa /TXGs . | |
2021 | . | |
fdc2d303 | 2022 | .It Sy zfs_txg_timeout Ns = Ns Sy 5 Ns s Pq uint |
b46be903 DS |
2023 | Flush dirty data to disk at least this often, in seconds (maximum TXG |
2024 | duration). | |
2d815d95 | 2025 | . |
fdc2d303 | 2026 | .It Sy zfs_vdev_aggregation_limit Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq uint |
2d815d95 AZ |
2027 | Max vdev I/O aggregation size. |
2028 | . | |
fdc2d303 | 2029 | .It Sy zfs_vdev_aggregation_limit_non_rotating Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint |
2d815d95 AZ |
2030 | Max vdev I/O aggregation size for non-rotating media. |
2031 | . | |
2d815d95 | 2032 | .It Sy zfs_vdev_mirror_rotating_inc Ns = Ns Sy 0 Pq int |
9f500936 | 2033 | A number by which the balancing algorithm increments the load calculation for |
2d815d95 AZ |
2034 | the purpose of selecting the least busy mirror member when an I/O operation |
2035 | immediately follows its predecessor on rotational vdevs. | |
2037 | . | |
2038 | .It Sy zfs_vdev_mirror_rotating_seek_inc Ns = Ns Sy 5 Pq int | |
9f500936 | 2039 | A number by which the balancing algorithm increments the load calculation for |
2d815d95 AZ |
2040 | the purpose of selecting the least busy mirror member when an I/O operation |
2041 | lacks locality as defined by | |
2042 | .Sy zfs_vdev_mirror_rotating_seek_offset . | |
2043 | For operations within this distance that do not immediately follow the previous | |
2044 | operation, the load calculation is incremented by half of this value. | |
2045 | . | |
a894ae75 | 2046 | .It Sy zfs_vdev_mirror_rotating_seek_offset Ns = Ns Sy 1048576 Ns B Po 1 MiB Pc Pq int |
2d815d95 AZ |
2047 | The maximum distance for the last queued I/O operation in which |
2048 | the balancing algorithm considers an operation to have locality. | |
2049 | .No See Sx ZFS I/O SCHEDULER . | |
2050 | . | |
2051 | .It Sy zfs_vdev_mirror_non_rotating_inc Ns = Ns Sy 0 Pq int | |
9f500936 | 2052 | A number by which the balancing algorithm increments the load calculation for |
2053 | the purpose of selecting the least busy mirror member on non-rotational vdevs | |
2d815d95 AZ |
2054 | when I/O operations do not immediately follow one another. |
2055 | . | |
2056 | .It Sy zfs_vdev_mirror_non_rotating_seek_inc Ns = Ns Sy 1 Pq int | |
9f500936 | 2057 | A number by which the balancing algorithm increments the load calculation for |
b46be903 DS |
2058 | the purpose of selecting the least busy mirror member when an I/O operation |
2059 | lacks | |
2d815d95 AZ |
2060 | locality as defined by the |
2061 | .Sy zfs_vdev_mirror_rotating_seek_offset . | |
2062 | For operations within this distance that do not immediately follow the previous | |
2063 | operation, the load calculation is incremented by half of this value. | |
2064 | . | |
fdc2d303 | 2065 | .It Sy zfs_vdev_read_gap_limit Ns = Ns Sy 32768 Ns B Po 32 KiB Pc Pq uint |
2d815d95 | 2066 | Aggregate read I/O operations if the on-disk gap between them is within this |
83426735 | 2067 | threshold. |
2d815d95 | 2068 | . |
fdc2d303 | 2069 | .It Sy zfs_vdev_write_gap_limit Ns = Ns Sy 4096 Ns B Po 4 KiB Pc Pq uint |
2d815d95 AZ |
2070 | Aggregate write I/O operations if the on-disk gap between them is within this |
2071 | threshold. | |
2072 | . | |
2073 | .It Sy zfs_vdev_raidz_impl Ns = Ns Sy fastest Pq string | |
2074 | Select the raidz parity implementation to use. | |
2075 | .Pp | |
2076 | Variants that don't depend on CPU-specific features | |
2077 | may be selected on module load, as they are supported on all systems. | |
2078 | The remaining options may only be set after the module is loaded, | |
2079 | as they are available only if the implementations are compiled in | |
2080 | and supported on the running system. | |
2081 | .Pp | |
2082 | Once the module is loaded, | |
2083 | .Pa /sys/module/zfs/parameters/zfs_vdev_raidz_impl | |
2084 | will show the available options, | |
2085 | with the currently selected one enclosed in square brackets. | |
2086 | .Pp | |
2087 | .TS | |
2088 | lb l l . | |
2089 | fastest selected by built-in benchmark | |
2090 | original original implementation | |
2091 | scalar scalar implementation | |
2092 | sse2 SSE2 instruction set 64-bit x86 | |
2093 | ssse3 SSSE3 instruction set 64-bit x86 | |
2094 | avx2 AVX2 instruction set 64-bit x86 | |
2095 | avx512f AVX512F instruction set 64-bit x86 | |
2096 | avx512bw AVX512F & AVX512BW instruction sets 64-bit x86 | |
2097 | aarch64_neon NEON Aarch64/64-bit ARMv8 | |
2098 | aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8 | |
2099 | powerpc_altivec Altivec PowerPC | |
2100 | .TE | |
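.Pp
For example, assuming the running system supports it, the AVX2 variant can be
selected at runtime (a sketch, not a recommendation):
.Bd -literal -compact
# cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
# echo avx2 > /sys/module/zfs/parameters/zfs_vdev_raidz_impl
.Ed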
2101 | . | |
2102 | .It Sy zfs_vdev_scheduler Pq charp | |
2103 | .Sy DEPRECATED . | |
0f402668 | 2104 | Prints a warning to the kernel log for compatibility.
2d815d95 | 2105 | . |
fdc2d303 | 2106 | .It Sy zfs_zevent_len_max Ns = Ns Sy 512 Pq uint |
032a213e | 2107 | Max event queue length. |
2d815d95 AZ |
2108 | Events in the queue can be viewed with |
2109 | .Xr zpool-events 8 . | |
2110 | . | |
2111 | .It Sy zfs_zevent_retain_max Ns = Ns Sy 2000 Pq int | |
2112 | Maximum recent zevent records to retain for duplicate checking. | |
2113 | Setting this to | |
2114 | .Sy 0 | |
2115 | disables duplicate detection. | |
2116 | . | |
a894ae75 | 2117 | .It Sy zfs_zevent_retain_expire_secs Ns = Ns Sy 900 Ns s Po 15 min Pc Pq int |
4f072827 | 2118 | Lifespan for a recent ereport that was retained for duplicate checking. |
2d815d95 AZ |
2119 | . |
2120 | .It Sy zfs_zil_clean_taskq_maxalloc Ns = Ns Sy 1048576 Pq int | |
2121 | The maximum number of taskq entries that are allowed to be cached. | |
2122 | When this limit is exceeded, transaction records (itxs)
2123 | will be cleaned synchronously. | |
2124 | . | |
2125 | .It Sy zfs_zil_clean_taskq_minalloc Ns = Ns Sy 1024 Pq int | |
a032ac4b BB |
2126 | The number of taskq entries that are pre-populated when the taskq is first |
2127 | created and are immediately available for use. | |
2d815d95 AZ |
2128 | . |
2129 | .It Sy zfs_zil_clean_taskq_nthr_pct Ns = Ns Sy 100 Ns % Pq int | |
2130 | This controls the number of threads used by | |
2131 | .Sy dp_zil_clean_taskq . | |
2132 | The default value of | |
2133 | .Sy 100% | |
2134 | will create a maximum of one thread per CPU.
2135 | . | |
fdc2d303 | 2136 | .It Sy zil_maxblocksize Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint |
2d815d95 AZ |
2137 | This sets the maximum block size used by the ZIL. |
2138 | On very fragmented pools, lowering this | |
a894ae75 | 2139 | .Pq typically to Sy 36 KiB |
2d815d95 AZ |
2140 | can improve performance.
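.Pp
For example, a sketch of applying the suggested value at runtime,
with 36 KiB expressed in bytes:
.Bd -literal -compact
# echo 36864 > /sys/module/zfs/parameters/zil_maxblocksize
.Ed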
2141 | . | |
0f740a4f AM |
2142 | .It Sy zil_min_commit_timeout Ns = Ns Sy 5000 Pq u64 |
2143 | This sets the minimum delay in nanoseconds that the ZIL is willing to spend
2144 | delaying a block commit while waiting for more records.
2145 | If ZIL writes arrive too quickly, the kernel may be unable to sleep for such a
2146 | short interval, increasing log latency above what is allowed by
2147 | .Sy zfs_commit_timeout_pct . | |
2148 | . | |
2d815d95 AZ |
2149 | .It Sy zil_nocacheflush Ns = Ns Sy 0 Ns | Ns 1 Pq int |
2150 | Disable the cache flush commands that are normally sent to disk by | |
2151 | the ZIL after an LWB write has completed. | |
2152 | Setting this will cause ZIL corruption on power loss | |
2153 | if a volatile out-of-order write cache is enabled. | |
2154 | . | |
2155 | .It Sy zil_replay_disable Ns = Ns Sy 0 Ns | Ns 1 Pq int | |
2156 | Disable intent logging replay. | |
2157 | Replay may be disabled in order to recover from a corrupted ZIL.
2158 | . | |
ab8d9c17 | 2159 | .It Sy zil_slog_bulk Ns = Ns Sy 786432 Ns B Po 768 KiB Pc Pq u64 |
1b7c1e5c GDN |
2160 | Limit SLOG write size per commit executed with synchronous priority. |
2161 | Any writes above that will be executed with lower (asynchronous) priority | |
2162 | to limit potential SLOG device abuse by a single active ZIL writer.
2d815d95 | 2163 | . |
361a7e82 JP |
2164 | .It Sy zfs_zil_saxattr Ns = Ns Sy 1 Ns | Ns 0 Pq int |
2165 | Setting this tunable to zero disables ZIL logging of new | |
2166 | .Sy xattr Ns = Ns Sy sa | |
2167 | records if the | |
2168 | .Sy org.openzfs:zilsaxattr | |
2169 | feature is enabled on the pool. | |
2170 | This would only be necessary to work around bugs in the ZIL logging or replay | |
2171 | code for this record type. | |
2172 | The tunable has no effect if the feature is disabled. | |
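.Pp
If such a workaround were needed, a sketch of disabling the logging of these
records at runtime:
.Bd -literal -compact
# echo 0 > /sys/module/zfs/parameters/zfs_zil_saxattr
.Ed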
2173 | . | |
fdc2d303 | 2174 | .It Sy zfs_embedded_slog_min_ms Ns = Ns Sy 64 Pq uint |
2d815d95 AZ |
2175 | Usually, one metaslab from each normal-class vdev is dedicated for use by |
2176 | the ZIL to log synchronous writes. | |
2177 | However, if there are fewer than | |
2178 | .Sy zfs_embedded_slog_min_ms | |
2179 | metaslabs in the vdev, this functionality is disabled. | |
b46be903 DS |
2180 | This ensures that we don't set aside an unreasonable amount of space for the |
2181 | ZIL. | |
2d815d95 | 2182 | . |
fdc2d303 | 2183 | .It Sy zstd_earlyabort_pass Ns = Ns Sy 1 Pq uint |
f375b23c RE |
2184 | Whether the heuristic for detecting incompressible data with zstd levels >= 3,
2185 | using LZ4 and zstd-1 passes, is enabled.
2186 | . | |
fdc2d303 | 2187 | .It Sy zstd_abort_size Ns = Ns Sy 131072 Pq uint |
f375b23c RE |
2188 | Minimum uncompressed size (inclusive) of a record before the early abort
2189 | heuristic will be attempted. | |
2190 | . | |
2d815d95 AZ |
2191 | .It Sy zio_deadman_log_all Ns = Ns Sy 0 Ns | Ns 1 Pq int |
2192 | If non-zero, the zio deadman will produce debugging messages | |
2193 | .Pq see Sy zfs_dbgmsg_enable | |
2194 | for all zios, rather than only for leaf zios possessing a vdev. | |
2195 | This is meant to be used by developers to gain | |
638dd5f4 | 2196 | diagnostic information for hang conditions which don't involve a mutex |
2d815d95 | 2197 | or other locking primitive: typically conditions in which a thread in |
638dd5f4 | 2198 | the zio pipeline is looping indefinitely. |
2d815d95 | 2199 | . |
a894ae75 | 2200 | .It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int |
2d815d95 AZ |
2201 | When an I/O operation takes more than this much time to complete, |
2202 | it's marked as slow. | |
2203 | Each slow operation causes a delay zevent. | |
2204 | Slow I/O counters can be seen with | |
2205 | .Nm zpool Cm status Fl s . | |
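.Pp
For example, for a hypothetical pool named
.Ar tank :
.Bd -literal -compact
# zpool status -s tank
.Ed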
2206 | . | |
2207 | .It Sy zio_dva_throttle_enabled Ns = Ns Sy 1 Ns | Ns 0 Pq int | |
2208 | Throttle block allocations in the I/O pipeline. | |
2209 | This allows for dynamic allocation distribution when devices are imbalanced. | |
e815485f | 2210 | When enabled, the maximum number of pending allocations per top-level vdev |
2d815d95 AZ |
2211 | is limited by |
2212 | .Sy zfs_vdev_queue_depth_pct . | |
2213 | . | |
5c006134 RM |
2214 | .It Sy zfs_xattr_compat Ns = Ns Sy 0 Ns | Ns 1 Pq int
2215 | Control the naming scheme used when setting new xattrs in the user namespace. | |
2216 | If | |
2217 | .Sy 0 | |
2218 | .Pq the default on Linux , | |
2219 | user namespace xattr names are prefixed with the namespace, to be backwards | |
2220 | compatible with previous versions of ZFS on Linux. | |
2221 | If | |
2222 | .Sy 1 | |
2223 | .Pq the default on Fx , | |
2224 | user namespace xattr names are not prefixed, to be backwards compatible with | |
2225 | previous versions of ZFS on illumos and | |
2226 | .Fx . | |
2227 | .Pp | |
2228 | Either naming scheme can be read on this and future versions of ZFS, regardless | |
2229 | of this tunable, but legacy ZFS on illumos or | |
2230 | .Fx | |
2231 | is unable to read user namespace xattrs written in the Linux format, and
2232 | legacy versions of ZFS on Linux are unable to read user namespace xattrs written | |
2233 | in the legacy ZFS format. | |
2234 | .Pp | |
2235 | An existing xattr with the alternate naming scheme is removed when overwriting | |
2236 | the xattr, so as not to accumulate duplicates.
2237 | . | |
2d815d95 AZ |
2238 | .It Sy zio_requeue_io_start_cut_in_line Ns = Ns Sy 0 Ns | Ns 1 Pq int |
2239 | Prioritize requeued I/O. | |
2240 | . | |
2241 | .It Sy zio_taskq_batch_pct Ns = Ns Sy 80 Ns % Pq uint | |
2242 | Percentage of online CPUs which will run a worker thread for I/O. | |
2243 | These workers are responsible for I/O work such as compression and | |
2244 | checksum calculations. | |
2245 | Fractional number of CPUs will be rounded down. | |
2246 | .Pp | |
2247 | The default value of | |
2248 | .Sy 80% | |
2249 | was chosen to avoid using all CPUs which can result in | |
2250 | latency issues and inconsistent application performance, | |
2251 | especially when slower compression and/or checksumming is enabled. | |
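.Pp
For example, on a system with 6 online CPUs the default of
.Sy 80%
yields 4 worker threads, since the fractional result of 4.8 is rounded down.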
2252 | . | |
2253 | .It Sy zio_taskq_batch_tpq Ns = Ns Sy 0 Pq uint | |
2254 | Number of worker threads per taskq. | |
2255 | Lower values improve I/O ordering and CPU utilization, | |
2256 | while higher values reduce lock contention.
2257 | .Pp | |
2258 | If | |
2259 | .Sy 0 , | |
2260 | generate a system-dependent value close to 6 threads per taskq. | |
2261 | . | |
2262 | .It Sy zvol_inhibit_dev Ns = Ns Sy 0 Ns | Ns 1 Pq uint | |
2263 | Do not create zvol device nodes. | |
2264 | This may slightly improve startup time on | |
83426735 | 2265 | systems with a very large number of zvols. |
2d815d95 AZ |
2266 | . |
2267 | .It Sy zvol_major Ns = Ns Sy 230 Pq uint | |
2268 | Major number for zvol block devices. | |
2269 | . | |
ab8d9c17 | 2270 | .It Sy zvol_max_discard_blocks Ns = Ns Sy 16384 Pq long |
2d815d95 AZ |
2271 | Discard (TRIM) operations done on zvols will be done in batches of this |
2272 | many blocks, where block size is determined by the | |
2273 | .Sy volblocksize | |
2274 | property of a zvol. | |
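.Pp
For example, for a zvol with a
.Sy volblocksize
of 16 KiB, the default batches each discard into 16384 \(mu 16 KiB = 256 MiB
chunks.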
2275 | . | |
a894ae75 | 2276 | .It Sy zvol_prefetch_bytes Ns = Ns Sy 131072 Ns B Po 128 KiB Pc Pq uint |
2d815d95 AZ |
2277 | When adding a zvol to the system, prefetch this many bytes |
2278 | from the start and end of the volume. | |
2279 | Prefetching these regions of the volume is desirable, | |
2280 | because they are likely to be accessed immediately by | |
2281 | .Xr blkid 8 | |
2282 | or the kernel partitioner. | |
2283 | . | |
2284 | .It Sy zvol_request_sync Ns = Ns Sy 0 Ns | Ns 1 Pq uint | |
2285 | When processing I/O requests for a zvol, submit them synchronously. | |
2286 | This effectively limits the queue depth to | |
2287 | .Em 1 | |
2288 | for each I/O submitter. | |
2289 | When unset, requests are handled asynchronously by a thread pool. | |
2290 | The number of requests which can be handled concurrently is controlled by | |
2291 | .Sy zvol_threads . | |
6f73d021 TH |
2292 | .Sy zvol_request_sync |
2293 | is ignored when running on a kernel that supports block multiqueue | |
2294 | .Pq Li blk-mq . | |
2d815d95 | 2295 | . |
6f73d021 TH |
2296 | .It Sy zvol_threads Ns = Ns Sy 0 Pq uint |
2297 | The number of system-wide threads to use for processing zvol block I/O.
2298 | If | |
2299 | .Sy 0 | |
2300 | (the default), then internally set
2301 | .Sy zvol_threads | |
2302 | to the number of CPUs present or 32 (whichever is greater). | |
2303 | . | |
2304 | .It Sy zvol_blk_mq_threads Ns = Ns Sy 0 Pq uint | |
2305 | The number of threads per zvol to use for queuing IO requests. | |
2306 | This parameter will only appear if your kernel supports | |
2307 | .Li blk-mq | |
2308 | and is only read and assigned to a zvol at zvol load time. | |
2309 | If | |
2310 | .Sy 0 | |
2311 | (the default), then internally set
2312 | .Sy zvol_blk_mq_threads | |
2313 | to the number of CPUs present. | |
2314 | . | |
2315 | .It Sy zvol_use_blk_mq Ns = Ns Sy 0 Ns | Ns 1 Pq uint | |
2316 | Set to | |
2317 | .Sy 1 | |
2318 | to use the | |
2319 | .Li blk-mq | |
2320 | API for zvols. | |
2321 | Set to | |
2322 | .Sy 0 | |
2323 | (the default) to use the legacy zvol APIs. | |
2324 | This setting can give better or worse zvol performance depending on | |
2325 | the workload. | |
2326 | This parameter will only appear if your kernel supports | |
2327 | .Li blk-mq | |
2328 | and is only read and assigned to a zvol at zvol load time. | |
2329 | . | |
2330 | .It Sy zvol_blk_mq_blocks_per_thread Ns = Ns Sy 8 Pq uint | |
2331 | If | |
2332 | .Sy zvol_use_blk_mq | |
2333 | is enabled, then process this number of | |
2334 | .Sy volblocksize Ns -sized blocks per zvol thread. | |
2335 | This tunable can be used to favor better performance for zvol reads (lower
2336 | values) or writes (higher values). | |
2337 | If set to | |
2338 | .Sy 0 , | |
2339 | then the zvol layer will process the maximum number of blocks | |
2340 | per thread that it can. | |
2341 | This parameter will only appear if your kernel supports | |
2342 | .Li blk-mq | |
2343 | and is only applied at each zvol's load time. | |
2344 | . | |
2345 | .It Sy zvol_blk_mq_queue_depth Ns = Ns Sy 0 Pq uint | |
2346 | The queue_depth value for the zvol | |
2347 | .Li blk-mq | |
2348 | interface. | |
2349 | This parameter will only appear if your kernel supports | |
2350 | .Li blk-mq | |
2351 | and is only applied at each zvol's load time. | |
2352 | If | |
2353 | .Sy 0 | |
2354 | (the default), then use the kernel's default queue depth.
2355 | Values are clamped to the kernel's | |
2356 | .Dv BLKDEV_MIN_RQ | |
2357 | and | |
2358 | .Dv BLKDEV_MAX_RQ Ns / Ns Dv BLKDEV_DEFAULT_RQ | |
2359 | limits. | |
2d815d95 AZ |
2360 | . |
2361 | .It Sy zvol_volmode Ns = Ns Sy 1 Pq uint | |
2362 | Defines the behavior of zvol block devices when
2363 | .Sy volmode Ns = Ns Sy default : | |
2364 | .Bl -tag -compact -offset 4n -width "a" | |
2365 | .It Sy 1 | |
2366 | .No equivalent to Sy full | |
2367 | .It Sy 2 | |
2368 | .No equivalent to Sy dev | |
2369 | .It Sy 3 | |
2370 | .No equivalent to Sy none | |
2371 | .El | |
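.Pp
For example, a sketch of making zvols created with
.Sy volmode Ns = Ns Sy default
behave as
.Sy dev :
.Bd -literal -compact
# echo 2 > /sys/module/zfs/parameters/zvol_volmode
.Ed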
945b4074 MZ |
2372 | . |
2373 | .It Sy zvol_enforce_quotas Ns = Ns Sy 0 Ns | Ns 1 Pq uint | |
2374 | Enable strict ZVOL quota enforcement. | |
2375 | Strict quota enforcement may have a performance impact.
2d815d95 AZ |
2376 | .El |
2377 | . | |
2378 | .Sh ZFS I/O SCHEDULER | |
2379 | ZFS issues I/O operations to leaf vdevs to satisfy and complete I/O requests.
2380 | The scheduler determines when and in what order those operations are issued. | |
2381 | The scheduler divides operations into five I/O classes, | |
e8b96c60 | 2382 | prioritized in the following order: sync read, sync write, async read, |
2d815d95 AZ |
2383 | async write, and scrub/resilver. |
2384 | Each queue defines the minimum and maximum number of concurrent operations | |
2385 | that may be issued to the device. | |
2386 | In addition, the device has an aggregate maximum, | |
2387 | .Sy zfs_vdev_max_active . | |
2388 | Note that the sum of the per-queue minima must not exceed the aggregate maximum. | |
2389 | If the sum of the per-queue maxima exceeds the aggregate maximum, | |
2390 | then the number of active operations may reach | |
2391 | .Sy zfs_vdev_max_active , | |
2392 | in which case no further operations will be issued, | |
2393 | regardless of whether all per-queue minima have been met. | |
2394 | .Pp | |
e8b96c60 | 2395 | For many physical devices, throughput increases with the number of |
2d815d95 AZ |
2396 | concurrent operations, but latency typically suffers. |
2397 | Furthermore, physical devices typically have a limit | |
2398 | at which more concurrent operations have no | |
e8b96c60 | 2399 | effect on throughput or can actually cause it to decrease. |
2d815d95 | 2400 | .Pp |
e8b96c60 | 2401 | The scheduler selects the next operation to issue by first looking for an |
2d815d95 AZ |
2402 | I/O class whose minimum has not been satisfied. |
2403 | Once all are satisfied and the aggregate maximum has not been hit, | |
2404 | the scheduler looks for classes whose maximum has not been satisfied. | |
2405 | Iteration through the I/O classes is done in the order specified above. | |
2406 | No further operations are issued | |
2407 | if the aggregate maximum number of concurrent operations has been hit, | |
b46be903 DS |
2408 | or if there are no operations queued for an I/O class that has not hit its |
2409 | maximum. | |
2d815d95 AZ |
2410 | Every time an I/O operation is queued or an operation completes, |
2411 | the scheduler looks for new operations to issue. | |
2412 | .Pp | |
2413 | In general, smaller | |
2414 | .Sy max_active Ns s | |
2415 | will lead to lower latency of synchronous operations. | |
2416 | Larger | |
2417 | .Sy max_active Ns s | |
2418 | may lead to higher overall throughput, depending on underlying storage. | |
2419 | .Pp | |
2420 | The ratio of the queues' | |
2421 | .Sy max_active Ns s | |
2422 | determines the balance of performance between reads, writes, and scrubs. | |
2423 | For example, increasing | |
2424 | .Sy zfs_vdev_scrub_max_active | |
2425 | will cause the scrub or resilver to complete more quickly, | |
2426 | but reads and writes to have higher latency and lower throughput. | |
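.Pp
For example, a sketch of biasing the scheduler toward a running scrub
(the value is illustrative, not a recommendation):
.Bd -literal -compact
# echo 4 > /sys/module/zfs/parameters/zfs_vdev_scrub_max_active
.Ed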
2427 | .Pp | |
2428 | All I/O classes have a fixed maximum number of outstanding operations, | |
2429 | except for the async write class. | |
2430 | Asynchronous writes represent the data that is committed to stable storage | |
2431 | during the syncing stage for transaction groups. | |
2432 | Transaction groups enter the syncing state periodically, | |
2433 | so the number of queued async writes will quickly burst up | |
2434 | and then bleed down to zero. | |
2435 | Rather than servicing them as quickly as possible, | |
2436 | the I/O scheduler changes the maximum number of active async write operations | |
2437 | according to the amount of dirty data in the pool. | |
2438 | Since both throughput and latency typically increase with the number of | |
e8b96c60 | 2439 | concurrent operations issued to physical devices, reducing the |
b46be903 DS |
2440 | burstiness in the number of simultaneous operations also stabilizes the |
2441 | response time of operations from other queues, in particular synchronous ones. | |
2d815d95 | 2442 | In broad strokes, the I/O scheduler will issue more concurrent operations |
b46be903 | 2443 | from the async write queue as there is more dirty data in the pool. |
2d815d95 AZ |
2444 | . |
2445 | .Ss Async Writes | |
e8b96c60 | 2446 | The number of concurrent operations issued for the async write I/O class |
2d815d95 AZ |
2447 | follows a piece-wise linear function defined by a few adjustable points: |
2448 | .Bd -literal | |
2449 | | o---------| <-- \fBzfs_vdev_async_write_max_active\fP | |
e8b96c60 MA |
2450 | ^ | /^ | |
2451 | | | / | | | |
2452 | active | / | | | |
2453 | I/O | / | | | |
2454 | count | / | | | |
2455 | | / | | | |
2d815d95 | 2456 | |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP |
e8b96c60 | 2457 | 0|_______^______|_________| |
2d815d95 | 2458 | 0% | | 100% of \fBzfs_dirty_data_max\fP |
e8b96c60 | 2459 | | | |
2d815d95 AZ |
2460 | | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP |
2461 | `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP | |
2462 | .Ed | |
2463 | .Pp | |
e8b96c60 MA |
2464 | Until the amount of dirty data exceeds a minimum percentage of the dirty |
2465 | data allowed in the pool, the I/O scheduler will limit the number of | |
2d815d95 AZ |
2466 | concurrent operations to the minimum. |
2467 | As that threshold is crossed, the number of concurrent operations issued | |
2468 | increases linearly to the maximum at the specified maximum percentage | |
2469 | of the dirty data allowed in the pool. | |
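.Pp
As a worked illustration, assume a minimum of 2 and a maximum of 10 active
operations, with the sloped region spanning 30% to 60% of
.Sy zfs_dirty_data_max
(plausible values, though not necessarily the defaults on a given system).
With dirty data at 45%, halfway up the slope, the scheduler would allow
2 + (10 \- 2) \(mu 0.5 = 6 concurrent async write operations.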
2470 | .Pp | |
e8b96c60 | 2471 | Ideally, the amount of dirty data on a busy pool will stay in the sloped |
2d815d95 AZ |
2472 | part of the function between |
2473 | .Sy zfs_vdev_async_write_active_min_dirty_percent | |
2474 | and | |
2475 | .Sy zfs_vdev_async_write_active_max_dirty_percent . | |
2476 | If it exceeds the maximum percentage, | |
2477 | this indicates that the rate of incoming data is | |
2478 | greater than the rate that the backend storage can handle. | |
2479 | In this case, we must further throttle incoming writes, | |
2480 | as described in the next section. | |
2481 | . | |
2482 | .Sh ZFS TRANSACTION DELAY | |
e8b96c60 MA |
2483 | We delay transactions when we've determined that the backend storage |
2484 | isn't able to accommodate the rate of incoming writes. | |
2d815d95 | 2485 | .Pp |
e8b96c60 | 2486 | If there is already a transaction waiting, we delay relative to when |
2d815d95 AZ |
2487 | that transaction will finish waiting. |
2488 | This way the calculated delay time | |
2489 | is independent of the number of threads concurrently executing transactions. | |
2490 | .Pp | |
2491 | If we are the only waiter, wait relative to when the transaction started, | |
2492 | rather than the current time. | |
2493 | This credits the transaction for "time already served", | |
2494 | e.g. reading indirect blocks. | |
2495 | .Pp | |
2496 | The minimum time for a transaction to take is calculated as | |
12bd322d | 2497 | .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms) |
2d815d95 AZ |
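.Pp
As a worked illustration, assume
.Sy zfs_delay_scale
is 500,000 ns (its usual default).
At the midpoint of the range, where
.Sy dirty No \- Sy min
equals
.Sy max No \- Sy dirty ,
the ratio is 1 and the minimum transaction time is 500,000 ns = 500 us,
the inverse of 2000 IOPS.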
2498 | .Pp |
2499 | The delay has two degrees of freedom that can be adjusted via tunables. | |
2500 | The percentage of dirty data at which we start to delay is defined by | |
2501 | .Sy zfs_delay_min_dirty_percent . | |
2502 | This should typically be at or above | |
2503 | .Sy zfs_vdev_async_write_active_max_dirty_percent , | |
2504 | so that we only start to delay after writing at full speed | |
2505 | has failed to keep up with the incoming write rate. | |
2506 | The scale of the curve is defined by | |
2507 | .Sy zfs_delay_scale . | |
b46be903 DS |
2508 | Roughly speaking, this variable determines the amount of delay at the midpoint |
2509 | of the curve. | |
2d815d95 | 2510 | .Bd -literal |
e8b96c60 MA |
2511 | delay |
2512 | 10ms +-------------------------------------------------------------*+ | |
2513 | | *| | |
2514 | 9ms + *+ | |
2515 | | *| | |
2516 | 8ms + *+ | |
2517 | | * | | |
2518 | 7ms + * + | |
2519 | | * | | |
2520 | 6ms + * + | |
2521 | | * | | |
2522 | 5ms + * + | |
2523 | | * | | |
2524 | 4ms + * + | |
2525 | | * | | |
2526 | 3ms + * + | |
2527 | | * | | |
2528 | 2ms + (midpoint) * + | |
2529 | | | ** | | |
2530 | 1ms + v *** + | |
2d815d95 | 2531 | | \fBzfs_delay_scale\fP ----------> ******** | |
e8b96c60 | 2532 | 0 +-------------------------------------*********----------------+ |
2d815d95 AZ |
2533 | 0% <- \fBzfs_dirty_data_max\fP -> 100% |
2534 | .Ed | |
2535 | .Pp | |
2536 | Note that, since the delay is added to the outstanding time remaining on the
2537 | most recent transaction, it is effectively the inverse of IOPS.
2538 | Here, the midpoint of | |
a894ae75 | 2539 | .Em 500 us |
2d815d95 AZ |
2540 | translates to |
2541 | .Em 2000 IOPS . | |
2542 | The shape of the curve | |
e8b96c60 | 2543 | was chosen such that small changes in the amount of accumulated dirty data |
2d815d95 AZ |
2544 | in the first three quarters of the curve yield relatively small differences |
2545 | in the amount of delay. | |
2546 | .Pp | |
e8b96c60 | 2547 | The effects can be easier to understand when the amount of delay is |
2d815d95 AZ |
2548 | represented on a logarithmic scale: |
2549 | .Bd -literal | |
e8b96c60 MA |
2550 | delay |
2551 | 100ms +-------------------------------------------------------------++ | |
2552 | + + | |
2553 | | | | |
2554 | + *+ | |
2555 | 10ms + *+ | |
2556 | + ** + | |
2557 | | (midpoint) ** | | |
2558 | + | ** + | |
2559 | 1ms + v **** + | |
2d815d95 | 2560 | + \fBzfs_delay_scale\fP ----------> ***** + |
e8b96c60 MA |
2561 | | **** | |
2562 | + **** + | |
2563 | 100us + ** + | |
2564 | + * + | |
2565 | | * | | |
2566 | + * + | |
2567 | 10us + * + | |
2568 | + + | |
2569 | | | | |
2570 | + + | |
2571 | +--------------------------------------------------------------+ | |
2d815d95 AZ |
2572 | 0% <- \fBzfs_dirty_data_max\fP -> 100% |
2573 | .Ed | |
2574 | .Pp | |
e8b96c60 | 2575 | Note here that only as the amount of dirty data approaches its limit does |
2d815d95 AZ |
2576 | the delay start to increase rapidly. |
2577 | The goal of a properly tuned system should be to keep the amount of dirty data | |
2578 | out of that range by first ensuring that the appropriate limits are set | |
2579 | for the I/O scheduler to reach optimal throughput on the back-end storage, | |
2580 | and then by changing the value of | |
2581 | .Sy zfs_delay_scale | |
2582 | to increase the steepness of the curve. |
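.Pp
For example, assuming the 500,000 ns default discussed above, a sketch of
doubling the midpoint delay to 1 ms
.Pq halving the midpoint rate to 1000 IOPS :
.Bd -literal -compact
# echo 1000000 > /sys/module/zfs/parameters/zfs_delay_scale
.Ed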