/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2018, Joyent, Inc.
 * Copyright (c) 2011, 2020, Delphix. All rights reserved.
 * Copyright (c) 2014, Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
 * Copyright (c) 2020, George Amanakis. All rights reserved.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2020, The FreeBSD Foundation [1]
 *
 * [1] Portions of this software were developed by Allan Jude
 * under sponsorship from the FreeBSD Foundation.
 */

/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory. This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about. Our cache is not so simple. At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them. Blocks are only evictable
 * when there are no external references active. This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space. In these circumstances we are unable to adjust the cache
 * size. To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss. Our model has a variable sized cache. It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size. So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict. In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes). We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists. The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table. It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock. Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * It is also possible to register a callback which is run when the
 * arc_meta_limit is reached and no buffers can be safely evicted. In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed and the arc_meta_limit honored. For example,
 * when using the ZPL each dentry holds a reference on a znode. These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 * - L2ARC buflist creation
 * - L2ARC buflist eviction
 * - L2ARC write completion, which walks L2ARC buflists
 * - ARC header destruction, as it removes from L2ARC buflists
 * - ARC header release, as it removes from L2ARC buflists
 */
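
/*
 * For illustration, taking a new reference via method 1 follows roughly
 * this pattern (a sketch only -- arc_read() additionally copes with I/O
 * in progress, L2-only headers, and more; "guid" and "bp" stand in for
 * the caller's pool guid and block pointer):
 *
 *	kmutex_t *hash_lock;
 *	arc_buf_hdr_t *hdr = buf_hash_find(guid, bp, &hash_lock);
 *	if (hdr != NULL) {
 *		... hdr fields are stable while hash_lock is held ...
 *		mutex_exit(hash_lock);
 *	}
 */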

/*
 * ARC operation:
 *
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot access the data directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
 *
 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
 * ability to store the physical data (b_pabd) associated with the DVA of the
 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
 * it will match its on-disk compression characteristics. This behavior can be
 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
 * compressed ARC functionality is disabled, the b_pabd will point to an
 * uncompressed version of the on-disk data.
 *
 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
 * consumer. The ARC will provide references to this data and will keep it
 * cached until it is no longer in use. The ARC caches only the L1ARC's
 * physical data block and will evict any arc_buf_t that is no longer
 * referenced. The amount of memory consumed by the arc_buf_ts' data buffers
 * can be seen via the "overhead_size" kstat.
 *
 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
 * compressed form. The typical case is that consumers will want uncompressed
 * data, and when that happens a new data buffer is allocated where the data is
 * decompressed for them to use. Currently the only consumer who wants
 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
 * with the arc_buf_hdr_t.
 *
 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's.
 * The first one is owned by a compressed send consumer (and therefore
 * references the same compressed data buffer as the arc_buf_hdr_t) and the
 * second could be used by any other consumer (and has its own uncompressed
 * copy of the data buffer).
 *
 *   arc_buf_hdr_t
 *   +-----------+
 *   | fields    |
 *   | common to |
 *   | L1- and   |
 *   | L2ARC     |
 *   +-----------+
 *   | l2arc_buf_hdr_t
 *   |           |
 *   +-----------+
 *   | l1arc_buf_hdr_t
 *   |           |              arc_buf_t
 *   | b_buf     +------------>+-----------+      arc_buf_t
 *   | b_pabd    +-+           |b_next     +---->+-----------+
 *   +-----------+ |           |-----------|     |b_next     +-->NULL
 *                 |           |b_comp = T |     +-----------+
 *                 |           |b_data     +-+   |b_comp = F |
 *                 |           +-----------+ |   |b_data     +-+
 *                 +->+------+               |   +-----------+ |
 *        compressed  |      |               |                 |
 *           data     |      |<--------------+                 | uncompressed
 *                    +------+  compressed,                    |     data
 *                                shared                       +-->+------+
 *                                 data                            |      |
 *                                                                 |      |
 *                                                                 +------+
 *
 * When a consumer reads a block, the ARC must first look to see if the
 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
 * arc_buf_t and either copies uncompressed data into a new data buffer from an
 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
 * hdr is compressed and the desired compression characteristics of the
 * arc_buf_t consumer (see the sketch following this comment). If the
 * arc_buf_t ends up sharing data with the arc_buf_hdr_t and both of them are
 * uncompressed then the arc_buf_t must be the last buffer in the hdr's b_buf
 * list, however a shared compressed buf can be anywhere in the hdr's list.
 *
 * The diagram below shows an example of an uncompressed ARC hdr that is
 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
 * the last element in the buf list):
 *
 *                arc_buf_hdr_t
 *                +-----------+
 *                |           |
 *                |           |
 *                |           |
 *                +-----------+
 * l2arc_buf_hdr_t|           |
 *                |           |
 *                +-----------+
 * l1arc_buf_hdr_t|           |
 *                |           |                 arc_buf_t    (shared)
 *                |    b_buf  +------------>+---------+      arc_buf_t
 *                |           |             |b_next   +---->+---------+
 *                |  b_pabd   +-+           |---------|     |b_next   +-->NULL
 *                +-----------+ |           |         |     +---------+
 *                              |           |b_data   +-+   |         |
 *                              |           +---------+ |   |b_data   +-+
 *                              +->+------+             |   +---------+ |
 *                                 |      |             |               |
 *                   uncompressed  |      |             |               |
 *                        data     +------+             |               |
 *                                    ^                 +->+------+     |
 *                                    |    uncompressed    |      |     |
 *                                    |        data        |      |     |
 *                                    |                    +------+     |
 *                                    +---------------------------------+
 *
 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
 * since the physical block is about to be rewritten. The new data contents
 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
 * it may compress the data before writing it to disk. The ARC will be called
 * with the transformed data and will bcopy the transformed on-disk block into
 * a newly allocated b_pabd. Writes are always done into buffers which have
 * either been loaned (and hence are new and don't have other readers) or
 * buffers which have been released (and hence have their own hdr, if there
 * were originally other readers of the buf's original hdr). This ensures that
 * the ARC only needs to update a single buf and its hdr after a write occurs.
 *
 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
 * that when compressed ARC is enabled, the L2ARC blocks are identical
 * to the on-disk block in the main data pool. This provides a significant
 * advantage since the ARC can leverage the bp's checksum when reading from the
 * L2ARC to determine if the contents are valid. However, if the compressed
 * ARC is disabled, then the L2ARC's block must be transformed to look
 * like the physical block in the main data pool before comparing the
 * checksum and determining its validity.
 *
 * The L1ARC has a slightly different system for storing encrypted data.
 * Raw (encrypted + possibly compressed) data has a few subtle differences from
 * data that is just compressed. The biggest difference is that it is not
 * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded.
 * The other difference is that encryption cannot be treated as a suggestion.
 * If a caller would prefer compressed data, but they actually wind up with
 * uncompressed data the worst thing that could happen is there might be a
 * performance hit. If the caller requests encrypted data, however, we must be
 * sure they actually get it or else secret information could be leaked. Raw
 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore,
 * may have both an encrypted version and a decrypted version of its data at
 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is
 * copied out of this header. To avoid complications with b_pabd, raw buffers
 * cannot be shared.
 */
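
/*
 * Expressed as pseudo-code, the read-side buffer setup decision is
 * roughly the following (an illustrative sketch only; the authoritative
 * logic lives in the buffer fill path later in this file and also
 * handles the encrypted cases described above):
 *
 *	if (consumer accepts compressed data && hdr is compressed)
 *		share the hdr's b_pabd with the new arc_buf_t;
 *	else if (another uncompressed arc_buf_t already exists)
 *		copy its contents into a new data buffer;
 *	else
 *		decompress (or copy) the hdr's b_pabd into a new buffer;
 */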

#include <sys/spa.h>
#include <sys/zio.h>
#include <sys/spa_impl.h>
#include <sys/zio_compress.h>
#include <sys/zio_checksum.h>
#include <sys/zfs_context.h>
#include <sys/arc.h>
#include <sys/zfs_refcount.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/dsl_pool.h>
#include <sys/multilist.h>
#include <sys/abd.h>
#include <sys/zil.h>
#include <sys/fm/fs/zfs.h>
#include <sys/callb.h>
#include <sys/kstat.h>
#include <sys/zthr.h>
#include <zfs_fletcher.h>
#include <sys/arc_impl.h>
#include <sys/trace_zfs.h>
#include <sys/aggsum.h>
#include <sys/wmsum.h>
#include <cityhash.h>
#include <sys/vdev_trim.h>
#include <sys/zfs_racct.h>
#include <sys/zstd/zstd.h>

#ifndef _KERNEL
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
boolean_t arc_watch = B_FALSE;
#endif

/*
 * This thread's job is to keep enough free memory in the system, by
 * calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves
 * arc_available_memory().
 */
static zthr_t *arc_reap_zthr;

/*
 * This thread's job is to keep arc_size under arc_c, by calling
 * arc_evict(), which improves arc_is_overflowing().
 */
static zthr_t *arc_evict_zthr;

static kmutex_t arc_evict_lock;
static boolean_t arc_evict_needed = B_FALSE;

/*
 * Count of bytes evicted since boot.
 */
static uint64_t arc_evict_count;

/*
 * List of arc_evict_waiter_t's, representing threads waiting for the
 * arc_evict_count to reach specific values.
 */
static list_t arc_evict_waiters;

/*
 * When arc_is_overflowing(), arc_get_data_impl() waits for this percent of
 * the requested amount of data to be evicted. For example, by default for
 * every 2KB that's evicted, 1KB of it may be "reused" by a new allocation.
 * Since this is above 100%, it ensures that progress is made towards getting
 * arc_size under arc_c. Since this is finite, it ensures that allocations
 * can still happen, even during the potentially long time that arc_size is
 * more than arc_c.
 */
int zfs_arc_eviction_pct = 200;

/*
 * The number of headers to evict in arc_evict_state_impl() before
 * dropping the sublist lock and evicting from another sublist. A lower
 * value means we're more likely to evict the "correct" header (i.e. the
 * oldest header in the arc state), but comes with higher overhead
 * (i.e. more invocations of arc_evict_state_impl()).
 */
int zfs_arc_evict_batch_limit = 10;

/* number of seconds before growing cache again */
int arc_grow_retry = 5;

/*
 * Minimum time between calls to arc_kmem_reap_soon().
 */
int arc_kmem_cache_reap_retry_ms = 1000;

/* shift of arc_c for calculating overflow limit in arc_get_data_impl */
int zfs_arc_overflow_shift = 8;

/* shift of arc_c for calculating both min and max arc_p */
int arc_p_min_shift = 4;

/* log2(fraction of arc to reclaim) */
int arc_shrink_shift = 7;

/* percent of pagecache to reclaim arc to */
#ifdef _KERNEL
uint_t zfs_arc_pc_percent = 0;
#endif

/*
 * log2(fraction of ARC which must be free to allow growing).
 * I.e. If there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
 *
 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
 * we will still not allow it to grow.
 */
int arc_no_grow_shift = 5;

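/*
 * A worked example (values illustrative only): with arc_c at 4GB and
 * arc_no_grow_shift at 5, growth is blocked once free memory falls
 * below 4GB >> 5 = 128MB.
 */
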
/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
static int arc_min_prefetch_ms;
static int arc_min_prescient_prefetch_ms;

/*
 * If this percent of memory is free, don't throttle.
 */
int arc_lotsfree_percent = 10;

/*
 * The arc has filled available memory and has now warmed up.
 */
boolean_t arc_warm;

/*
 * These tunables are for performance analysis.
 */
unsigned long zfs_arc_max = 0;
unsigned long zfs_arc_min = 0;
unsigned long zfs_arc_meta_limit = 0;
unsigned long zfs_arc_meta_min = 0;
unsigned long zfs_arc_dnode_limit = 0;
unsigned long zfs_arc_dnode_reduce_percent = 10;
int zfs_arc_grow_retry = 0;
int zfs_arc_shrink_shift = 0;
int zfs_arc_p_min_shift = 0;
int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */

/*
 * ARC dirty data constraints for arc_tempreserve_space() throttle.
 */
unsigned long zfs_arc_dirty_limit_percent = 50;	/* total dirty data limit */
unsigned long zfs_arc_anon_limit_percent = 25;	/* anon block dirty limit */
unsigned long zfs_arc_pool_dirty_percent = 20;	/* each pool's anon allowance */

/*
 * Enable or disable compressed arc buffers.
 */
int zfs_compressed_arc_enabled = B_TRUE;

/*
 * ARC will evict meta buffers that exceed arc_meta_limit. This
 * tunable makes arc_meta_limit adjustable for different workloads.
 */
unsigned long zfs_arc_meta_limit_percent = 75;

/*
 * Percentage that can be consumed by dnodes of ARC meta buffers.
 */
unsigned long zfs_arc_dnode_limit_percent = 10;

/*
 * These tunables are Linux specific.
 */
unsigned long zfs_arc_sys_free = 0;
int zfs_arc_min_prefetch_ms = 0;
int zfs_arc_min_prescient_prefetch_ms = 0;
int zfs_arc_p_dampener_disable = 1;
int zfs_arc_meta_prune = 10000;
int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
int zfs_arc_meta_adjust_restarts = 4096;
int zfs_arc_lotsfree_percent = 10;

13a4027a MM |
467 | arc_state_t ARC_anon; |
468 | arc_state_t ARC_mru; | |
469 | arc_state_t ARC_mru_ghost; | |
470 | arc_state_t ARC_mfu; | |
471 | arc_state_t ARC_mfu_ghost; | |
472 | arc_state_t ARC_l2c_only; | |
34dc7c2f | 473 | |
arc_stats_t arc_stats = {
	{ "hits",				KSTAT_DATA_UINT64 },
	{ "misses",				KSTAT_DATA_UINT64 },
	{ "demand_data_hits",			KSTAT_DATA_UINT64 },
	{ "demand_data_misses",			KSTAT_DATA_UINT64 },
	{ "demand_metadata_hits",		KSTAT_DATA_UINT64 },
	{ "demand_metadata_misses",		KSTAT_DATA_UINT64 },
	{ "prefetch_data_hits",			KSTAT_DATA_UINT64 },
	{ "prefetch_data_misses",		KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_hits",		KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_misses",		KSTAT_DATA_UINT64 },
	{ "mru_hits",				KSTAT_DATA_UINT64 },
	{ "mru_ghost_hits",			KSTAT_DATA_UINT64 },
	{ "mfu_hits",				KSTAT_DATA_UINT64 },
	{ "mfu_ghost_hits",			KSTAT_DATA_UINT64 },
	{ "deleted",				KSTAT_DATA_UINT64 },
	{ "mutex_miss",				KSTAT_DATA_UINT64 },
	{ "access_skip",			KSTAT_DATA_UINT64 },
	{ "evict_skip",				KSTAT_DATA_UINT64 },
	{ "evict_not_enough",			KSTAT_DATA_UINT64 },
	{ "evict_l2_cached",			KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible",			KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mfu",		KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mru",		KSTAT_DATA_UINT64 },
	{ "evict_l2_ineligible",		KSTAT_DATA_UINT64 },
	{ "evict_l2_skip",			KSTAT_DATA_UINT64 },
	{ "hash_elements",			KSTAT_DATA_UINT64 },
	{ "hash_elements_max",			KSTAT_DATA_UINT64 },
	{ "hash_collisions",			KSTAT_DATA_UINT64 },
	{ "hash_chains",			KSTAT_DATA_UINT64 },
	{ "hash_chain_max",			KSTAT_DATA_UINT64 },
	{ "p",					KSTAT_DATA_UINT64 },
	{ "c",					KSTAT_DATA_UINT64 },
	{ "c_min",				KSTAT_DATA_UINT64 },
	{ "c_max",				KSTAT_DATA_UINT64 },
	{ "size",				KSTAT_DATA_UINT64 },
	{ "compressed_size",			KSTAT_DATA_UINT64 },
	{ "uncompressed_size",			KSTAT_DATA_UINT64 },
	{ "overhead_size",			KSTAT_DATA_UINT64 },
	{ "hdr_size",				KSTAT_DATA_UINT64 },
	{ "data_size",				KSTAT_DATA_UINT64 },
	{ "metadata_size",			KSTAT_DATA_UINT64 },
	{ "dbuf_size",				KSTAT_DATA_UINT64 },
	{ "dnode_size",				KSTAT_DATA_UINT64 },
	{ "bonus_size",				KSTAT_DATA_UINT64 },
#if defined(COMPAT_FREEBSD11)
	{ "other_size",				KSTAT_DATA_UINT64 },
#endif
	{ "anon_size",				KSTAT_DATA_UINT64 },
	{ "anon_evictable_data",		KSTAT_DATA_UINT64 },
	{ "anon_evictable_metadata",		KSTAT_DATA_UINT64 },
	{ "mru_size",				KSTAT_DATA_UINT64 },
	{ "mru_evictable_data",			KSTAT_DATA_UINT64 },
	{ "mru_evictable_metadata",		KSTAT_DATA_UINT64 },
	{ "mru_ghost_size",			KSTAT_DATA_UINT64 },
	{ "mru_ghost_evictable_data",		KSTAT_DATA_UINT64 },
	{ "mru_ghost_evictable_metadata",	KSTAT_DATA_UINT64 },
	{ "mfu_size",				KSTAT_DATA_UINT64 },
	{ "mfu_evictable_data",			KSTAT_DATA_UINT64 },
	{ "mfu_evictable_metadata",		KSTAT_DATA_UINT64 },
	{ "mfu_ghost_size",			KSTAT_DATA_UINT64 },
	{ "mfu_ghost_evictable_data",		KSTAT_DATA_UINT64 },
	{ "mfu_ghost_evictable_metadata",	KSTAT_DATA_UINT64 },
	{ "l2_hits",				KSTAT_DATA_UINT64 },
	{ "l2_misses",				KSTAT_DATA_UINT64 },
	{ "l2_prefetch_asize",			KSTAT_DATA_UINT64 },
	{ "l2_mru_asize",			KSTAT_DATA_UINT64 },
	{ "l2_mfu_asize",			KSTAT_DATA_UINT64 },
	{ "l2_bufc_data_asize",			KSTAT_DATA_UINT64 },
	{ "l2_bufc_metadata_asize",		KSTAT_DATA_UINT64 },
	{ "l2_feeds",				KSTAT_DATA_UINT64 },
	{ "l2_rw_clash",			KSTAT_DATA_UINT64 },
	{ "l2_read_bytes",			KSTAT_DATA_UINT64 },
	{ "l2_write_bytes",			KSTAT_DATA_UINT64 },
	{ "l2_writes_sent",			KSTAT_DATA_UINT64 },
	{ "l2_writes_done",			KSTAT_DATA_UINT64 },
	{ "l2_writes_error",			KSTAT_DATA_UINT64 },
	{ "l2_writes_lock_retry",		KSTAT_DATA_UINT64 },
	{ "l2_evict_lock_retry",		KSTAT_DATA_UINT64 },
	{ "l2_evict_reading",			KSTAT_DATA_UINT64 },
	{ "l2_evict_l1cached",			KSTAT_DATA_UINT64 },
	{ "l2_free_on_write",			KSTAT_DATA_UINT64 },
	{ "l2_abort_lowmem",			KSTAT_DATA_UINT64 },
	{ "l2_cksum_bad",			KSTAT_DATA_UINT64 },
	{ "l2_io_error",			KSTAT_DATA_UINT64 },
	{ "l2_size",				KSTAT_DATA_UINT64 },
	{ "l2_asize",				KSTAT_DATA_UINT64 },
	{ "l2_hdr_size",			KSTAT_DATA_UINT64 },
	{ "l2_log_blk_writes",			KSTAT_DATA_UINT64 },
	{ "l2_log_blk_avg_asize",		KSTAT_DATA_UINT64 },
	{ "l2_log_blk_asize",			KSTAT_DATA_UINT64 },
	{ "l2_log_blk_count",			KSTAT_DATA_UINT64 },
	{ "l2_data_to_meta_ratio",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_success",			KSTAT_DATA_UINT64 },
	{ "l2_rebuild_unsupported",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_io_errors",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_dh_errors",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_cksum_lb_errors",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_lowmem",			KSTAT_DATA_UINT64 },
	{ "l2_rebuild_size",			KSTAT_DATA_UINT64 },
	{ "l2_rebuild_asize",			KSTAT_DATA_UINT64 },
	{ "l2_rebuild_bufs",			KSTAT_DATA_UINT64 },
	{ "l2_rebuild_bufs_precached",		KSTAT_DATA_UINT64 },
	{ "l2_rebuild_log_blks",		KSTAT_DATA_UINT64 },
	{ "memory_throttle_count",		KSTAT_DATA_UINT64 },
	{ "memory_direct_count",		KSTAT_DATA_UINT64 },
	{ "memory_indirect_count",		KSTAT_DATA_UINT64 },
	{ "memory_all_bytes",			KSTAT_DATA_UINT64 },
	{ "memory_free_bytes",			KSTAT_DATA_UINT64 },
	{ "memory_available_bytes",		KSTAT_DATA_INT64 },
	{ "arc_no_grow",			KSTAT_DATA_UINT64 },
	{ "arc_tempreserve",			KSTAT_DATA_UINT64 },
	{ "arc_loaned_bytes",			KSTAT_DATA_UINT64 },
	{ "arc_prune",				KSTAT_DATA_UINT64 },
	{ "arc_meta_used",			KSTAT_DATA_UINT64 },
	{ "arc_meta_limit",			KSTAT_DATA_UINT64 },
	{ "arc_dnode_limit",			KSTAT_DATA_UINT64 },
	{ "arc_meta_max",			KSTAT_DATA_UINT64 },
	{ "arc_meta_min",			KSTAT_DATA_UINT64 },
	{ "async_upgrade_sync",			KSTAT_DATA_UINT64 },
	{ "demand_hit_predictive_prefetch",	KSTAT_DATA_UINT64 },
	{ "demand_hit_prescient_prefetch",	KSTAT_DATA_UINT64 },
	{ "arc_need_free",			KSTAT_DATA_UINT64 },
	{ "arc_sys_free",			KSTAT_DATA_UINT64 },
	{ "arc_raw_size",			KSTAT_DATA_UINT64 },
	{ "cached_only_in_progress",		KSTAT_DATA_UINT64 },
	{ "abd_chunk_waste_size",		KSTAT_DATA_UINT64 },
};

arc_sums_t arc_sums;

#define ARCSTAT_MAX(stat, val) {					\
	uint64_t m;							\
	while ((val) > (m = arc_stats.stat.value.ui64) &&		\
	    (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val))))	\
		continue;						\
}

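/*
 * Note that ARCSTAT_MAX() is a lock-free maximum update: the
 * compare-and-swap loop retries until either another thread has already
 * stored a value >= val or our swap succeeds. For example,
 * buf_hash_insert() below uses ARCSTAT_MAX(arcstat_hash_chain_max, i)
 * to record the longest hash chain ever observed.
 */
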
/*
 * We define a macro to allow ARC hits/misses to be easily broken down by
 * two separate conditions, giving a total of four different subtypes for
 * each of hits and misses (so eight statistics total).
 */
#define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
	if (cond1) {							\
		if (cond2) {						\
			ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
		} else {						\
			ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
		}							\
	} else {							\
		if (cond2) {						\
			ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
		} else {						\
			ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
		}							\
	}

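/*
 * For example, a demand hit on a metadata buffer could be counted with
 * (an illustrative call; the real call sites appear later in this file):
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
 *
 * which bumps exactly one of the four arcstat_*_hits statistics.
 */
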
/*
 * This macro allows us to use kstats as floating averages. Each time we
 * update this kstat, we first factor it and the update value by
 * ARCSTAT_F_AVG_FACTOR to shrink the new value's contribution to the overall
 * average. This macro assumes that integer loads and stores are atomic, but
 * is not safe for multiple writers updating the kstat in parallel (only the
 * last writer's update will remain).
 */
#define ARCSTAT_F_AVG_FACTOR	3
#define ARCSTAT_F_AVG(stat, value) \
	do { \
		uint64_t x = ARCSTAT(stat); \
		x = x - x / ARCSTAT_F_AVG_FACTOR + \
		    (value) / ARCSTAT_F_AVG_FACTOR; \
		ARCSTAT(stat) = x; \
	} while (0)

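/*
 * With ARCSTAT_F_AVG_FACTOR == 3 this behaves like an exponential moving
 * average, new = old * 2/3 + value / 3, so the reported figure is
 * dominated by roughly the last few updates.
 */
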
kstat_t *arc_ksp;

/*
 * There are several ARC variables that are critical to export as kstats --
 * but we don't want to have to grovel around in the kstat whenever we wish to
 * manipulate them. For these variables, we therefore define them to be in
 * terms of the statistic variable. This assures that we are not introducing
 * the possibility of inconsistency by having shadow copies of the variables,
 * while still allowing the code to be readable.
 */
#define arc_tempreserve	ARCSTAT(arcstat_tempreserve)
#define arc_loaned_bytes	ARCSTAT(arcstat_loaned_bytes)
#define arc_meta_limit	ARCSTAT(arcstat_meta_limit) /* max size for metadata */
/* max size for dnodes */
#define arc_dnode_size_limit	ARCSTAT(arcstat_dnode_limit)
#define arc_meta_min	ARCSTAT(arcstat_meta_min) /* min size for metadata */
#define arc_need_free	ARCSTAT(arcstat_need_free) /* waiting to be evicted */

hrtime_t arc_growtime;
list_t arc_prune_list;
kmutex_t arc_prune_mtx;
taskq_t *arc_prune_taskq;

#define GHOST_STATE(state)	\
	((state) == arc_mru_ghost || (state) == arc_mfu_ghost ||	\
	(state) == arc_l2c_only)

#define HDR_IN_HASH_TABLE(hdr)	((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
#define HDR_IO_IN_PROGRESS(hdr)	((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
#define HDR_IO_ERROR(hdr)	((hdr)->b_flags & ARC_FLAG_IO_ERROR)
#define HDR_PREFETCH(hdr)	((hdr)->b_flags & ARC_FLAG_PREFETCH)
#define HDR_PRESCIENT_PREFETCH(hdr)	\
	((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
#define HDR_COMPRESSION_ENABLED(hdr)	\
	((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)

#define HDR_L2CACHE(hdr)	((hdr)->b_flags & ARC_FLAG_L2CACHE)
#define HDR_L2_READING(hdr)	\
	(((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) &&	\
	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
#define HDR_L2_WRITING(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITING)
#define HDR_L2_EVICTED(hdr)	((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
#define HDR_L2_WRITE_HEAD(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
#define HDR_PROTECTED(hdr)	((hdr)->b_flags & ARC_FLAG_PROTECTED)
#define HDR_NOAUTH(hdr)		((hdr)->b_flags & ARC_FLAG_NOAUTH)
#define HDR_SHARED_DATA(hdr)	((hdr)->b_flags & ARC_FLAG_SHARED_DATA)

#define HDR_ISTYPE_METADATA(hdr)	\
	((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
#define HDR_ISTYPE_DATA(hdr)	(!HDR_ISTYPE_METADATA(hdr))

#define HDR_HAS_L1HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
#define HDR_HAS_L2HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
#define HDR_HAS_RABD(hdr)	\
	(HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) &&	\
	(hdr)->b_crypt_hdr.b_rabd != NULL)
#define HDR_ENCRYPTED(hdr)	\
	(HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
#define HDR_AUTHENTICATED(hdr)	\
	(HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))

/* For storing compression mode in b_flags */
#define HDR_COMPRESS_OFFSET	(highbit64(ARC_FLAG_COMPRESS_0) - 1)

#define HDR_GET_COMPRESS(hdr)	((enum zio_compress)BF32_GET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
#define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));

#define ARC_BUF_LAST(buf)	((buf)->b_next == NULL)
#define ARC_BUF_SHARED(buf)	((buf)->b_flags & ARC_BUF_FLAG_SHARED)
#define ARC_BUF_COMPRESSED(buf)	((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
#define ARC_BUF_ENCRYPTED(buf)	((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)

/*
 * Other sizes
 */

#define HDR_FULL_CRYPT_SIZE	((int64_t)sizeof (arc_buf_hdr_t))
#define HDR_FULL_SIZE		((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr))
#define HDR_L2ONLY_SIZE		((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))

/*
 * Hash table routines
 */

#define BUF_LOCKS 2048
typedef struct buf_hash_table {
	uint64_t ht_mask;
	arc_buf_hdr_t **ht_table;
	kmutex_t ht_locks[BUF_LOCKS] ____cacheline_aligned;
} buf_hash_table_t;

static buf_hash_table_t buf_hash_table;

#define BUF_HASH_INDEX(spa, dva, birth) \
	(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
#define BUF_HASH_LOCK(idx) (&buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define HDR_LOCK(hdr) \
	(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))

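/*
 * E.g. a header's identity (spa, dva, birth) hashes to an index into
 * ht_table, and the low bits of that index select one of the BUF_LOCKS
 * mutexes, so two headers contend on the same mutex only when their
 * indices agree modulo BUF_LOCKS. HDR_LOCK(hdr) recomputes the lock
 * from the header's own identity.
 */
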
uint64_t zfs_crc64_table[256];

/*
 * Level 2 ARC
 */

#define L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define L2ARC_HEADROOM		2			/* num of writes */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define L2ARC_HEADROOM_BOOST	200
#define L2ARC_FEED_SECS		1		/* caching interval secs */
#define L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of the states has two types: data and metadata.
 */
#define L2ARC_FEED_TYPES	4

/* L2ARC Performance Tunables */
unsigned long l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
unsigned long l2arc_headroom = L2ARC_HEADROOM;		/* # of dev writes */
unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
unsigned long l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
int l2arc_meta_percent = 33;			/* limit on headers size */

/*
 * L2ARC Internals
 */
static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */

typedef struct l2arc_read_callback {
	arc_buf_hdr_t		*l2rcb_hdr;	/* read header */
	blkptr_t		l2rcb_bp;	/* original blkptr */
	zbookmark_phys_t	l2rcb_zb;	/* original bookmark */
	int			l2rcb_flags;	/* original flags */
	abd_t			*l2rcb_abd;	/* temporary buffer */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
	/* protected by l2arc_free_on_write_mtx */
	abd_t		*l2df_abd;
	size_t		l2df_size;
	arc_buf_contents_t l2df_type;
	list_node_t	l2df_list_node;
} l2arc_data_free_t;

typedef enum arc_fill_flags {
	ARC_FILL_LOCKED		= 1 << 0, /* hdr lock is held */
	ARC_FILL_COMPRESSED	= 1 << 1, /* fill with compressed data */
	ARC_FILL_ENCRYPTED	= 1 << 2, /* fill with encrypted data */
	ARC_FILL_NOAUTH		= 1 << 3, /* don't attempt to authenticate */
	ARC_FILL_IN_PLACE	= 1 << 4  /* fill in place (special case) */
} arc_fill_flags_t;

typedef enum arc_ovf_level {
	ARC_OVF_NONE,			/* ARC within target size. */
	ARC_OVF_SOME,			/* ARC is slightly overflowed. */
	ARC_OVF_SEVERE			/* ARC is severely overflowed. */
} arc_ovf_level_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static kmutex_t l2arc_rebuild_thr_lock;
static kcondvar_t l2arc_rebuild_thr_cv;

enum arc_hdr_alloc_flags {
	ARC_HDR_ALLOC_RDATA = 0x1,
	ARC_HDR_DO_ADAPT = 0x2,
};

static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *, boolean_t);
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *);
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *, boolean_t);
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *);
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *);
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag);
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int);
static void arc_access(arc_buf_hdr_t *, kmutex_t *);
static void arc_buf_watch(arc_buf_t *);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);
static void l2arc_do_free_on_write(void);
static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
    boolean_t state_only);

#define l2arc_hdr_arcstats_increment(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE)
#define l2arc_hdr_arcstats_decrement(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE)
#define l2arc_hdr_arcstats_increment_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE)
#define l2arc_hdr_arcstats_decrement_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE)

/*
 * l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
 * metadata and data are cached from ARC into L2ARC.
 */
int l2arc_mfuonly = 0;

/*
 * L2ARC TRIM
 * l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of
 *		the current write size (l2arc_write_max) we should TRIM if we
 *		have filled the device. It is defined as a percentage of the
 *		write size. If set to 100 we trim twice the space required to
 *		accommodate upcoming writes. A minimum of 64MB will be trimmed.
 *		It also enables TRIM of the whole L2ARC device upon creation or
 *		addition to an existing pool or if the header of the device is
 *		invalid upon importing a pool or onlining a cache device. The
 *		default is 0, which disables TRIM on L2ARC altogether as it can
 *		put significant stress on the underlying storage devices. This
 *		will vary depending on how well the specific device handles
 *		these commands.
 */
unsigned long l2arc_trim_ahead = 0;

/*
 * Performance tuning of L2ARC persistence:
 *
 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding
 *		an L2ARC device (either at pool import or later) will attempt
 *		to rebuild L2ARC buffer contents.
 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls
 *		whether log blocks are written to the L2ARC device. If the
 *		L2ARC device is less than 1GB, the amount of data
 *		l2arc_evict() evicts is significant compared to the amount of
 *		restored L2ARC data. In this case do not write log blocks in
 *		L2ARC in order not to waste space.
 */
int l2arc_rebuild_enabled = B_TRUE;
unsigned long l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;

/* L2ARC persistence rebuild control routines. */
void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
static void l2arc_dev_rebuild_thread(void *arg);
static int l2arc_rebuild(l2arc_dev_t *dev);

/* L2ARC persistence read I/O routines. */
static int l2arc_dev_hdr_read(l2arc_dev_t *dev);
static int l2arc_log_blk_read(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp,
    l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
    zio_t *this_io, zio_t **next_io);
static zio_t *l2arc_log_blk_fetch(vdev_t *vd,
    const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb);
static void l2arc_log_blk_fetch_abort(zio_t *zio);

/* L2ARC persistence block restoration routines. */
static void l2arc_log_blk_restore(l2arc_dev_t *dev,
    const l2arc_log_blk_phys_t *lb, uint64_t lb_asize);
static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le,
    l2arc_dev_t *dev);

/* L2ARC persistence write I/O routines. */
static void l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio,
    l2arc_write_callback_t *cb);

/* L2ARC persistence auxiliary routines. */
boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *lbp);
static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev,
    const arc_buf_hdr_t *ab);
boolean_t l2arc_range_check_overlap(uint64_t bottom,
    uint64_t top, uint64_t check);
static void l2arc_blk_fetch_done(zio_t *zio);
static inline uint64_t
    l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev);

/*
 * We use Cityhash for this. It's fast, and has good hash properties without
 * requiring any large static buffers.
 */
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
	return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
}

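/*
 * Note that buf_hash() hashes exactly the (spa, dva, birth) triple that
 * HDR_EQUAL() below compares, so two headers with equal identities
 * always land in the same hash chain.
 */
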
#define HDR_EMPTY(hdr)						\
	((hdr)->b_dva.dva_word[0] == 0 &&			\
	(hdr)->b_dva.dva_word[1] == 0)

#define HDR_EMPTY_OR_LOCKED(hdr)				\
	(HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr)))

#define HDR_EQUAL(spa, dva, birth, hdr)				\
	((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) &&	\
	((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) &&	\
	((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)

static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
	hdr->b_dva.dva_word[0] = 0;
	hdr->b_dva.dva_word[1] = 0;
	hdr->b_birth = 0;
}

static arc_buf_hdr_t *
buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
{
	const dva_t *dva = BP_IDENTITY(bp);
	uint64_t birth = BP_PHYSICAL_BIRTH(bp);
	uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *hdr;

	mutex_enter(hash_lock);
	for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
	    hdr = hdr->b_hash_next) {
		if (HDR_EQUAL(spa, dva, birth, hdr)) {
			*lockp = hash_lock;
			return (hdr);
		}
	}
	mutex_exit(hash_lock);
	*lockp = NULL;
	return (NULL);
}

/*
 * Insert an entry into the hash table. If there is already an element
 * equal to elem in the hash table, then the already existing element
 * will be returned and the new element will not be inserted.
 * Otherwise returns NULL.
 * If lockp == NULL, the caller is assumed to already hold the hash lock.
 */
static arc_buf_hdr_t *
buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
{
	uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *fhdr;
	uint32_t i;

	ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
	ASSERT(hdr->b_birth != 0);
	ASSERT(!HDR_IN_HASH_TABLE(hdr));

	if (lockp != NULL) {
		*lockp = hash_lock;
		mutex_enter(hash_lock);
	} else {
		ASSERT(MUTEX_HELD(hash_lock));
	}

	for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
	    fhdr = fhdr->b_hash_next, i++) {
		if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
			return (fhdr);
	}

	hdr->b_hash_next = buf_hash_table.ht_table[idx];
	buf_hash_table.ht_table[idx] = hdr;
	arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);

	/* collect some hash table performance data */
	if (i > 0) {
		ARCSTAT_BUMP(arcstat_hash_collisions);
		if (i == 1)
			ARCSTAT_BUMP(arcstat_hash_chains);

		ARCSTAT_MAX(arcstat_hash_chain_max, i);
	}
	uint64_t he = atomic_inc_64_nv(
	    &arc_stats.arcstat_hash_elements.value.ui64);
	ARCSTAT_MAX(arcstat_hash_elements_max, he);

	return (NULL);
}

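/*
 * A typical insert therefore looks like this sketch (illustrative only;
 * the real call sites appear later in this file):
 *
 *	kmutex_t *hash_lock;
 *	arc_buf_hdr_t *exists = buf_hash_insert(hdr, &hash_lock);
 *	if (exists != NULL) {
 *		... lost the race: use "exists" and discard "hdr" ...
 *	}
 *	... hash_lock is held in either case ...
 */
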
1049 | static void | |
2a432414 | 1050 | buf_hash_remove(arc_buf_hdr_t *hdr) |
34dc7c2f | 1051 | { |
2a432414 GW |
1052 | arc_buf_hdr_t *fhdr, **hdrp; |
1053 | uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); | |
34dc7c2f BB |
1054 | |
1055 | ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx))); | |
2a432414 | 1056 | ASSERT(HDR_IN_HASH_TABLE(hdr)); |
34dc7c2f | 1057 | |
2a432414 GW |
1058 | hdrp = &buf_hash_table.ht_table[idx]; |
1059 | while ((fhdr = *hdrp) != hdr) { | |
d3c2ae1c | 1060 | ASSERT3P(fhdr, !=, NULL); |
2a432414 | 1061 | hdrp = &fhdr->b_hash_next; |
34dc7c2f | 1062 | } |
2a432414 GW |
1063 | *hdrp = hdr->b_hash_next; |
1064 | hdr->b_hash_next = NULL; | |
d3c2ae1c | 1065 | arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE); |
34dc7c2f BB |
1066 | |
1067 | /* collect some hash table performance data */ | |
c4c162c1 | 1068 | atomic_dec_64(&arc_stats.arcstat_hash_elements.value.ui64); |
34dc7c2f BB |
1069 | |
1070 | if (buf_hash_table.ht_table[idx] && | |
1071 | buf_hash_table.ht_table[idx]->b_hash_next == NULL) | |
1072 | ARCSTAT_BUMPDOWN(arcstat_hash_chains); | |
1073 | } | |
1074 | ||
1075 | /* | |
1076 | * Global data structures and functions for the buf kmem cache. | |
1077 | */ | |
b5256303 | 1078 | |
b9541d6b | 1079 | static kmem_cache_t *hdr_full_cache; |
b5256303 | 1080 | static kmem_cache_t *hdr_full_crypt_cache; |
b9541d6b | 1081 | static kmem_cache_t *hdr_l2only_cache; |
34dc7c2f BB |
1082 | static kmem_cache_t *buf_cache; |
1083 | ||
1084 | static void | |
1085 | buf_fini(void) | |
1086 | { | |
1087 | int i; | |
1088 | ||
93ce2b4c | 1089 | #if defined(_KERNEL) |
d1d7e268 MK |
1090 | /* |
1091 | * Large allocations which do not require contiguous pages | |
1092 | * should be using vmem_free() in the linux kernel\ | |
1093 | */ | |
00b46022 BB |
1094 | vmem_free(buf_hash_table.ht_table, |
1095 | (buf_hash_table.ht_mask + 1) * sizeof (void *)); | |
1096 | #else | |
34dc7c2f BB |
1097 | kmem_free(buf_hash_table.ht_table, |
1098 | (buf_hash_table.ht_mask + 1) * sizeof (void *)); | |
00b46022 | 1099 | #endif |
34dc7c2f | 1100 | for (i = 0; i < BUF_LOCKS; i++) |
490c845e | 1101 | mutex_destroy(BUF_HASH_LOCK(i)); |
b9541d6b | 1102 | kmem_cache_destroy(hdr_full_cache); |
b5256303 | 1103 | kmem_cache_destroy(hdr_full_crypt_cache); |
b9541d6b | 1104 | kmem_cache_destroy(hdr_l2only_cache); |
34dc7c2f BB |
1105 | kmem_cache_destroy(buf_cache); |
1106 | } | |
1107 | ||
1108 | /* | |
1109 | * Constructor callback - called when the cache is empty | |
1110 | * and a new buf is requested. | |
1111 | */ | |
1112 | /* ARGSUSED */ | |
1113 | static int | |
b9541d6b CW |
1114 | hdr_full_cons(void *vbuf, void *unused, int kmflag) |
1115 | { | |
1116 | arc_buf_hdr_t *hdr = vbuf; | |
1117 | ||
1118 | bzero(hdr, HDR_FULL_SIZE); | |
ae76f45c | 1119 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; |
b9541d6b | 1120 | cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL); |
424fd7c3 | 1121 | zfs_refcount_create(&hdr->b_l1hdr.b_refcnt); |
b9541d6b CW |
1122 | mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL); |
1123 | list_link_init(&hdr->b_l1hdr.b_arc_node); | |
1124 | list_link_init(&hdr->b_l2hdr.b_l2node); | |
ca0bf58d | 1125 | multilist_link_init(&hdr->b_l1hdr.b_arc_node); |
b9541d6b CW |
1126 | arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS); |
1127 | ||
1128 | return (0); | |
1129 | } | |
1130 | ||
b5256303 TC |
1131 | /* ARGSUSED */ |
1132 | static int | |
1133 | hdr_full_crypt_cons(void *vbuf, void *unused, int kmflag) | |
1134 | { | |
1135 | arc_buf_hdr_t *hdr = vbuf; | |
1136 | ||
1137 | hdr_full_cons(vbuf, unused, kmflag); | |
1138 | bzero(&hdr->b_crypt_hdr, sizeof (hdr->b_crypt_hdr)); | |
1139 | arc_space_consume(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); | |
1140 | ||
1141 | return (0); | |
1142 | } | |
1143 | ||
b9541d6b CW |
1144 | /* ARGSUSED */ |
1145 | static int | |
1146 | hdr_l2only_cons(void *vbuf, void *unused, int kmflag) | |
34dc7c2f | 1147 | { |
2a432414 GW |
1148 | arc_buf_hdr_t *hdr = vbuf; |
1149 | ||
b9541d6b CW |
1150 | bzero(hdr, HDR_L2ONLY_SIZE); |
1151 | arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); | |
34dc7c2f | 1152 | |
34dc7c2f BB |
1153 | return (0); |
1154 | } | |
1155 | ||
b128c09f BB |
1156 | /* ARGSUSED */ |
1157 | static int | |
1158 | buf_cons(void *vbuf, void *unused, int kmflag) | |
1159 | { | |
1160 | arc_buf_t *buf = vbuf; | |
1161 | ||
1162 | bzero(buf, sizeof (arc_buf_t)); | |
428870ff | 1163 | mutex_init(&buf->b_evict_lock, NULL, MUTEX_DEFAULT, NULL); |
d164b209 BB |
1164 | arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS); |
1165 | ||
b128c09f BB |
1166 | return (0); |
1167 | } | |
1168 | ||
34dc7c2f BB |
1169 | /* |
1170 | * Destructor callback - called when a cached buf is | |
1171 | * no longer required. | |
1172 | */ | |
1173 | /* ARGSUSED */ | |
1174 | static void | |
b9541d6b | 1175 | hdr_full_dest(void *vbuf, void *unused) |
34dc7c2f | 1176 | { |
2a432414 | 1177 | arc_buf_hdr_t *hdr = vbuf; |
34dc7c2f | 1178 | |
d3c2ae1c | 1179 | ASSERT(HDR_EMPTY(hdr)); |
b9541d6b | 1180 | cv_destroy(&hdr->b_l1hdr.b_cv); |
424fd7c3 | 1181 | zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt); |
b9541d6b | 1182 | mutex_destroy(&hdr->b_l1hdr.b_freeze_lock); |
ca0bf58d | 1183 | ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); |
b9541d6b CW |
1184 | arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS); |
1185 | } | |
1186 | ||
b5256303 TC |
1187 | /* ARGSUSED */ |
1188 | static void | |
1189 | hdr_full_crypt_dest(void *vbuf, void *unused) | |
1190 | { | |
1191 | arc_buf_hdr_t *hdr = vbuf; | |
1192 | ||
1193 | hdr_full_dest(vbuf, unused); | |
1194 | arc_space_return(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); | |
1195 | } | |
1196 | ||
b9541d6b CW |
1197 | /* ARGSUSED */ |
1198 | static void | |
1199 | hdr_l2only_dest(void *vbuf, void *unused) | |
1200 | { | |
2a8ba608 | 1201 | arc_buf_hdr_t *hdr __maybe_unused = vbuf; |
b9541d6b | 1202 | |
d3c2ae1c | 1203 | ASSERT(HDR_EMPTY(hdr)); |
b9541d6b | 1204 | arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); |
34dc7c2f BB |
1205 | } |
1206 | ||
b128c09f BB |
1207 | /* ARGSUSED */ |
1208 | static void | |
1209 | buf_dest(void *vbuf, void *unused) | |
1210 | { | |
1211 | arc_buf_t *buf = vbuf; | |
1212 | ||
428870ff | 1213 | mutex_destroy(&buf->b_evict_lock); |
d164b209 | 1214 | arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS); |
b128c09f BB |
1215 | } |
1216 | ||
34dc7c2f BB |
1217 | static void |
1218 | buf_init(void) | |
1219 | { | |
2db28197 | 1220 | uint64_t *ct = NULL; |
34dc7c2f BB |
1221 | uint64_t hsize = 1ULL << 12; |
1222 | int i, j; | |
1223 | ||
1224 | /* | |
1225 | * The hash table is big enough to fill all of physical memory | |
49ddb315 MA |
1226 | * with an average block size of zfs_arc_average_blocksize (default 8K). |
1227 | * By default, the table will take up | |
1228 | * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers). | |
34dc7c2f | 1229 | */ |
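	/*
	 * A worked example of the sizing above (illustrative, assuming
	 * 8-byte pointers and the default 8K zfs_arc_average_blocksize):
	 * on a 16 GiB machine the loop below doubles hsize from 2^12
	 * until hsize * 8K reaches 16 GiB, giving hsize = 2^21 buckets.
	 * The table then occupies 2^21 * 8 bytes = 16 MiB, matching the
	 * "1MB per GB" estimate above.
	 */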
9edb3695 | 1230 | while (hsize * zfs_arc_average_blocksize < arc_all_memory()) |
34dc7c2f BB |
1231 | hsize <<= 1; |
1232 | retry: | |
1233 | buf_hash_table.ht_mask = hsize - 1; | |
93ce2b4c | 1234 | #if defined(_KERNEL) |
d1d7e268 MK |
1235 | /* |
1236 | * Large allocations which do not require contiguous pages | |
1237 |	 * should use vmem_alloc() in the Linux kernel.
1238 | */ | |
00b46022 BB |
1239 | buf_hash_table.ht_table = |
1240 | vmem_zalloc(hsize * sizeof (void*), KM_SLEEP); | |
1241 | #else | |
34dc7c2f BB |
1242 | buf_hash_table.ht_table = |
1243 | kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP); | |
00b46022 | 1244 | #endif |
34dc7c2f BB |
1245 | if (buf_hash_table.ht_table == NULL) { |
1246 | ASSERT(hsize > (1ULL << 8)); | |
1247 | hsize >>= 1; | |
1248 | goto retry; | |
1249 | } | |
1250 | ||
b9541d6b | 1251 | hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE, |
026e529c | 1252 | 0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, 0); |
b5256303 TC |
1253 | hdr_full_crypt_cache = kmem_cache_create("arc_buf_hdr_t_full_crypt", |
1254 | HDR_FULL_CRYPT_SIZE, 0, hdr_full_crypt_cons, hdr_full_crypt_dest, | |
026e529c | 1255 | NULL, NULL, NULL, 0); |
b9541d6b | 1256 | hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only", |
026e529c | 1257 | HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL, |
b9541d6b | 1258 | NULL, NULL, 0); |
34dc7c2f | 1259 | buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t), |
b128c09f | 1260 | 0, buf_cons, buf_dest, NULL, NULL, NULL, 0); |
34dc7c2f BB |
1261 | |
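	/*
	 * Build the shared 256-entry reflected CRC-64 lookup table for
	 * ZFS_CRC64_POLY (zfs_crc64_table is consumed elsewhere in ZFS,
	 * e.g. by ZAP hashing): each byte value is shifted right eight
	 * times, folding in the polynomial whenever the low bit is set;
	 * "-(*ct & 1)" evaluates to 0 or all-ones, so it acts as a
	 * branchless conditional mask.
	 */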
1262 | for (i = 0; i < 256; i++) | |
1263 | for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--) | |
1264 | *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY); | |
1265 | ||
490c845e AM |
1266 | for (i = 0; i < BUF_LOCKS; i++) |
1267 | mutex_init(BUF_HASH_LOCK(i), NULL, MUTEX_DEFAULT, NULL); | |
34dc7c2f BB |
1268 | } |
1269 | ||
d3c2ae1c | 1270 | #define ARC_MINTIME (hz>>4) /* 62 ms */ |
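/*
 * Note: hz >> 4 is one sixteenth of a second's worth of clock ticks, so
 * the delay is ~62.5 ms regardless of the tick rate (modulo integer
 * truncation, e.g. 62 ticks at hz = 1000).
 */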
ca0bf58d | 1271 | |
2aa34383 DK |
1272 | /* |
1273 | * This is the size that the buf occupies in memory. If the buf is compressed, | |
1274 | * it will correspond to the compressed size. You should use this method of | |
1275 | * getting the buf size unless you explicitly need the logical size. | |
1276 | */ | |
1277 | uint64_t | |
1278 | arc_buf_size(arc_buf_t *buf) | |
1279 | { | |
1280 | return (ARC_BUF_COMPRESSED(buf) ? | |
1281 | HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr)); | |
1282 | } | |
1283 | ||
1284 | uint64_t | |
1285 | arc_buf_lsize(arc_buf_t *buf) | |
1286 | { | |
1287 | return (HDR_GET_LSIZE(buf->b_hdr)); | |
1288 | } | |
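/*
 * Example with hypothetical sizes: for a 128K logical block cached
 * compressed at 16K, arc_buf_size() returns 16K (the psize) while
 * arc_buf_lsize() always returns the 128K logical size.
 */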
1289 | ||
b5256303 TC |
1290 | /* |
1291 | * This function will return B_TRUE if the buffer is encrypted in memory. | |
1292 | * This buffer can be decrypted by calling arc_untransform(). | |
1293 | */ | |
1294 | boolean_t | |
1295 | arc_is_encrypted(arc_buf_t *buf) | |
1296 | { | |
1297 | return (ARC_BUF_ENCRYPTED(buf) != 0); | |
1298 | } | |
1299 | ||
1300 | /* | |
1301 | * Returns B_TRUE if the buffer represents data that has not had its MAC | |
1302 | * verified yet. | |
1303 | */ | |
1304 | boolean_t | |
1305 | arc_is_unauthenticated(arc_buf_t *buf) | |
1306 | { | |
1307 | return (HDR_NOAUTH(buf->b_hdr) != 0); | |
1308 | } | |
1309 | ||
1310 | void | |
1311 | arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt, | |
1312 | uint8_t *iv, uint8_t *mac) | |
1313 | { | |
1314 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
1315 | ||
1316 | ASSERT(HDR_PROTECTED(hdr)); | |
1317 | ||
1318 | bcopy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); | |
1319 | bcopy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); | |
1320 | bcopy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); | |
1321 | *byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? | |
1322 | ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; | |
1323 | } | |
1324 | ||
1325 | /* | |
1326 | * Indicates how this buffer is compressed in memory. If it is not compressed | |
1327 | * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with | |
1328 | * arc_untransform() as long as it is also unencrypted. | |
1329 | */ | |
2aa34383 DK |
1330 | enum zio_compress |
1331 | arc_get_compression(arc_buf_t *buf) | |
1332 | { | |
1333 | return (ARC_BUF_COMPRESSED(buf) ? | |
1334 | HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF); | |
1335 | } | |
1336 | ||
b5256303 TC |
1337 | /* |
1338 | * Return the compression algorithm used to store this data in the ARC. If ARC | |
1339 | * compression is enabled or this is an encrypted block, this will be the same | |
1340 | * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF. | |
1341 | */ | |
1342 | static inline enum zio_compress | |
1343 | arc_hdr_get_compress(arc_buf_hdr_t *hdr) | |
1344 | { | |
1345 | return (HDR_COMPRESSION_ENABLED(hdr) ? | |
1346 | HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF); | |
1347 | } | |
1348 | ||
10b3c7f5 MN |
1349 | uint8_t |
1350 | arc_get_complevel(arc_buf_t *buf) | |
1351 | { | |
1352 | return (buf->b_hdr->b_complevel); | |
1353 | } | |
1354 | ||
d3c2ae1c GW |
1355 | static inline boolean_t |
1356 | arc_buf_is_shared(arc_buf_t *buf) | |
1357 | { | |
1358 | boolean_t shared = (buf->b_data != NULL && | |
a6255b7f DQ |
1359 | buf->b_hdr->b_l1hdr.b_pabd != NULL && |
1360 | abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) && | |
1361 | buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd)); | |
d3c2ae1c | 1362 | IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr)); |
2aa34383 DK |
1363 | IMPLY(shared, ARC_BUF_SHARED(buf)); |
1364 | IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf)); | |
524b4217 DK |
1365 | |
1366 | /* | |
1367 | * It would be nice to assert arc_can_share() too, but the "hdr isn't | |
1368 | * already being shared" requirement prevents us from doing that. | |
1369 | */ | |
1370 | ||
d3c2ae1c GW |
1371 | return (shared); |
1372 | } | |
ca0bf58d | 1373 | |
a7004725 DK |
1374 | /* |
1375 | * Free the checksum associated with this header. If there is no checksum, this | |
1376 | * is a no-op. | |
1377 | */ | |
d3c2ae1c GW |
1378 | static inline void |
1379 | arc_cksum_free(arc_buf_hdr_t *hdr) | |
1380 | { | |
1381 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
b5256303 | 1382 | |
d3c2ae1c GW |
1383 | mutex_enter(&hdr->b_l1hdr.b_freeze_lock); |
1384 | if (hdr->b_l1hdr.b_freeze_cksum != NULL) { | |
1385 | kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t)); | |
1386 | hdr->b_l1hdr.b_freeze_cksum = NULL; | |
b9541d6b | 1387 | } |
d3c2ae1c | 1388 | mutex_exit(&hdr->b_l1hdr.b_freeze_lock); |
b9541d6b CW |
1389 | } |
1390 | ||
a7004725 DK |
1391 | /* |
1392 | * Return true iff at least one of the bufs on hdr is not compressed. | |
b5256303 | 1393 | * Encrypted buffers count as compressed. |
a7004725 DK |
1394 | */ |
1395 | static boolean_t | |
1396 | arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr) | |
1397 | { | |
ca6c7a94 | 1398 | ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr)); |
149ce888 | 1399 | |
a7004725 DK |
1400 | for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) { |
1401 | if (!ARC_BUF_COMPRESSED(b)) { | |
1402 | return (B_TRUE); | |
1403 | } | |
1404 | } | |
1405 | return (B_FALSE); | |
1406 | } | |
1407 | ||
1408 | ||
524b4217 DK |
1409 | /* |
1410 | * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data | |
1411 | * matches the checksum that is stored in the hdr. If there is no checksum, | |
1412 | * or if the buf is compressed, this is a no-op. | |
1413 | */ | |
34dc7c2f BB |
1414 | static void |
1415 | arc_cksum_verify(arc_buf_t *buf) | |
1416 | { | |
d3c2ae1c | 1417 | arc_buf_hdr_t *hdr = buf->b_hdr; |
34dc7c2f BB |
1418 | zio_cksum_t zc; |
1419 | ||
1420 | if (!(zfs_flags & ZFS_DEBUG_MODIFY)) | |
1421 | return; | |
1422 | ||
149ce888 | 1423 | if (ARC_BUF_COMPRESSED(buf)) |
524b4217 | 1424 | return; |
524b4217 | 1425 | |
d3c2ae1c GW |
1426 | ASSERT(HDR_HAS_L1HDR(hdr)); |
1427 | ||
1428 | mutex_enter(&hdr->b_l1hdr.b_freeze_lock); | |
149ce888 | 1429 | |
d3c2ae1c GW |
1430 | if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) { |
1431 | mutex_exit(&hdr->b_l1hdr.b_freeze_lock); | |
34dc7c2f BB |
1432 | return; |
1433 | } | |
2aa34383 | 1434 | |
3c67d83a | 1435 | fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc); |
d3c2ae1c | 1436 | if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc)) |
34dc7c2f | 1437 | panic("buffer modified while frozen!"); |
d3c2ae1c | 1438 | mutex_exit(&hdr->b_l1hdr.b_freeze_lock); |
34dc7c2f BB |
1439 | } |
1440 | ||
b5256303 TC |
1441 | /* |
1442 | * This function makes the assumption that data stored in the L2ARC | |
1443 | * will be transformed exactly as it is in the main pool. Because of | |
1444 | * this we can verify the checksum against the reading process's bp. | |
1445 | */ | |
d3c2ae1c GW |
1446 | static boolean_t |
1447 | arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio) | |
34dc7c2f | 1448 | { |
d3c2ae1c GW |
1449 | ASSERT(!BP_IS_EMBEDDED(zio->io_bp)); |
1450 | VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr)); | |
34dc7c2f | 1451 | |
d3c2ae1c GW |
1452 | /* |
1453 | * Block pointers always store the checksum for the logical data. | |
1454 | * If the block pointer has the gang bit set, then the checksum | |
1455 | * it represents is for the reconstituted data and not for an | |
1456 | * individual gang member. The zio pipeline, however, must be able to | |
1457 | * determine the checksum of each of the gang constituents so it | |
1458 | * treats the checksum comparison differently than what we need | |
1459 | * for l2arc blocks. This prevents us from using the | |
1460 | * zio_checksum_error() interface directly. Instead we must call the | |
1461 | * zio_checksum_error_impl() so that we can ensure the checksum is | |
1462 | * generated using the correct checksum algorithm and accounts for the | |
1463 | * logical I/O size and not just a gang fragment. | |
1464 | */ | |
b5256303 | 1465 | return (zio_checksum_error_impl(zio->io_spa, zio->io_bp, |
a6255b7f | 1466 | BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size, |
d3c2ae1c | 1467 | zio->io_offset, NULL) == 0); |
34dc7c2f BB |
1468 | } |
1469 | ||
524b4217 DK |
1470 | /* |
1471 | * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a | |
1472 | * checksum and attaches it to the buf's hdr so that we can ensure that the buf | |
1473 | * isn't modified later on. If buf is compressed or there is already a checksum | |
1474 | * on the hdr, this is a no-op (we only checksum uncompressed bufs). | |
1475 | */ | |
34dc7c2f | 1476 | static void |
d3c2ae1c | 1477 | arc_cksum_compute(arc_buf_t *buf) |
34dc7c2f | 1478 | { |
d3c2ae1c GW |
1479 | arc_buf_hdr_t *hdr = buf->b_hdr; |
1480 | ||
1481 | if (!(zfs_flags & ZFS_DEBUG_MODIFY)) | |
34dc7c2f BB |
1482 | return; |
1483 | ||
d3c2ae1c | 1484 | ASSERT(HDR_HAS_L1HDR(hdr)); |
2aa34383 | 1485 | |
b9541d6b | 1486 | mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock); |
149ce888 | 1487 | if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { |
d3c2ae1c | 1488 | mutex_exit(&hdr->b_l1hdr.b_freeze_lock); |
34dc7c2f BB |
1489 | return; |
1490 | } | |
2aa34383 | 1491 | |
b5256303 | 1492 | ASSERT(!ARC_BUF_ENCRYPTED(buf)); |
2aa34383 | 1493 | ASSERT(!ARC_BUF_COMPRESSED(buf)); |
d3c2ae1c GW |
1494 | hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), |
1495 | KM_SLEEP); | |
3c67d83a | 1496 | fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, |
d3c2ae1c GW |
1497 | hdr->b_l1hdr.b_freeze_cksum); |
1498 | mutex_exit(&hdr->b_l1hdr.b_freeze_lock); | |
498877ba MA |
1499 | arc_buf_watch(buf); |
1500 | } | |
1501 | ||
1502 | #ifndef _KERNEL | |
1503 | void | |
1504 | arc_buf_sigsegv(int sig, siginfo_t *si, void *unused) | |
1505 | { | |
02730c33 | 1506 | panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr); |
498877ba MA |
1507 | } |
1508 | #endif | |
1509 | ||
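/*
 * In userspace builds (e.g. libzpool/ztest), when arc_watch is set the
 * ARC write-protects frozen buffers with mprotect(); a stray write to a
 * frozen buffer then faults, and the SIGSEGV handler above reports the
 * offending address. arc_buf_unwatch() restores PROT_READ | PROT_WRITE
 * before legitimate modification, and arc_buf_watch() re-arms the
 * read-only protection.
 */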
1510 | /* ARGSUSED */ | |
1511 | static void | |
1512 | arc_buf_unwatch(arc_buf_t *buf) | |
1513 | { | |
1514 | #ifndef _KERNEL | |
1515 | if (arc_watch) { | |
a7004725 | 1516 | ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), |
498877ba MA |
1517 | PROT_READ | PROT_WRITE)); |
1518 | } | |
1519 | #endif | |
1520 | } | |
1521 | ||
1522 | /* ARGSUSED */ | |
1523 | static void | |
1524 | arc_buf_watch(arc_buf_t *buf) | |
1525 | { | |
1526 | #ifndef _KERNEL | |
1527 | if (arc_watch) | |
2aa34383 | 1528 | ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), |
d3c2ae1c | 1529 | PROT_READ)); |
498877ba | 1530 | #endif |
34dc7c2f BB |
1531 | } |
1532 | ||
b9541d6b CW |
1533 | static arc_buf_contents_t |
1534 | arc_buf_type(arc_buf_hdr_t *hdr) | |
1535 | { | |
d3c2ae1c | 1536 | arc_buf_contents_t type; |
b9541d6b | 1537 | if (HDR_ISTYPE_METADATA(hdr)) { |
d3c2ae1c | 1538 | type = ARC_BUFC_METADATA; |
b9541d6b | 1539 | } else { |
d3c2ae1c | 1540 | type = ARC_BUFC_DATA; |
b9541d6b | 1541 | } |
d3c2ae1c GW |
1542 | VERIFY3U(hdr->b_type, ==, type); |
1543 | return (type); | |
b9541d6b CW |
1544 | } |
1545 | ||
2aa34383 DK |
1546 | boolean_t |
1547 | arc_is_metadata(arc_buf_t *buf) | |
1548 | { | |
1549 | return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); | |
1550 | } | |
1551 | ||
b9541d6b CW |
1552 | static uint32_t |
1553 | arc_bufc_to_flags(arc_buf_contents_t type) | |
1554 | { | |
1555 | switch (type) { | |
1556 | case ARC_BUFC_DATA: | |
1557 | /* metadata field is 0 if buffer contains normal data */ | |
1558 | return (0); | |
1559 | case ARC_BUFC_METADATA: | |
1560 | return (ARC_FLAG_BUFC_METADATA); | |
1561 | default: | |
1562 | break; | |
1563 | } | |
1564 | panic("undefined ARC buffer type!"); | |
1565 | return ((uint32_t)-1); | |
1566 | } | |
1567 | ||
34dc7c2f BB |
1568 | void |
1569 | arc_buf_thaw(arc_buf_t *buf) | |
1570 | { | |
d3c2ae1c GW |
1571 | arc_buf_hdr_t *hdr = buf->b_hdr; |
1572 | ||
2aa34383 DK |
1573 | ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); |
1574 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); | |
1575 | ||
524b4217 | 1576 | arc_cksum_verify(buf); |
34dc7c2f | 1577 | |
2aa34383 | 1578 | /* |
149ce888 | 1579 | * Compressed buffers do not manipulate the b_freeze_cksum. |
2aa34383 | 1580 | */ |
149ce888 | 1581 | if (ARC_BUF_COMPRESSED(buf)) |
2aa34383 | 1582 | return; |
2aa34383 | 1583 | |
d3c2ae1c GW |
1584 | ASSERT(HDR_HAS_L1HDR(hdr)); |
1585 | arc_cksum_free(hdr); | |
498877ba | 1586 | arc_buf_unwatch(buf); |
34dc7c2f BB |
1587 | } |
1588 | ||
1589 | void | |
1590 | arc_buf_freeze(arc_buf_t *buf) | |
1591 | { | |
1592 | if (!(zfs_flags & ZFS_DEBUG_MODIFY)) | |
1593 | return; | |
1594 | ||
149ce888 | 1595 | if (ARC_BUF_COMPRESSED(buf)) |
2aa34383 | 1596 | return; |
428870ff | 1597 | |
149ce888 | 1598 | ASSERT(HDR_HAS_L1HDR(buf->b_hdr)); |
d3c2ae1c | 1599 | arc_cksum_compute(buf); |
34dc7c2f BB |
1600 | } |
1601 | ||
d3c2ae1c GW |
1602 | /* |
1603 | * The arc_buf_hdr_t's b_flags should never be modified directly. Instead, | |
1604 | * the following functions should be used to ensure that the flags are | |
1605 | * updated in a thread-safe way. When manipulating the flags either | |
1606 | * the hash_lock must be held or the hdr must be undiscoverable. This | |
1607 | * ensures that we're not racing with any other threads when updating | |
1608 | * the flags. | |
1609 | */ | |
1610 | static inline void | |
1611 | arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) | |
1612 | { | |
ca6c7a94 | 1613 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
d3c2ae1c GW |
1614 | hdr->b_flags |= flags; |
1615 | } | |
1616 | ||
1617 | static inline void | |
1618 | arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) | |
1619 | { | |
ca6c7a94 | 1620 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
d3c2ae1c GW |
1621 | hdr->b_flags &= ~flags; |
1622 | } | |
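/*
 * A minimal usage sketch (hypothetical caller, for illustration): with
 * HDR_LOCK(hdr) held, several bits may be combined into one thread-safe
 * update, e.g.
 *
 *	arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE | ARC_FLAG_PREFETCH);
 *
 * Writing hdr->b_flags directly would bypass the HDR_EMPTY_OR_LOCKED()
 * assertion and could race with a concurrent update.
 */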
1623 | ||
1624 | /* | |
1625 | * Setting the compression bits in the arc_buf_hdr_t's b_flags is | |
1626 | * done in a special way since we have to clear and set bits | |
1627 | * at the same time. Consumers that wish to set the compression bits | |
1628 | * must use this function to ensure that the flags are updated in | |
1629 |	 * a thread-safe manner.
1630 | */ | |
1631 | static void | |
1632 | arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp) | |
1633 | { | |
ca6c7a94 | 1634 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
d3c2ae1c GW |
1635 | |
1636 | /* | |
1637 | * Holes and embedded blocks will always have a psize = 0 so | |
1638 |	 * we ignore the compression of the blkptr and never try to
d3c2ae1c GW |
1639 |	 * uncompress them. Mark them as uncompressed.
1640 | */ | |
1641 | if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) { | |
1642 | arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC); | |
d3c2ae1c | 1643 | ASSERT(!HDR_COMPRESSION_ENABLED(hdr)); |
d3c2ae1c GW |
1644 | } else { |
1645 | arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC); | |
d3c2ae1c GW |
1646 | ASSERT(HDR_COMPRESSION_ENABLED(hdr)); |
1647 | } | |
b5256303 TC |
1648 | |
1649 | HDR_SET_COMPRESS(hdr, cmp); | |
1650 | ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp); | |
d3c2ae1c GW |
1651 | } |
1652 | ||
524b4217 DK |
1653 | /* |
1654 | * Looks for another buf on the same hdr which has the data decompressed, copies | |
1655 | * from it, and returns true. If no such buf exists, returns false. | |
1656 | */ | |
1657 | static boolean_t | |
1658 | arc_buf_try_copy_decompressed_data(arc_buf_t *buf) | |
1659 | { | |
1660 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
524b4217 DK |
1661 | boolean_t copied = B_FALSE; |
1662 | ||
1663 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
1664 | ASSERT3P(buf->b_data, !=, NULL); | |
1665 | ASSERT(!ARC_BUF_COMPRESSED(buf)); | |
1666 | ||
a7004725 | 1667 | for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL; |
524b4217 DK |
1668 | from = from->b_next) { |
1669 | /* can't use our own data buffer */ | |
1670 | if (from == buf) { | |
1671 | continue; | |
1672 | } | |
1673 | ||
1674 | if (!ARC_BUF_COMPRESSED(from)) { | |
1675 | bcopy(from->b_data, buf->b_data, arc_buf_size(buf)); | |
1676 | copied = B_TRUE; | |
1677 | break; | |
1678 | } | |
1679 | } | |
1680 | ||
1681 | /* | |
1682 | * There were no decompressed bufs, so there should not be a | |
1683 | * checksum on the hdr either. | |
1684 | */ | |
46db9d61 BB |
1685 | if (zfs_flags & ZFS_DEBUG_MODIFY) |
1686 | EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); | |
524b4217 DK |
1687 | |
1688 | return (copied); | |
1689 | } | |
1690 | ||
77f6826b GA |
1691 | /* |
1692 | * Allocates an ARC buf header that's in an evicted & L2-cached state. | |
1693 | * This is used during l2arc reconstruction to make empty ARC buffers | |
1694 | * which circumvent the regular disk->arc->l2arc path and instead come | |
1695 | * into being in the reverse order, i.e. l2arc->arc. | |
1696 | */ | |
65c7cc49 | 1697 | static arc_buf_hdr_t * |
77f6826b GA |
1698 | arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, |
1699 | dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, | |
10b3c7f5 | 1700 | enum zio_compress compress, uint8_t complevel, boolean_t protected, |
08532162 | 1701 | boolean_t prefetch, arc_state_type_t arcs_state) |
77f6826b GA |
1702 | { |
1703 | arc_buf_hdr_t *hdr; | |
1704 | ||
1705 | ASSERT(size != 0); | |
1706 | hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); | |
1707 | hdr->b_birth = birth; | |
1708 | hdr->b_type = type; | |
1709 | hdr->b_flags = 0; | |
1710 | arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); | |
1711 | HDR_SET_LSIZE(hdr, size); | |
1712 | HDR_SET_PSIZE(hdr, psize); | |
1713 | arc_hdr_set_compress(hdr, compress); | |
10b3c7f5 | 1714 | hdr->b_complevel = complevel; |
77f6826b GA |
1715 | if (protected) |
1716 | arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); | |
1717 | if (prefetch) | |
1718 | arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); | |
1719 | hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); | |
1720 | ||
1721 | hdr->b_dva = dva; | |
1722 | ||
1723 | hdr->b_l2hdr.b_dev = dev; | |
1724 | hdr->b_l2hdr.b_daddr = daddr; | |
08532162 | 1725 | hdr->b_l2hdr.b_arcs_state = arcs_state; |
77f6826b GA |
1726 | |
1727 | return (hdr); | |
1728 | } | |
1729 | ||
b5256303 TC |
1730 | /* |
1731 | * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. | |
1732 | */ | |
1733 | static uint64_t | |
1734 | arc_hdr_size(arc_buf_hdr_t *hdr) | |
1735 | { | |
1736 | uint64_t size; | |
1737 | ||
1738 | if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && | |
1739 | HDR_GET_PSIZE(hdr) > 0) { | |
1740 | size = HDR_GET_PSIZE(hdr); | |
1741 | } else { | |
1742 | ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); | |
1743 | size = HDR_GET_LSIZE(hdr); | |
1744 | } | |
1745 | return (size); | |
1746 | } | |
1747 | ||
1748 | static int | |
1749 | arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) | |
1750 | { | |
1751 | int ret; | |
1752 | uint64_t csize; | |
1753 | uint64_t lsize = HDR_GET_LSIZE(hdr); | |
1754 | uint64_t psize = HDR_GET_PSIZE(hdr); | |
1755 | void *tmpbuf = NULL; | |
1756 | abd_t *abd = hdr->b_l1hdr.b_pabd; | |
1757 | ||
ca6c7a94 | 1758 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
b5256303 TC |
1759 | ASSERT(HDR_AUTHENTICATED(hdr)); |
1760 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); | |
1761 | ||
1762 | /* | |
1763 | * The MAC is calculated on the compressed data that is stored on disk. | |
1764 | * However, if compressed arc is disabled we will only have the | |
1765 | * decompressed data available to us now. Compress it into a temporary | |
1766 | * abd so we can verify the MAC. The performance overhead of this will | |
1767 | * be relatively low, since most objects in an encrypted objset will | |
1768 | * be encrypted (instead of authenticated) anyway. | |
1769 | */ | |
1770 | if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && | |
1771 | !HDR_COMPRESSION_ENABLED(hdr)) { | |
1772 | tmpbuf = zio_buf_alloc(lsize); | |
1773 | abd = abd_get_from_buf(tmpbuf, lsize); | |
1774 | abd_take_ownership_of_buf(abd, B_TRUE); | |
b5256303 | 1775 | csize = zio_compress_data(HDR_GET_COMPRESS(hdr), |
10b3c7f5 | 1776 | hdr->b_l1hdr.b_pabd, tmpbuf, lsize, hdr->b_complevel); |
b5256303 TC |
1777 | ASSERT3U(csize, <=, psize); |
1778 | abd_zero_off(abd, csize, psize - csize); | |
1779 | } | |
1780 | ||
1781 | /* | |
1782 | * Authentication is best effort. We authenticate whenever the key is | |
1783 | * available. If we succeed we clear ARC_FLAG_NOAUTH. | |
1784 | */ | |
1785 | if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { | |
1786 | ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); | |
1787 | ASSERT3U(lsize, ==, psize); | |
1788 | ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, | |
1789 | psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); | |
1790 | } else { | |
1791 | ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, | |
1792 | hdr->b_crypt_hdr.b_mac); | |
1793 | } | |
1794 | ||
1795 | if (ret == 0) | |
1796 | arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); | |
1797 | else if (ret != ENOENT) | |
1798 | goto error; | |
1799 | ||
1800 | if (tmpbuf != NULL) | |
1801 | abd_free(abd); | |
1802 | ||
1803 | return (0); | |
1804 | ||
1805 | error: | |
1806 | if (tmpbuf != NULL) | |
1807 | abd_free(abd); | |
1808 | ||
1809 | return (ret); | |
1810 | } | |
1811 | ||
1812 | /* | |
1813 | * This function will take a header that only has raw encrypted data in | |
1814 | * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in | |
1815 | * b_l1hdr.b_pabd. If designated in the header flags, this function will | |
1816 | * also decompress the data. | |
1817 | */ | |
1818 | static int | |
be9a5c35 | 1819 | arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) |
b5256303 TC |
1820 | { |
1821 | int ret; | |
b5256303 TC |
1822 | abd_t *cabd = NULL; |
1823 | void *tmp = NULL; | |
1824 | boolean_t no_crypt = B_FALSE; | |
1825 | boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); | |
1826 | ||
ca6c7a94 | 1827 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
b5256303 TC |
1828 | ASSERT(HDR_ENCRYPTED(hdr)); |
1829 | ||
e111c802 | 1830 | arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); |
b5256303 | 1831 | |
be9a5c35 TC |
1832 | ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, |
1833 | B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, | |
1834 | hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, | |
b5256303 TC |
1835 | hdr->b_crypt_hdr.b_rabd, &no_crypt); |
1836 | if (ret != 0) | |
1837 | goto error; | |
1838 | ||
1839 | if (no_crypt) { | |
1840 | abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, | |
1841 | HDR_GET_PSIZE(hdr)); | |
1842 | } | |
1843 | ||
1844 | /* | |
1845 | * If this header has disabled arc compression but the b_pabd is | |
1846 | * compressed after decrypting it, we need to decompress the newly | |
1847 | * decrypted data. | |
1848 | */ | |
1849 | if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && | |
1850 | !HDR_COMPRESSION_ENABLED(hdr)) { | |
1851 | /* | |
1852 | * We want to make sure that we are correctly honoring the | |
1853 | * zfs_abd_scatter_enabled setting, so we allocate an abd here | |
1854 | * and then loan a buffer from it, rather than allocating a | |
1855 | * linear buffer and wrapping it in an abd later. | |
1856 | */ | |
e111c802 | 1857 | cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, B_TRUE); |
b5256303 TC |
1858 | tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); |
1859 | ||
1860 | ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), | |
1861 | hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), | |
10b3c7f5 | 1862 | HDR_GET_LSIZE(hdr), &hdr->b_complevel); |
b5256303 TC |
1863 | if (ret != 0) { |
1864 | abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); | |
1865 | goto error; | |
1866 | } | |
1867 | ||
1868 | abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); | |
1869 | arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, | |
1870 | arc_hdr_size(hdr), hdr); | |
1871 | hdr->b_l1hdr.b_pabd = cabd; | |
1872 | } | |
1873 | ||
b5256303 TC |
1874 | return (0); |
1875 | ||
1876 | error: | |
1877 | arc_hdr_free_abd(hdr, B_FALSE); | |
b5256303 TC |
1878 | if (cabd != NULL) |
1879 | arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); | |
1880 | ||
1881 | return (ret); | |
1882 | } | |
1883 | ||
1884 | /* | |
1885 | * This function is called during arc_buf_fill() to prepare the header's | |
1886 |	 * abd plaintext pointer for use. This involves authenticating protected
1887 | * data and decrypting encrypted data into the plaintext abd. | |
1888 | */ | |
1889 | static int | |
1890 | arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, | |
be9a5c35 | 1891 | const zbookmark_phys_t *zb, boolean_t noauth) |
b5256303 TC |
1892 | { |
1893 | int ret; | |
1894 | ||
1895 | ASSERT(HDR_PROTECTED(hdr)); | |
1896 | ||
1897 | if (hash_lock != NULL) | |
1898 | mutex_enter(hash_lock); | |
1899 | ||
1900 | if (HDR_NOAUTH(hdr) && !noauth) { | |
1901 | /* | |
1902 | * The caller requested authenticated data but our data has | |
1903 | * not been authenticated yet. Verify the MAC now if we can. | |
1904 | */ | |
be9a5c35 | 1905 | ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); |
b5256303 TC |
1906 | if (ret != 0) |
1907 | goto error; | |
1908 | } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { | |
1909 | /* | |
1910 | * If we only have the encrypted version of the data, but the | |
1911 | * unencrypted version was requested we take this opportunity | |
1912 | * to store the decrypted version in the header for future use. | |
1913 | */ | |
be9a5c35 | 1914 | ret = arc_hdr_decrypt(hdr, spa, zb); |
b5256303 TC |
1915 | if (ret != 0) |
1916 | goto error; | |
1917 | } | |
1918 | ||
1919 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); | |
1920 | ||
1921 | if (hash_lock != NULL) | |
1922 | mutex_exit(hash_lock); | |
1923 | ||
1924 | return (0); | |
1925 | ||
1926 | error: | |
1927 | if (hash_lock != NULL) | |
1928 | mutex_exit(hash_lock); | |
1929 | ||
1930 | return (ret); | |
1931 | } | |
1932 | ||
1933 | /* | |
1934 | * This function is used by the dbuf code to decrypt bonus buffers in place. | |
1935 | * The dbuf code itself doesn't have any locking for decrypting a shared dnode | |
1936 | * block, so we use the hash lock here to protect against concurrent calls to | |
1937 | * arc_buf_fill(). | |
1938 | */ | |
1939 | static void | |
1940 | arc_buf_untransform_in_place(arc_buf_t *buf, kmutex_t *hash_lock) | |
1941 | { | |
1942 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
1943 | ||
1944 | ASSERT(HDR_ENCRYPTED(hdr)); | |
1945 | ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); | |
ca6c7a94 | 1946 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
b5256303 TC |
1947 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); |
1948 | ||
1949 | zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, | |
1950 | arc_buf_size(buf)); | |
1951 | buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; | |
1952 | buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; | |
1953 | hdr->b_crypt_hdr.b_ebufcnt -= 1; | |
1954 | } | |
1955 | ||
524b4217 DK |
1956 | /* |
1957 | * Given a buf that has a data buffer attached to it, this function will | |
1958 | * efficiently fill the buf with data of the specified compression setting from | |
1959 | * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr | |
1960 | * are already sharing a data buf, no copy is performed. | |
1961 | * | |
1962 | * If the buf is marked as compressed but uncompressed data was requested, this | |
1963 | * will allocate a new data buffer for the buf, remove that flag, and fill the | |
1964 | * buf with uncompressed data. You can't request a compressed buf on a hdr with | |
1965 | * uncompressed data, and (since we haven't added support for it yet) if you | |
1966 | * want compressed data your buf must already be marked as compressed and have | |
1967 | * the correct-sized data buffer. | |
1968 | */ | |
1969 | static int | |
be9a5c35 TC |
1970 | arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, |
1971 | arc_fill_flags_t flags) | |
d3c2ae1c | 1972 | { |
b5256303 | 1973 | int error = 0; |
d3c2ae1c | 1974 | arc_buf_hdr_t *hdr = buf->b_hdr; |
b5256303 TC |
1975 | boolean_t hdr_compressed = |
1976 | (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); | |
1977 | boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; | |
1978 | boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; | |
d3c2ae1c | 1979 | dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; |
b5256303 | 1980 | kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? NULL : HDR_LOCK(hdr); |
d3c2ae1c | 1981 | |
524b4217 | 1982 | ASSERT3P(buf->b_data, !=, NULL); |
b5256303 | 1983 | IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); |
524b4217 | 1984 | IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); |
b5256303 TC |
1985 | IMPLY(encrypted, HDR_ENCRYPTED(hdr)); |
1986 | IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); | |
1987 | IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); | |
1988 | IMPLY(encrypted, !ARC_BUF_SHARED(buf)); | |
1989 | ||
1990 | /* | |
1991 | * If the caller wanted encrypted data we just need to copy it from | |
1992 | * b_rabd and potentially byteswap it. We won't be able to do any | |
1993 | * further transforms on it. | |
1994 | */ | |
1995 | if (encrypted) { | |
1996 | ASSERT(HDR_HAS_RABD(hdr)); | |
1997 | abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, | |
1998 | HDR_GET_PSIZE(hdr)); | |
1999 | goto byteswap; | |
2000 | } | |
2001 | ||
2002 | /* | |
e1cfd73f | 2003 | * Adjust encrypted and authenticated headers to accommodate |
69830602 TC |
2004 | * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are |
2005 | * allowed to fail decryption due to keys not being loaded | |
2006 | * without being marked as an IO error. | |
b5256303 TC |
2007 | */ |
2008 | if (HDR_PROTECTED(hdr)) { | |
2009 | error = arc_fill_hdr_crypt(hdr, hash_lock, spa, | |
be9a5c35 | 2010 | zb, !!(flags & ARC_FILL_NOAUTH)); |
69830602 TC |
2011 | if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { |
2012 | return (error); | |
2013 | } else if (error != 0) { | |
e7504d7a TC |
2014 | if (hash_lock != NULL) |
2015 | mutex_enter(hash_lock); | |
2c24b5b1 | 2016 | arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); |
e7504d7a TC |
2017 | if (hash_lock != NULL) |
2018 | mutex_exit(hash_lock); | |
b5256303 | 2019 | return (error); |
2c24b5b1 | 2020 | } |
b5256303 TC |
2021 | } |
2022 | ||
2023 | /* | |
2024 | * There is a special case here for dnode blocks which are | |
2025 | * decrypting their bonus buffers. These blocks may request to | |
2026 | * be decrypted in-place. This is necessary because there may | |
2027 | * be many dnodes pointing into this buffer and there is | |
2028 | * currently no method to synchronize replacing the backing | |
2029 | * b_data buffer and updating all of the pointers. Here we use | |
2030 | * the hash lock to ensure there are no races. If the need | |
2031 | * arises for other types to be decrypted in-place, they must | |
2032 | * add handling here as well. | |
2033 | */ | |
2034 | if ((flags & ARC_FILL_IN_PLACE) != 0) { | |
2035 | ASSERT(!hdr_compressed); | |
2036 | ASSERT(!compressed); | |
2037 | ASSERT(!encrypted); | |
2038 | ||
2039 | if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { | |
2040 | ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); | |
2041 | ||
2042 | if (hash_lock != NULL) | |
2043 | mutex_enter(hash_lock); | |
2044 | arc_buf_untransform_in_place(buf, hash_lock); | |
2045 | if (hash_lock != NULL) | |
2046 | mutex_exit(hash_lock); | |
2047 | ||
2048 | /* Compute the hdr's checksum if necessary */ | |
2049 | arc_cksum_compute(buf); | |
2050 | } | |
2051 | ||
2052 | return (0); | |
2053 | } | |
524b4217 DK |
2054 | |
2055 | if (hdr_compressed == compressed) { | |
2aa34383 | 2056 | if (!arc_buf_is_shared(buf)) { |
a6255b7f | 2057 | abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, |
524b4217 | 2058 | arc_buf_size(buf)); |
2aa34383 | 2059 | } |
d3c2ae1c | 2060 | } else { |
524b4217 DK |
2061 | ASSERT(hdr_compressed); |
2062 | ASSERT(!compressed); | |
d3c2ae1c | 2063 | ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr)); |
2aa34383 DK |
2064 | |
2065 | /* | |
524b4217 DK |
2066 | * If the buf is sharing its data with the hdr, unlink it and |
2067 | * allocate a new data buffer for the buf. | |
2aa34383 | 2068 | */ |
524b4217 DK |
2069 | if (arc_buf_is_shared(buf)) { |
2070 | ASSERT(ARC_BUF_COMPRESSED(buf)); | |
2071 | ||
e1cfd73f | 2072 | /* We need to give the buf its own b_data */ |
524b4217 | 2073 | buf->b_flags &= ~ARC_BUF_FLAG_SHARED; |
2aa34383 DK |
2074 | buf->b_data = |
2075 | arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); | |
2076 | arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); | |
2077 | ||
524b4217 | 2078 | /* Previously overhead was 0; just add new overhead */ |
2aa34383 | 2079 | ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); |
524b4217 DK |
2080 | } else if (ARC_BUF_COMPRESSED(buf)) { |
2081 | /* We need to reallocate the buf's b_data */ | |
2082 | arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), | |
2083 | buf); | |
2084 | buf->b_data = | |
2085 | arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); | |
2086 | ||
2087 | /* We increased the size of b_data; update overhead */ | |
2088 | ARCSTAT_INCR(arcstat_overhead_size, | |
2089 | HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); | |
2aa34383 DK |
2090 | } |
2091 | ||
524b4217 DK |
2092 | /* |
2093 | * Regardless of the buf's previous compression settings, it | |
2094 | * should not be compressed at the end of this function. | |
2095 | */ | |
2096 | buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; | |
2097 | ||
2098 | /* | |
2099 | * Try copying the data from another buf which already has a | |
2100 | * decompressed version. If that's not possible, it's time to | |
2101 | * bite the bullet and decompress the data from the hdr. | |
2102 | */ | |
2103 | if (arc_buf_try_copy_decompressed_data(buf)) { | |
2104 | /* Skip byteswapping and checksumming (already done) */ | |
524b4217 DK |
2105 | return (0); |
2106 | } else { | |
b5256303 | 2107 | error = zio_decompress_data(HDR_GET_COMPRESS(hdr), |
a6255b7f | 2108 | hdr->b_l1hdr.b_pabd, buf->b_data, |
10b3c7f5 MN |
2109 | HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr), |
2110 | &hdr->b_complevel); | |
524b4217 DK |
2111 | |
2112 | /* | |
2113 | * Absent hardware errors or software bugs, this should | |
2114 | * be impossible, but log it anyway so we can debug it. | |
2115 | */ | |
2116 | if (error != 0) { | |
2117 | zfs_dbgmsg( | |
a887d653 | 2118 | "hdr %px, compress %d, psize %d, lsize %d", |
b5256303 | 2119 | hdr, arc_hdr_get_compress(hdr), |
524b4217 | 2120 | HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); |
e7504d7a TC |
2121 | if (hash_lock != NULL) |
2122 | mutex_enter(hash_lock); | |
2c24b5b1 | 2123 | arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); |
e7504d7a TC |
2124 | if (hash_lock != NULL) |
2125 | mutex_exit(hash_lock); | |
524b4217 DK |
2126 | return (SET_ERROR(EIO)); |
2127 | } | |
d3c2ae1c GW |
2128 | } |
2129 | } | |
524b4217 | 2130 | |
b5256303 | 2131 | byteswap: |
524b4217 | 2132 | /* Byteswap the buf's data if necessary */ |
d3c2ae1c GW |
2133 | if (bswap != DMU_BSWAP_NUMFUNCS) { |
2134 | ASSERT(!HDR_SHARED_DATA(hdr)); | |
2135 | ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); | |
2136 | dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); | |
2137 | } | |
524b4217 DK |
2138 | |
2139 | /* Compute the hdr's checksum if necessary */ | |
d3c2ae1c | 2140 | arc_cksum_compute(buf); |
524b4217 | 2141 | |
d3c2ae1c GW |
2142 | return (0); |
2143 | } | |
2144 | ||
2145 | /* | |
b5256303 TC |
2146 | * If this function is being called to decrypt an encrypted buffer or verify an |
2147 | * authenticated one, the key must be loaded and a mapping must be made | |
2148 | * available in the keystore via spa_keystore_create_mapping() or one of its | |
2149 | * callers. | |
d3c2ae1c | 2150 | */ |
b5256303 | 2151 | int |
a2c2ed1b TC |
2152 | arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, |
2153 | boolean_t in_place) | |
d3c2ae1c | 2154 | { |
a2c2ed1b | 2155 | int ret; |
b5256303 | 2156 | arc_fill_flags_t flags = 0; |
d3c2ae1c | 2157 | |
b5256303 TC |
2158 | if (in_place) |
2159 | flags |= ARC_FILL_IN_PLACE; | |
2160 | ||
be9a5c35 | 2161 | ret = arc_buf_fill(buf, spa, zb, flags); |
a2c2ed1b TC |
2162 | if (ret == ECKSUM) { |
2163 | /* | |
2164 | * Convert authentication and decryption errors to EIO | |
2165 | * (and generate an ereport) before leaving the ARC. | |
2166 | */ | |
2167 | ret = SET_ERROR(EIO); | |
be9a5c35 | 2168 | spa_log_error(spa, zb); |
1144586b | 2169 | (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, |
4f072827 | 2170 | spa, NULL, zb, NULL, 0); |
a2c2ed1b TC |
2171 | } |
2172 | ||
2173 | return (ret); | |
d3c2ae1c GW |
2174 | } |
2175 | ||
2176 | /* | |
2177 | * Increment the amount of evictable space in the arc_state_t's refcount. | |
2178 | * We account for the space used by the hdr and the arc buf individually | |
2179 | * so that we can add and remove them from the refcount individually. | |
2180 | */ | |
34dc7c2f | 2181 | static void |
d3c2ae1c GW |
2182 | arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) |
2183 | { | |
2184 | arc_buf_contents_t type = arc_buf_type(hdr); | |
d3c2ae1c GW |
2185 | |
2186 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
2187 | ||
2188 | if (GHOST_STATE(state)) { | |
2189 | ASSERT0(hdr->b_l1hdr.b_bufcnt); | |
2190 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); | |
a6255b7f | 2191 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 2192 | ASSERT(!HDR_HAS_RABD(hdr)); |
424fd7c3 | 2193 | (void) zfs_refcount_add_many(&state->arcs_esize[type], |
2aa34383 | 2194 | HDR_GET_LSIZE(hdr), hdr); |
d3c2ae1c GW |
2195 | return; |
2196 | } | |
2197 | ||
a6255b7f | 2198 | if (hdr->b_l1hdr.b_pabd != NULL) { |
424fd7c3 | 2199 | (void) zfs_refcount_add_many(&state->arcs_esize[type], |
d3c2ae1c GW |
2200 | arc_hdr_size(hdr), hdr); |
2201 | } | |
b5256303 | 2202 | if (HDR_HAS_RABD(hdr)) { |
424fd7c3 | 2203 | (void) zfs_refcount_add_many(&state->arcs_esize[type], |
b5256303 TC |
2204 | HDR_GET_PSIZE(hdr), hdr); |
2205 | } | |
2206 | ||
1c27024e DB |
2207 | for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; |
2208 | buf = buf->b_next) { | |
2aa34383 | 2209 | if (arc_buf_is_shared(buf)) |
d3c2ae1c | 2210 | continue; |
424fd7c3 | 2211 | (void) zfs_refcount_add_many(&state->arcs_esize[type], |
2aa34383 | 2212 | arc_buf_size(buf), buf); |
d3c2ae1c GW |
2213 | } |
2214 | } | |
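/*
 * To summarize the accounting above: ghost headers contribute their
 * lsize (they hold no data), while resident headers contribute the
 * b_pabd size, any raw encrypted abd (HDR_GET_PSIZE), and the size of
 * each buf that is not sharing data with the hdr.
 */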
2215 | ||
2216 | /* | |
2217 | * Decrement the amount of evictable space in the arc_state_t's refcount. | |
2218 | * We account for the space used by the hdr and the arc buf individually | |
2219 | * so that we can add and remove them from the refcount individually. | |
2220 | */ | |
2221 | static void | |
2aa34383 | 2222 | arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) |
d3c2ae1c GW |
2223 | { |
2224 | arc_buf_contents_t type = arc_buf_type(hdr); | |
d3c2ae1c GW |
2225 | |
2226 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
2227 | ||
2228 | if (GHOST_STATE(state)) { | |
2229 | ASSERT0(hdr->b_l1hdr.b_bufcnt); | |
2230 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); | |
a6255b7f | 2231 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 2232 | ASSERT(!HDR_HAS_RABD(hdr)); |
424fd7c3 | 2233 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
2aa34383 | 2234 | HDR_GET_LSIZE(hdr), hdr); |
d3c2ae1c GW |
2235 | return; |
2236 | } | |
2237 | ||
a6255b7f | 2238 | if (hdr->b_l1hdr.b_pabd != NULL) { |
424fd7c3 | 2239 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
d3c2ae1c GW |
2240 | arc_hdr_size(hdr), hdr); |
2241 | } | |
b5256303 | 2242 | if (HDR_HAS_RABD(hdr)) { |
424fd7c3 | 2243 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
b5256303 TC |
2244 | HDR_GET_PSIZE(hdr), hdr); |
2245 | } | |
2246 | ||
1c27024e DB |
2247 | for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; |
2248 | buf = buf->b_next) { | |
2aa34383 | 2249 | if (arc_buf_is_shared(buf)) |
d3c2ae1c | 2250 | continue; |
424fd7c3 | 2251 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
2aa34383 | 2252 | arc_buf_size(buf), buf); |
d3c2ae1c GW |
2253 | } |
2254 | } | |
2255 | ||
2256 | /* | |
2257 | * Add a reference to this hdr indicating that someone is actively | |
2258 | * referencing that memory. When the refcount transitions from 0 to 1, | |
2259 | * we remove it from the respective arc_state_t list to indicate that | |
2260 | * it is not evictable. | |
2261 | */ | |
2262 | static void | |
2263 | add_reference(arc_buf_hdr_t *hdr, void *tag) | |
34dc7c2f | 2264 | { |
b9541d6b CW |
2265 | arc_state_t *state; |
2266 | ||
2267 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
ca6c7a94 | 2268 | if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { |
d3c2ae1c | 2269 | ASSERT(hdr->b_l1hdr.b_state == arc_anon); |
424fd7c3 | 2270 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
d3c2ae1c GW |
2271 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); |
2272 | } | |
34dc7c2f | 2273 | |
b9541d6b CW |
2274 | state = hdr->b_l1hdr.b_state; |
2275 | ||
c13060e4 | 2276 | if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && |
b9541d6b CW |
2277 | (state != arc_anon)) { |
2278 | /* We don't use the L2-only state list. */ | |
2279 | if (state != arc_l2c_only) { | |
ffdf019c | 2280 | multilist_remove(&state->arcs_list[arc_buf_type(hdr)], |
d3c2ae1c | 2281 | hdr); |
2aa34383 | 2282 | arc_evictable_space_decrement(hdr, state); |
34dc7c2f | 2283 | } |
b128c09f | 2284 | /* remove the prefetch flag if we get a reference */ |
08532162 GA |
2285 | if (HDR_HAS_L2HDR(hdr)) |
2286 | l2arc_hdr_arcstats_decrement_state(hdr); | |
d3c2ae1c | 2287 | arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH); |
08532162 GA |
2288 | if (HDR_HAS_L2HDR(hdr)) |
2289 | l2arc_hdr_arcstats_increment_state(hdr); | |
34dc7c2f BB |
2290 | } |
2291 | } | |
2292 | ||
d3c2ae1c GW |
2293 | /* |
2294 | * Remove a reference from this hdr. When the reference transitions from | |
2295 | * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's | |
2296 | * list making it eligible for eviction. | |
2297 | */ | |
34dc7c2f | 2298 | static int |
2a432414 | 2299 | remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag) |
34dc7c2f BB |
2300 | { |
2301 | int cnt; | |
b9541d6b | 2302 | arc_state_t *state = hdr->b_l1hdr.b_state; |
34dc7c2f | 2303 | |
b9541d6b | 2304 | ASSERT(HDR_HAS_L1HDR(hdr)); |
34dc7c2f BB |
2305 | ASSERT(state == arc_anon || MUTEX_HELD(hash_lock)); |
2306 | ASSERT(!GHOST_STATE(state)); | |
2307 | ||
b9541d6b CW |
2308 | /* |
2309 | * arc_l2c_only counts as a ghost state so we don't need to explicitly | |
2310 | * check to prevent usage of the arc_l2c_only list. | |
2311 | */ | |
424fd7c3 | 2312 | if (((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) && |
34dc7c2f | 2313 | (state != arc_anon)) { |
ffdf019c | 2314 | multilist_insert(&state->arcs_list[arc_buf_type(hdr)], hdr); |
d3c2ae1c GW |
2315 | ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); |
2316 | arc_evictable_space_increment(hdr, state); | |
34dc7c2f BB |
2317 | } |
2318 | return (cnt); | |
2319 | } | |
2320 | ||
e0b0ca98 BB |
2321 | /* |
2322 | * Returns detailed information about a specific arc buffer. When the | |
2323 |	 * state_index argument is set, the function will calculate the arc header
2324 |	 * list position for its arc state. Since this requires a linear traversal,
2325 |	 * callers are strongly encouraged not to do this. However, it can be helpful
2326 | * for targeted analysis so the functionality is provided. | |
2327 | */ | |
2328 | void | |
2329 | arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index) | |
2330 | { | |
2331 | arc_buf_hdr_t *hdr = ab->b_hdr; | |
b9541d6b CW |
2332 | l1arc_buf_hdr_t *l1hdr = NULL; |
2333 | l2arc_buf_hdr_t *l2hdr = NULL; | |
2334 | arc_state_t *state = NULL; | |
2335 | ||
8887c7d7 TC |
2336 | memset(abi, 0, sizeof (arc_buf_info_t)); |
2337 | ||
2338 | if (hdr == NULL) | |
2339 | return; | |
2340 | ||
2341 | abi->abi_flags = hdr->b_flags; | |
2342 | ||
b9541d6b CW |
2343 | if (HDR_HAS_L1HDR(hdr)) { |
2344 | l1hdr = &hdr->b_l1hdr; | |
2345 | state = l1hdr->b_state; | |
2346 | } | |
2347 | if (HDR_HAS_L2HDR(hdr)) | |
2348 | l2hdr = &hdr->b_l2hdr; | |
e0b0ca98 | 2349 | |
b9541d6b | 2350 | if (l1hdr) { |
d3c2ae1c | 2351 | abi->abi_bufcnt = l1hdr->b_bufcnt; |
b9541d6b CW |
2352 | abi->abi_access = l1hdr->b_arc_access; |
2353 | abi->abi_mru_hits = l1hdr->b_mru_hits; | |
2354 | abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits; | |
2355 | abi->abi_mfu_hits = l1hdr->b_mfu_hits; | |
2356 | abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits; | |
424fd7c3 | 2357 | abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt); |
b9541d6b CW |
2358 | } |
2359 | ||
2360 | if (l2hdr) { | |
2361 | abi->abi_l2arc_dattr = l2hdr->b_daddr; | |
b9541d6b CW |
2362 | abi->abi_l2arc_hits = l2hdr->b_hits; |
2363 | } | |
2364 | ||
e0b0ca98 | 2365 | abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON; |
b9541d6b | 2366 | abi->abi_state_contents = arc_buf_type(hdr); |
d3c2ae1c | 2367 | abi->abi_size = arc_hdr_size(hdr); |
e0b0ca98 BB |
2368 | } |
2369 | ||
34dc7c2f | 2370 | /* |
ca0bf58d | 2371 | * Move the supplied buffer to the indicated state. The hash lock |
34dc7c2f BB |
2372 | * for the buffer must be held by the caller. |
2373 | */ | |
2374 | static void | |
2a432414 GW |
2375 | arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr, |
2376 | kmutex_t *hash_lock) | |
34dc7c2f | 2377 | { |
b9541d6b CW |
2378 | arc_state_t *old_state; |
2379 | int64_t refcnt; | |
d3c2ae1c GW |
2380 | uint32_t bufcnt; |
2381 | boolean_t update_old, update_new; | |
b9541d6b CW |
2382 | arc_buf_contents_t buftype = arc_buf_type(hdr); |
2383 | ||
2384 | /* | |
2385 | * We almost always have an L1 hdr here, since we call arc_hdr_realloc() | |
2386 | * in arc_read() when bringing a buffer out of the L2ARC. However, the | |
2387 | * L1 hdr doesn't always exist when we change state to arc_anon before | |
2388 | * destroying a header, in which case reallocating to add the L1 hdr is | |
2389 | * pointless. | |
2390 | */ | |
2391 | if (HDR_HAS_L1HDR(hdr)) { | |
2392 | old_state = hdr->b_l1hdr.b_state; | |
424fd7c3 | 2393 | refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); |
d3c2ae1c | 2394 | bufcnt = hdr->b_l1hdr.b_bufcnt; |
b5256303 TC |
2395 | update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL || |
2396 | HDR_HAS_RABD(hdr)); | |
b9541d6b CW |
2397 | } else { |
2398 | old_state = arc_l2c_only; | |
2399 | refcnt = 0; | |
d3c2ae1c GW |
2400 | bufcnt = 0; |
2401 | update_old = B_FALSE; | |
b9541d6b | 2402 | } |
d3c2ae1c | 2403 | update_new = update_old; |
34dc7c2f BB |
2404 | |
2405 | ASSERT(MUTEX_HELD(hash_lock)); | |
e8b96c60 | 2406 | ASSERT3P(new_state, !=, old_state); |
d3c2ae1c GW |
2407 | ASSERT(!GHOST_STATE(new_state) || bufcnt == 0); |
2408 | ASSERT(old_state != arc_anon || bufcnt <= 1); | |
34dc7c2f BB |
2409 | |
2410 | /* | |
2411 | * If this buffer is evictable, transfer it from the | |
2412 | * old state list to the new state list. | |
2413 | */ | |
2414 | if (refcnt == 0) { | |
b9541d6b | 2415 | if (old_state != arc_anon && old_state != arc_l2c_only) { |
b9541d6b | 2416 | ASSERT(HDR_HAS_L1HDR(hdr)); |
ffdf019c | 2417 | multilist_remove(&old_state->arcs_list[buftype], hdr); |
34dc7c2f | 2418 | |
d3c2ae1c GW |
2419 | if (GHOST_STATE(old_state)) { |
2420 | ASSERT0(bufcnt); | |
2421 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); | |
2422 | update_old = B_TRUE; | |
34dc7c2f | 2423 | } |
2aa34383 | 2424 | arc_evictable_space_decrement(hdr, old_state); |
34dc7c2f | 2425 | } |
b9541d6b | 2426 | if (new_state != arc_anon && new_state != arc_l2c_only) { |
b9541d6b CW |
2427 | /* |
2428 | * An L1 header always exists here, since if we're | |
2429 | * moving to some L1-cached state (i.e. not l2c_only or | |
2430 | * anonymous), we realloc the header to add an L1hdr | |
2431 | * beforehand. | |
2432 | */ | |
2433 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
ffdf019c | 2434 | multilist_insert(&new_state->arcs_list[buftype], hdr); |
34dc7c2f | 2435 | |
34dc7c2f | 2436 | if (GHOST_STATE(new_state)) { |
d3c2ae1c GW |
2437 | ASSERT0(bufcnt); |
2438 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); | |
2439 | update_new = B_TRUE; | |
34dc7c2f | 2440 | } |
d3c2ae1c | 2441 | arc_evictable_space_increment(hdr, new_state); |
34dc7c2f BB |
2442 | } |
2443 | } | |
2444 | ||
d3c2ae1c | 2445 | ASSERT(!HDR_EMPTY(hdr)); |
2a432414 GW |
2446 | if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) |
2447 | buf_hash_remove(hdr); | |
34dc7c2f | 2448 | |
b9541d6b | 2449 | /* adjust state sizes (ignore arc_l2c_only) */ |
36da08ef | 2450 | |
d3c2ae1c | 2451 | if (update_new && new_state != arc_l2c_only) { |
36da08ef PS |
2452 | ASSERT(HDR_HAS_L1HDR(hdr)); |
2453 | if (GHOST_STATE(new_state)) { | |
d3c2ae1c | 2454 | ASSERT0(bufcnt); |
36da08ef PS |
2455 | |
2456 | /* | |
d3c2ae1c | 2457 | * When moving a header to a ghost state, we first |
36da08ef | 2458 | * remove all arc buffers. Thus, we'll have a |
d3c2ae1c | 2459 | * bufcnt of zero, and no arc buffer to use for |
36da08ef PS |
2460 | * the reference. As a result, we use the arc |
2461 | * header pointer for the reference. | |
2462 | */ | |
424fd7c3 | 2463 | (void) zfs_refcount_add_many(&new_state->arcs_size, |
d3c2ae1c | 2464 | HDR_GET_LSIZE(hdr), hdr); |
a6255b7f | 2465 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 2466 | ASSERT(!HDR_HAS_RABD(hdr)); |
36da08ef | 2467 | } else { |
d3c2ae1c | 2468 | uint32_t buffers = 0; |
36da08ef PS |
2469 | |
2470 | /* | |
2471 | * Each individual buffer holds a unique reference, | |
2472 | * thus we must remove each of these references one | |
2473 | * at a time. | |
2474 | */ | |
1c27024e | 2475 | for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; |
36da08ef | 2476 | buf = buf->b_next) { |
d3c2ae1c GW |
2477 | ASSERT3U(bufcnt, !=, 0); |
2478 | buffers++; | |
2479 | ||
2480 | /* | |
2481 | * When the arc_buf_t is sharing the data | |
2482 | * block with the hdr, the owner of the | |
2483 | * reference belongs to the hdr. Only | |
2484 | * add to the refcount if the arc_buf_t is | |
2485 | * not shared. | |
2486 | */ | |
2aa34383 | 2487 | if (arc_buf_is_shared(buf)) |
d3c2ae1c | 2488 | continue; |
d3c2ae1c | 2489 | |
424fd7c3 TS |
2490 | (void) zfs_refcount_add_many( |
2491 | &new_state->arcs_size, | |
2aa34383 | 2492 | arc_buf_size(buf), buf); |
d3c2ae1c GW |
2493 | } |
2494 | ASSERT3U(bufcnt, ==, buffers); | |
2495 | ||
a6255b7f | 2496 | if (hdr->b_l1hdr.b_pabd != NULL) { |
424fd7c3 TS |
2497 | (void) zfs_refcount_add_many( |
2498 | &new_state->arcs_size, | |
d3c2ae1c | 2499 | arc_hdr_size(hdr), hdr); |
b5256303 TC |
2500 | } |
2501 | ||
2502 | if (HDR_HAS_RABD(hdr)) { | |
424fd7c3 TS |
2503 | (void) zfs_refcount_add_many( |
2504 | &new_state->arcs_size, | |
b5256303 | 2505 | HDR_GET_PSIZE(hdr), hdr); |
36da08ef PS |
2506 | } |
2507 | } | |
2508 | } | |
2509 | ||
d3c2ae1c | 2510 | if (update_old && old_state != arc_l2c_only) { |
36da08ef PS |
2511 | ASSERT(HDR_HAS_L1HDR(hdr)); |
2512 | if (GHOST_STATE(old_state)) { | |
d3c2ae1c | 2513 | ASSERT0(bufcnt); |
a6255b7f | 2514 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 2515 | ASSERT(!HDR_HAS_RABD(hdr)); |
d3c2ae1c | 2516 | |
36da08ef PS |
2517 | /* |
2518 | * When moving a header off of a ghost state, | |
d3c2ae1c GW |
2519 | * the header will not contain any arc buffers. |
2520 | * We use the arc header pointer for the reference | |
2521 | * which is exactly what we did when we put the | |
2522 | * header on the ghost state. | |
36da08ef PS |
2523 | */ |
2524 | ||
424fd7c3 | 2525 | (void) zfs_refcount_remove_many(&old_state->arcs_size, |
d3c2ae1c | 2526 | HDR_GET_LSIZE(hdr), hdr); |
36da08ef | 2527 | } else { |
d3c2ae1c | 2528 | uint32_t buffers = 0; |
36da08ef PS |
2529 | |
2530 | /* | |
2531 | * Each individual buffer holds a unique reference, | |
2532 | * thus we must remove each of these references one | |
2533 | * at a time. | |
2534 | */ | |
1c27024e | 2535 | for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; |
36da08ef | 2536 | buf = buf->b_next) { |
d3c2ae1c GW |
2537 | ASSERT3U(bufcnt, !=, 0); |
2538 | buffers++; | |
2539 | ||
2540 | /* | |
2541 | * When the arc_buf_t is sharing the data | |
2542 | * block with the hdr, the reference |
2543 | * belongs to the hdr. Only remove |
2544 | * from the refcount if the arc_buf_t is |
2545 | * not shared. |
2546 | */ | |
2aa34383 | 2547 | if (arc_buf_is_shared(buf)) |
d3c2ae1c | 2548 | continue; |
d3c2ae1c | 2549 | |
424fd7c3 | 2550 | (void) zfs_refcount_remove_many( |
2aa34383 | 2551 | &old_state->arcs_size, arc_buf_size(buf), |
d3c2ae1c | 2552 | buf); |
36da08ef | 2553 | } |
d3c2ae1c | 2554 | ASSERT3U(bufcnt, ==, buffers); |
b5256303 TC |
2555 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || |
2556 | HDR_HAS_RABD(hdr)); | |
2557 | ||
2558 | if (hdr->b_l1hdr.b_pabd != NULL) { | |
424fd7c3 | 2559 | (void) zfs_refcount_remove_many( |
b5256303 TC |
2560 | &old_state->arcs_size, arc_hdr_size(hdr), |
2561 | hdr); | |
2562 | } | |
2563 | ||
2564 | if (HDR_HAS_RABD(hdr)) { | |
424fd7c3 | 2565 | (void) zfs_refcount_remove_many( |
b5256303 TC |
2566 | &old_state->arcs_size, HDR_GET_PSIZE(hdr), |
2567 | hdr); | |
2568 | } | |
36da08ef | 2569 | } |
34dc7c2f | 2570 | } |
36da08ef | 2571 | |
08532162 | 2572 | if (HDR_HAS_L1HDR(hdr)) { |
b9541d6b | 2573 | hdr->b_l1hdr.b_state = new_state; |
34dc7c2f | 2574 | |
08532162 GA |
2575 | if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) { |
2576 | l2arc_hdr_arcstats_decrement_state(hdr); | |
2577 | hdr->b_l2hdr.b_arcs_state = new_state->arcs_state; | |
2578 | l2arc_hdr_arcstats_increment_state(hdr); | |
2579 | } | |
2580 | } | |
2581 | ||
b9541d6b CW |
2582 | /* |
2583 | * L2 headers should never be on the L2 state list since they don't | |
2584 | * have L1 headers allocated. | |
2585 | */ | |
ffdf019c AM |
2586 | ASSERT(multilist_is_empty(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]) && |
2587 | multilist_is_empty(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA])); | |
34dc7c2f BB |
2588 | } |
2589 | ||
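/*
 * Account for newly consumed ARC space of the given type: bump the
 * matching kstat, fold metadata types into the arcstat_meta_used
 * aggsum, and add the delta to the global arcstat_size aggsum.
 */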
2590 | void | |
d164b209 | 2591 | arc_space_consume(uint64_t space, arc_space_type_t type) |
34dc7c2f | 2592 | { |
d164b209 BB |
2593 | ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); |
2594 | ||
2595 | switch (type) { | |
e75c13c3 BB |
2596 | default: |
2597 | break; | |
d164b209 | 2598 | case ARC_SPACE_DATA: |
c4c162c1 | 2599 | ARCSTAT_INCR(arcstat_data_size, space); |
d164b209 | 2600 | break; |
cc7f677c | 2601 | case ARC_SPACE_META: |
c4c162c1 | 2602 | ARCSTAT_INCR(arcstat_metadata_size, space); |
cc7f677c | 2603 | break; |
25458cbe | 2604 | case ARC_SPACE_BONUS: |
c4c162c1 | 2605 | ARCSTAT_INCR(arcstat_bonus_size, space); |
25458cbe TC |
2606 | break; |
2607 | case ARC_SPACE_DNODE: | |
c4c162c1 | 2608 | aggsum_add(&arc_sums.arcstat_dnode_size, space); |
25458cbe TC |
2609 | break; |
2610 | case ARC_SPACE_DBUF: | |
c4c162c1 | 2611 | ARCSTAT_INCR(arcstat_dbuf_size, space); |
d164b209 BB |
2612 | break; |
2613 | case ARC_SPACE_HDRS: | |
c4c162c1 | 2614 | ARCSTAT_INCR(arcstat_hdr_size, space); |
d164b209 BB |
2615 | break; |
2616 | case ARC_SPACE_L2HDRS: | |
c4c162c1 | 2617 | aggsum_add(&arc_sums.arcstat_l2_hdr_size, space); |
d164b209 | 2618 | break; |
85ec5cba MA |
2619 | case ARC_SPACE_ABD_CHUNK_WASTE: |
2620 | /* | |
2621 | * Note: this includes space wasted by all scatter ABDs, not |
2622 | * just those allocated by the ARC. But the vast majority of | |
2623 | * scatter ABDs come from the ARC, because other users are |
2624 | * very short-lived. | |
2625 | */ | |
c4c162c1 | 2626 | ARCSTAT_INCR(arcstat_abd_chunk_waste_size, space); |
85ec5cba | 2627 | break; |
d164b209 BB |
2628 | } |
2629 | ||
85ec5cba | 2630 | if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) |
c4c162c1 | 2631 | aggsum_add(&arc_sums.arcstat_meta_used, space); |
cc7f677c | 2632 | |
c4c162c1 | 2633 | aggsum_add(&arc_sums.arcstat_size, space); |
34dc7c2f BB |
2634 | } |
2635 | ||
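/*
 * Release ARC space accounting previously taken via arc_space_consume(),
 * decrementing the matching kstat along with the arcstat_meta_used and
 * arcstat_size aggsums (asserting that neither underflows).
 */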
2636 | void | |
d164b209 | 2637 | arc_space_return(uint64_t space, arc_space_type_t type) |
34dc7c2f | 2638 | { |
d164b209 BB |
2639 | ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); |
2640 | ||
2641 | switch (type) { | |
e75c13c3 BB |
2642 | default: |
2643 | break; | |
d164b209 | 2644 | case ARC_SPACE_DATA: |
c4c162c1 | 2645 | ARCSTAT_INCR(arcstat_data_size, -space); |
d164b209 | 2646 | break; |
cc7f677c | 2647 | case ARC_SPACE_META: |
c4c162c1 | 2648 | ARCSTAT_INCR(arcstat_metadata_size, -space); |
cc7f677c | 2649 | break; |
25458cbe | 2650 | case ARC_SPACE_BONUS: |
c4c162c1 | 2651 | ARCSTAT_INCR(arcstat_bonus_size, -space); |
25458cbe TC |
2652 | break; |
2653 | case ARC_SPACE_DNODE: | |
c4c162c1 | 2654 | aggsum_add(&arc_sums.arcstat_dnode_size, -space); |
25458cbe TC |
2655 | break; |
2656 | case ARC_SPACE_DBUF: | |
c4c162c1 | 2657 | ARCSTAT_INCR(arcstat_dbuf_size, -space); |
d164b209 BB |
2658 | break; |
2659 | case ARC_SPACE_HDRS: | |
c4c162c1 | 2660 | ARCSTAT_INCR(arcstat_hdr_size, -space); |
d164b209 BB |
2661 | break; |
2662 | case ARC_SPACE_L2HDRS: | |
c4c162c1 | 2663 | aggsum_add(&arc_sums.arcstat_l2_hdr_size, -space); |
d164b209 | 2664 | break; |
85ec5cba | 2665 | case ARC_SPACE_ABD_CHUNK_WASTE: |
c4c162c1 | 2666 | ARCSTAT_INCR(arcstat_abd_chunk_waste_size, -space); |
85ec5cba | 2667 | break; |
d164b209 BB |
2668 | } |
2669 | ||
85ec5cba | 2670 | if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) { |
c4c162c1 AM |
2671 | ASSERT(aggsum_compare(&arc_sums.arcstat_meta_used, |
2672 | space) >= 0); | |
2673 | ARCSTAT_MAX(arcstat_meta_max, | |
2674 | aggsum_upper_bound(&arc_sums.arcstat_meta_used)); | |
2675 | aggsum_add(&arc_sums.arcstat_meta_used, -space); | |
cc7f677c PS |
2676 | } |
2677 | ||
c4c162c1 AM |
2678 | ASSERT(aggsum_compare(&arc_sums.arcstat_size, space) >= 0); |
2679 | aggsum_add(&arc_sums.arcstat_size, -space); | |
34dc7c2f BB |
2680 | } |
2681 | ||
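/*
 * Consumers are expected to pair these calls around the lifetime of the
 * object being accounted, e.g. (an illustrative sketch, not a verbatim
 * caller):
 *
 *	arc_space_consume(sizeof (dmu_buf_impl_t), ARC_SPACE_DBUF);
 *	...use the dbuf...
 *	arc_space_return(sizeof (dmu_buf_impl_t), ARC_SPACE_DBUF);
 */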
d3c2ae1c | 2682 | /* |
524b4217 | 2683 | * Given a hdr and a buf, returns whether that buf can share its b_data buffer |
a6255b7f | 2684 | * with the hdr's b_pabd. |
d3c2ae1c | 2685 | */ |
524b4217 DK |
2686 | static boolean_t |
2687 | arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) | |
2688 | { | |
524b4217 DK |
2689 | /* |
2690 | * The criteria for sharing a hdr's data are: | |
b5256303 TC |
2691 | * 1. the buffer is not encrypted |
2692 | * 2. the hdr's compression matches the buf's compression | |
2693 | * 3. the hdr doesn't need to be byteswapped | |
2694 | * 4. the hdr isn't already being shared | |
2695 | * 5. the buf is either compressed or it is the last buf in the hdr list | |
524b4217 | 2696 | * |
b5256303 | 2697 | * Criterion #5 maintains the invariant that shared uncompressed |
524b4217 DK |
2698 | * bufs must be the final buf in the hdr's b_buf list. Reading this, you |
2699 | * might ask, "if a compressed buf is allocated first, won't that be the | |
2700 | * last thing in the list?", but in that case it's impossible to create | |
2701 | * a shared uncompressed buf anyway (because the hdr must be compressed | |
2702 | * to have the compressed buf). You might also think that #3 is | |
2703 | * sufficient to make this guarantee, however it's possible | |
2704 | * (specifically in the rare L2ARC write race mentioned in | |
2705 | * arc_buf_alloc_impl()) there will be an existing uncompressed buf that | |
e1cfd73f | 2706 | * is shareable, but wasn't at the time of its allocation. Rather than |
524b4217 DK |
2707 | * allow a new shared uncompressed buf to be created and then shuffle |
2708 | * the list around to make it the last element, this simply disallows | |
2709 | * sharing if the new buf isn't the first to be added. | |
2710 | */ | |
2711 | ASSERT3P(buf->b_hdr, ==, hdr); | |
b5256303 TC |
2712 | boolean_t hdr_compressed = |
2713 | arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF; | |
a7004725 | 2714 | boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; |
b5256303 TC |
2715 | return (!ARC_BUF_ENCRYPTED(buf) && |
2716 | buf_compressed == hdr_compressed && | |
524b4217 DK |
2717 | hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && |
2718 | !HDR_SHARED_DATA(hdr) && | |
2719 | (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); | |
2720 | } | |
2721 | ||
2722 | /* | |
2723 | * Allocate a buf for this hdr. If you care about the data that's in the hdr, | |
2724 | * or if you want a compressed buffer, pass those flags in. Returns 0 if the | |
2725 | * copy was made successfully, or an error code otherwise. | |
2726 | */ | |
2727 | static int | |
be9a5c35 TC |
2728 | arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, |
2729 | void *tag, boolean_t encrypted, boolean_t compressed, boolean_t noauth, | |
524b4217 | 2730 | boolean_t fill, arc_buf_t **ret) |
34dc7c2f | 2731 | { |
34dc7c2f | 2732 | arc_buf_t *buf; |
b5256303 | 2733 | arc_fill_flags_t flags = ARC_FILL_LOCKED; |
34dc7c2f | 2734 | |
d3c2ae1c GW |
2735 | ASSERT(HDR_HAS_L1HDR(hdr)); |
2736 | ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); | |
2737 | VERIFY(hdr->b_type == ARC_BUFC_DATA || | |
2738 | hdr->b_type == ARC_BUFC_METADATA); | |
524b4217 DK |
2739 | ASSERT3P(ret, !=, NULL); |
2740 | ASSERT3P(*ret, ==, NULL); | |
b5256303 | 2741 | IMPLY(encrypted, compressed); |
d3c2ae1c | 2742 | |
b9541d6b CW |
2743 | hdr->b_l1hdr.b_mru_hits = 0; |
2744 | hdr->b_l1hdr.b_mru_ghost_hits = 0; | |
2745 | hdr->b_l1hdr.b_mfu_hits = 0; | |
2746 | hdr->b_l1hdr.b_mfu_ghost_hits = 0; | |
2747 | hdr->b_l1hdr.b_l2_hits = 0; | |
2748 | ||
524b4217 | 2749 | buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); |
34dc7c2f BB |
2750 | buf->b_hdr = hdr; |
2751 | buf->b_data = NULL; | |
2aa34383 | 2752 | buf->b_next = hdr->b_l1hdr.b_buf; |
524b4217 | 2753 | buf->b_flags = 0; |
b9541d6b | 2754 | |
d3c2ae1c GW |
2755 | add_reference(hdr, tag); |
2756 | ||
2757 | /* | |
2758 | * We're about to change the hdr's b_flags. We must either | |
2759 | * hold the hash_lock or be undiscoverable. | |
2760 | */ | |
ca6c7a94 | 2761 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
d3c2ae1c GW |
2762 | |
2763 | /* | |
524b4217 | 2764 | * Only honor requests for compressed bufs if the hdr is actually |
e1cfd73f | 2765 | * compressed. This must be overridden if the buffer is encrypted since |
b5256303 | 2766 | * encrypted buffers cannot be decompressed. |
524b4217 | 2767 | */ |
b5256303 TC |
2768 | if (encrypted) { |
2769 | buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; | |
2770 | buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; | |
2771 | flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; | |
2772 | } else if (compressed && | |
2773 | arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { | |
524b4217 | 2774 | buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; |
b5256303 TC |
2775 | flags |= ARC_FILL_COMPRESSED; |
2776 | } | |
2777 | ||
2778 | if (noauth) { | |
2779 | ASSERT0(encrypted); | |
2780 | flags |= ARC_FILL_NOAUTH; | |
2781 | } | |
524b4217 | 2782 | |
524b4217 DK |
2783 | /* |
2784 | * If the hdr's data can be shared then we share the data buffer and | |
2785 | * set the appropriate bit in the hdr's b_flags to indicate the hdr is | |
5662fd57 MA |
2786 | * sharing its b_pabd with the arc_buf_t. Otherwise, we allocate a new |
2787 | * buffer to store the buf's data. | |
524b4217 | 2788 | * |
a6255b7f DQ |
2789 | * There are two additional restrictions here because we're sharing |
2790 | * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be | |
2791 | * actively involved in an L2ARC write, because if this buf is used by | |
2792 | * an arc_write() then the hdr's data buffer will be released when the | |
524b4217 | 2793 | * write completes, even though the L2ARC write might still be using it. |
a6255b7f | 2794 | * Second, the hdr's ABD must be linear so that the buf's user doesn't |
5662fd57 MA |
2795 | * need to be ABD-aware. It must be allocated via |
2796 | * zio_[data_]buf_alloc(), not as a page, because we need to be able | |
2797 | * to abd_release_ownership_of_buf(), which isn't allowed on "linear | |
2798 | * page" buffers because the ABD code needs to handle freeing them | |
2799 | * specially. | |
2800 | */ | |
2801 | boolean_t can_share = arc_can_share(hdr, buf) && | |
2802 | !HDR_L2_WRITING(hdr) && | |
2803 | hdr->b_l1hdr.b_pabd != NULL && | |
2804 | abd_is_linear(hdr->b_l1hdr.b_pabd) && | |
2805 | !abd_is_linear_page(hdr->b_l1hdr.b_pabd); | |
524b4217 DK |
2806 | |
2807 | /* Set up b_data and sharing */ | |
2808 | if (can_share) { | |
a6255b7f | 2809 | buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); |
524b4217 | 2810 | buf->b_flags |= ARC_BUF_FLAG_SHARED; |
d3c2ae1c GW |
2811 | arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); |
2812 | } else { | |
524b4217 DK |
2813 | buf->b_data = |
2814 | arc_get_data_buf(hdr, arc_buf_size(buf), buf); | |
2815 | ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); | |
d3c2ae1c GW |
2816 | } |
2817 | VERIFY3P(buf->b_data, !=, NULL); | |
b9541d6b CW |
2818 | |
2819 | hdr->b_l1hdr.b_buf = buf; | |
d3c2ae1c | 2820 | hdr->b_l1hdr.b_bufcnt += 1; |
b5256303 TC |
2821 | if (encrypted) |
2822 | hdr->b_crypt_hdr.b_ebufcnt += 1; | |
b9541d6b | 2823 | |
524b4217 DK |
2824 | /* |
2825 | * If the user wants the data from the hdr, we need to either copy or | |
2826 | * decompress the data. | |
2827 | */ | |
2828 | if (fill) { | |
be9a5c35 TC |
2829 | ASSERT3P(zb, !=, NULL); |
2830 | return (arc_buf_fill(buf, spa, zb, flags)); | |
524b4217 | 2831 | } |
d3c2ae1c | 2832 | |
524b4217 | 2833 | return (0); |
34dc7c2f BB |
2834 | } |
2835 | ||
9babb374 BB |
2836 | static char *arc_onloan_tag = "onloan"; |
2837 | ||
a7004725 DK |
2838 | static inline void |
2839 | arc_loaned_bytes_update(int64_t delta) | |
2840 | { | |
2841 | atomic_add_64(&arc_loaned_bytes, delta); | |
2842 | ||
2843 | /* assert that it did not wrap around */ | |
2844 | ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); | |
2845 | } | |
2846 | ||
9babb374 BB |
2847 | /* |
2848 | * Loan out an anonymous arc buffer. Loaned buffers are not counted as in | |
2849 | * flight data by arc_tempreserve_space() until they are "returned". Loaned | |
2850 | * buffers must be returned to the arc before they can be used by the DMU or | |
2851 | * freed. | |
2852 | */ | |
2853 | arc_buf_t * | |
2aa34383 | 2854 | arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) |
9babb374 | 2855 | { |
2aa34383 DK |
2856 | arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, |
2857 | is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size); | |
9babb374 | 2858 | |
5152a740 | 2859 | arc_loaned_bytes_update(arc_buf_size(buf)); |
a7004725 | 2860 | |
9babb374 BB |
2861 | return (buf); |
2862 | } | |
2863 | ||
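/*
 * Loan out a compressed arc buffer, following the same loan protocol as
 * arc_loan_buf().
 */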
2aa34383 DK |
2864 | arc_buf_t * |
2865 | arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, | |
10b3c7f5 | 2866 | enum zio_compress compression_type, uint8_t complevel) |
2aa34383 DK |
2867 | { |
2868 | arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, | |
10b3c7f5 | 2869 | psize, lsize, compression_type, complevel); |
2aa34383 | 2870 | |
5152a740 | 2871 | arc_loaned_bytes_update(arc_buf_size(buf)); |
a7004725 | 2872 | |
2aa34383 DK |
2873 | return (buf); |
2874 | } | |
2875 | ||
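/*
 * Loan out a raw (encrypted) arc buffer, recording the crypt parameters
 * (byteorder, salt, IV, MAC) it was written with. The loaned bytes are
 * accounted at psize, since raw buffers hold the compressed on-disk data.
 */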
b5256303 TC |
2876 | arc_buf_t * |
2877 | arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, | |
2878 | const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, | |
2879 | dmu_object_type_t ot, uint64_t psize, uint64_t lsize, | |
10b3c7f5 | 2880 | enum zio_compress compression_type, uint8_t complevel) |
b5256303 TC |
2881 | { |
2882 | arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, | |
10b3c7f5 MN |
2883 | byteorder, salt, iv, mac, ot, psize, lsize, compression_type, |
2884 | complevel); | |
b5256303 TC |
2885 | |
2886 | atomic_add_64(&arc_loaned_bytes, psize); | |
2887 | return (buf); | |
2888 | } | |
2889 | ||
2aa34383 | 2890 | |
9babb374 BB |
2891 | /* |
2892 | * Return a loaned arc buffer to the arc. | |
2893 | */ | |
2894 | void | |
2895 | arc_return_buf(arc_buf_t *buf, void *tag) | |
2896 | { | |
2897 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
2898 | ||
d3c2ae1c | 2899 | ASSERT3P(buf->b_data, !=, NULL); |
b9541d6b | 2900 | ASSERT(HDR_HAS_L1HDR(hdr)); |
c13060e4 | 2901 | (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); |
424fd7c3 | 2902 | (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); |
9babb374 | 2903 | |
a7004725 | 2904 | arc_loaned_bytes_update(-arc_buf_size(buf)); |
9babb374 BB |
2905 | } |
2906 | ||
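/*
 * A minimal, hypothetical sketch of the loan protocol (real callers
 * normally go through the DMU, e.g. dmu_request_arcbuf()):
 *
 *	arc_buf_t *abuf = arc_loan_buf(spa, B_FALSE, size);
 *	bcopy(src, abuf->b_data, size);	  (fill while on loan)
 *	arc_return_buf(abuf, FTAG);	  (abuf is now referenced by FTAG)
 *	arc_buf_destroy(abuf, FTAG);	  (or hand it to the DMU instead)
 */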
428870ff BB |
2907 | /* Detach an arc_buf from a dbuf (tag) */ |
2908 | void | |
2909 | arc_loan_inuse_buf(arc_buf_t *buf, void *tag) | |
2910 | { | |
b9541d6b | 2911 | arc_buf_hdr_t *hdr = buf->b_hdr; |
428870ff | 2912 | |
d3c2ae1c | 2913 | ASSERT3P(buf->b_data, !=, NULL); |
b9541d6b | 2914 | ASSERT(HDR_HAS_L1HDR(hdr)); |
c13060e4 | 2915 | (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); |
424fd7c3 | 2916 | (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); |
428870ff | 2917 | |
a7004725 | 2918 | arc_loaned_bytes_update(arc_buf_size(buf)); |
428870ff BB |
2919 | } |
2920 | ||
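/*
 * Queue an abd on the l2arc_free_on_write list; the L2ARC will free it
 * once the in-flight device write that still references it completes.
 */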
d3c2ae1c | 2921 | static void |
a6255b7f | 2922 | l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) |
34dc7c2f | 2923 | { |
d3c2ae1c | 2924 | l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); |
34dc7c2f | 2925 | |
a6255b7f | 2926 | df->l2df_abd = abd; |
d3c2ae1c GW |
2927 | df->l2df_size = size; |
2928 | df->l2df_type = type; | |
2929 | mutex_enter(&l2arc_free_on_write_mtx); | |
2930 | list_insert_head(l2arc_free_on_write, df); | |
2931 | mutex_exit(&l2arc_free_on_write_mtx); | |
2932 | } | |
428870ff | 2933 | |
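/*
 * Drop the state and space accounting for a hdr's data (or raw) buffer
 * that must outlive an in-flight L2ARC write, deferring the actual free
 * to l2arc_free_abd_on_write().
 */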
d3c2ae1c | 2934 | static void |
b5256303 | 2935 | arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) |
d3c2ae1c GW |
2936 | { |
2937 | arc_state_t *state = hdr->b_l1hdr.b_state; | |
2938 | arc_buf_contents_t type = arc_buf_type(hdr); | |
b5256303 | 2939 | uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); |
1eb5bfa3 | 2940 | |
d3c2ae1c GW |
2941 | /* protected by hash lock, if in the hash table */ |
2942 | if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { | |
424fd7c3 | 2943 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
d3c2ae1c GW |
2944 | ASSERT(state != arc_anon && state != arc_l2c_only); |
2945 | ||
424fd7c3 | 2946 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
d3c2ae1c | 2947 | size, hdr); |
1eb5bfa3 | 2948 | } |
424fd7c3 | 2949 | (void) zfs_refcount_remove_many(&state->arcs_size, size, hdr); |
423e7b62 AG |
2950 | if (type == ARC_BUFC_METADATA) { |
2951 | arc_space_return(size, ARC_SPACE_META); | |
2952 | } else { | |
2953 | ASSERT(type == ARC_BUFC_DATA); | |
2954 | arc_space_return(size, ARC_SPACE_DATA); | |
2955 | } | |
d3c2ae1c | 2956 | |
b5256303 TC |
2957 | if (free_rdata) { |
2958 | l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); | |
2959 | } else { | |
2960 | l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); | |
2961 | } | |
34dc7c2f BB |
2962 | } |
2963 | ||
d3c2ae1c GW |
2964 | /* |
2965 | * Share the arc_buf_t's data with the hdr. Whenever we are sharing the | |
2966 | * data buffer, we transfer the refcount ownership to the hdr and update | |
2967 | * the appropriate kstats. | |
2968 | */ | |
2969 | static void | |
2970 | arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) | |
34dc7c2f | 2971 | { |
524b4217 | 2972 | ASSERT(arc_can_share(hdr, buf)); |
a6255b7f | 2973 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 2974 | ASSERT(!ARC_BUF_ENCRYPTED(buf)); |
ca6c7a94 | 2975 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
34dc7c2f BB |
2976 | |
2977 | /* | |
d3c2ae1c GW |
2978 | * Start sharing the data buffer. We transfer the |
2979 | * refcount ownership to the hdr since it always owns | |
2980 | * the refcount whenever an arc_buf_t is shared. | |
34dc7c2f | 2981 | */ |
d7e4b30a BB |
2982 | zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, |
2983 | arc_hdr_size(hdr), buf, hdr); | |
a6255b7f DQ |
2984 | hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); |
2985 | abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, | |
2986 | HDR_ISTYPE_METADATA(hdr)); | |
d3c2ae1c | 2987 | arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); |
524b4217 | 2988 | buf->b_flags |= ARC_BUF_FLAG_SHARED; |
34dc7c2f | 2989 | |
d3c2ae1c GW |
2990 | /* |
2991 | * Since we've transferred ownership to the hdr we need | |
2992 | * to increment its compressed and uncompressed kstats and | |
2993 | * decrement the overhead size. | |
2994 | */ | |
2995 | ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); | |
2996 | ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); | |
2aa34383 | 2997 | ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); |
34dc7c2f BB |
2998 | } |
2999 | ||
ca0bf58d | 3000 | static void |
d3c2ae1c | 3001 | arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) |
ca0bf58d | 3002 | { |
d3c2ae1c | 3003 | ASSERT(arc_buf_is_shared(buf)); |
a6255b7f | 3004 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); |
ca6c7a94 | 3005 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
ca0bf58d | 3006 | |
d3c2ae1c GW |
3007 | /* |
3008 | * We are no longer sharing this buffer so we need | |
3009 | * to transfer its ownership to the rightful owner. | |
3010 | */ | |
d7e4b30a BB |
3011 | zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, |
3012 | arc_hdr_size(hdr), hdr, buf); | |
d3c2ae1c | 3013 | arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); |
a6255b7f | 3014 | abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); |
e2af2acc | 3015 | abd_free(hdr->b_l1hdr.b_pabd); |
a6255b7f | 3016 | hdr->b_l1hdr.b_pabd = NULL; |
524b4217 | 3017 | buf->b_flags &= ~ARC_BUF_FLAG_SHARED; |
d3c2ae1c GW |
3018 | |
3019 | /* | |
3020 | * Since the buffer is no longer shared between | |
3021 | * the arc buf and the hdr, count it as overhead. | |
3022 | */ | |
3023 | ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); | |
3024 | ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); | |
2aa34383 | 3025 | ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); |
ca0bf58d PS |
3026 | } |
3027 | ||
34dc7c2f | 3028 | /* |
2aa34383 DK |
3029 | * Remove an arc_buf_t from the hdr's buf list and return the last |
3030 | * arc_buf_t on the list. If no buffers remain on the list then return | |
3031 | * NULL. | |
3032 | */ | |
3033 | static arc_buf_t * | |
3034 | arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) | |
3035 | { | |
2aa34383 | 3036 | ASSERT(HDR_HAS_L1HDR(hdr)); |
ca6c7a94 | 3037 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
2aa34383 | 3038 | |
a7004725 DK |
3039 | arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; |
3040 | arc_buf_t *lastbuf = NULL; | |
3041 | ||
2aa34383 DK |
3042 | /* |
3043 | * Remove the buf from the hdr list and locate the last | |
3044 | * remaining buffer on the list. | |
3045 | */ | |
3046 | while (*bufp != NULL) { | |
3047 | if (*bufp == buf) | |
3048 | *bufp = buf->b_next; | |
3049 | ||
3050 | /* | |
3051 | * If we've removed a buffer in the middle of | |
3052 | * the list then update the lastbuf and update | |
3053 | * bufp. | |
3054 | */ | |
3055 | if (*bufp != NULL) { | |
3056 | lastbuf = *bufp; | |
3057 | bufp = &(*bufp)->b_next; | |
3058 | } | |
3059 | } | |
3060 | buf->b_next = NULL; | |
3061 | ASSERT3P(lastbuf, !=, buf); | |
3062 | IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL); | |
3063 | IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL); | |
3064 | IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); | |
3065 | ||
3066 | return (lastbuf); | |
3067 | } | |
3068 | ||
3069 | /* | |
e1cfd73f | 3070 | * Free up buf->b_data, pull the arc_buf_t off of the arc_buf_hdr_t's |
2aa34383 | 3071 | * list, and free the arc_buf_t itself. |
34dc7c2f BB |
3072 | */ |
3073 | static void | |
2aa34383 | 3074 | arc_buf_destroy_impl(arc_buf_t *buf) |
34dc7c2f | 3075 | { |
498877ba | 3076 | arc_buf_hdr_t *hdr = buf->b_hdr; |
ca0bf58d PS |
3077 | |
3078 | /* | |
524b4217 DK |
3079 | * Free up the data associated with the buf but only if we're not |
3080 | * sharing this with the hdr. If we are sharing it with the hdr, the | |
3081 | * hdr is responsible for doing the free. | |
ca0bf58d | 3082 | */ |
d3c2ae1c GW |
3083 | if (buf->b_data != NULL) { |
3084 | /* | |
3085 | * We're about to change the hdr's b_flags. We must either | |
3086 | * hold the hash_lock or be undiscoverable. | |
3087 | */ | |
ca6c7a94 | 3088 | ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); |
d3c2ae1c | 3089 | |
524b4217 | 3090 | arc_cksum_verify(buf); |
d3c2ae1c GW |
3091 | arc_buf_unwatch(buf); |
3092 | ||
2aa34383 | 3093 | if (arc_buf_is_shared(buf)) { |
d3c2ae1c GW |
3094 | arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); |
3095 | } else { | |
2aa34383 | 3096 | uint64_t size = arc_buf_size(buf); |
d3c2ae1c GW |
3097 | arc_free_data_buf(hdr, buf->b_data, size, buf); |
3098 | ARCSTAT_INCR(arcstat_overhead_size, -size); | |
3099 | } | |
3100 | buf->b_data = NULL; | |
3101 | ||
3102 | ASSERT(hdr->b_l1hdr.b_bufcnt > 0); | |
3103 | hdr->b_l1hdr.b_bufcnt -= 1; | |
b5256303 | 3104 | |
da5d4697 | 3105 | if (ARC_BUF_ENCRYPTED(buf)) { |
b5256303 TC |
3106 | hdr->b_crypt_hdr.b_ebufcnt -= 1; |
3107 | ||
da5d4697 D |
3108 | /* |
3109 | * If we have no more encrypted buffers and we've | |
3110 | * already gotten a copy of the decrypted data we can | |
3111 | * free b_rabd to save some space. | |
3112 | */ | |
3113 | if (hdr->b_crypt_hdr.b_ebufcnt == 0 && | |
3114 | HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL && | |
3115 | !HDR_IO_IN_PROGRESS(hdr)) { | |
3116 | arc_hdr_free_abd(hdr, B_TRUE); | |
3117 | } | |
440a3eb9 | 3118 | } |
d3c2ae1c GW |
3119 | } |
3120 | ||
a7004725 | 3121 | arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); |
d3c2ae1c | 3122 | |
524b4217 | 3123 | if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { |
2aa34383 | 3124 | /* |
524b4217 | 3125 | * If the current arc_buf_t is sharing its data buffer with the |
a6255b7f | 3126 | * hdr, then reassign the hdr's b_pabd to share it with the new |
524b4217 DK |
3127 | * buffer at the end of the list. The shared buffer is always |
3128 | * the last one on the hdr's buffer list. | |
3129 | * | |
3130 | * There is an equivalent case for compressed bufs, but since | |
3131 | * they aren't guaranteed to be the last buf in the list and | |
3132 | * that is an exceedingly rare case, we just allow that space to |
b5256303 TC |
3133 | * be wasted temporarily. We must also be careful not to share |
3134 | * encrypted buffers, for which arc_can_share() never permits sharing. |
2aa34383 | 3135 | */ |
b5256303 | 3136 | if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { |
524b4217 | 3137 | /* Only one buf can be shared at once */ |
2aa34383 | 3138 | VERIFY(!arc_buf_is_shared(lastbuf)); |
524b4217 DK |
3139 | /* hdr is uncompressed so can't have compressed buf */ |
3140 | VERIFY(!ARC_BUF_COMPRESSED(lastbuf)); | |
d3c2ae1c | 3141 | |
a6255b7f | 3142 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); |
b5256303 | 3143 | arc_hdr_free_abd(hdr, B_FALSE); |
d3c2ae1c | 3144 | |
2aa34383 DK |
3145 | /* |
3146 | * We must setup a new shared block between the | |
3147 | * last buffer and the hdr. The data would have | |
3148 | * been allocated by the arc buf so we need to transfer | |
3149 | * ownership to the hdr since it's now being shared. | |
3150 | */ | |
3151 | arc_share_buf(hdr, lastbuf); | |
3152 | } | |
3153 | } else if (HDR_SHARED_DATA(hdr)) { | |
d3c2ae1c | 3154 | /* |
2aa34383 DK |
3155 | * Uncompressed shared buffers are always at the end |
3156 | * of the list. Compressed buffers don't have the | |
3157 | * same requirements. This makes it hard to | |
3158 | * simply assert that the lastbuf is shared so | |
3159 | * we rely on the hdr's compression flags to determine | |
3160 | * if we have a compressed, shared buffer. | |
d3c2ae1c | 3161 | */ |
2aa34383 DK |
3162 | ASSERT3P(lastbuf, !=, NULL); |
3163 | ASSERT(arc_buf_is_shared(lastbuf) || | |
b5256303 | 3164 | arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); |
ca0bf58d PS |
3165 | } |
3166 | ||
a7004725 DK |
3167 | /* |
3168 | * Free the checksum if we're removing the last uncompressed buf from | |
3169 | * this hdr. | |
3170 | */ | |
3171 | if (!arc_hdr_has_uncompressed_buf(hdr)) { | |
d3c2ae1c | 3172 | arc_cksum_free(hdr); |
a7004725 | 3173 | } |
d3c2ae1c GW |
3174 | |
3175 | /* clean up the buf */ | |
3176 | buf->b_hdr = NULL; | |
3177 | kmem_cache_free(buf_cache, buf); | |
3178 | } | |
3179 | ||
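/*
 * Allocate the hdr's backing ABD: the raw b_rabd (sized to psize) when
 * ARC_HDR_ALLOC_RDATA is set in alloc_flags, otherwise the b_pabd
 * (sized by arc_hdr_size()), updating the size kstats to match.
 */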
3180 | static void | |
e111c802 | 3181 | arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags) |
d3c2ae1c | 3182 | { |
b5256303 | 3183 | uint64_t size; |
e111c802 MM |
3184 | boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); |
3185 | boolean_t do_adapt = ((alloc_flags & ARC_HDR_DO_ADAPT) != 0); | |
b5256303 | 3186 | |
d3c2ae1c GW |
3187 | ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); |
3188 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
b5256303 TC |
3189 | ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); |
3190 | IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); | |
d3c2ae1c | 3191 | |
b5256303 TC |
3192 | if (alloc_rdata) { |
3193 | size = HDR_GET_PSIZE(hdr); | |
3194 | ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); | |
e111c802 MM |
3195 | hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, |
3196 | do_adapt); | |
b5256303 TC |
3197 | ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); |
3198 | ARCSTAT_INCR(arcstat_raw_size, size); | |
3199 | } else { | |
3200 | size = arc_hdr_size(hdr); | |
3201 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); | |
e111c802 MM |
3202 | hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, |
3203 | do_adapt); | |
b5256303 TC |
3204 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); |
3205 | } | |
3206 | ||
3207 | ARCSTAT_INCR(arcstat_compressed_size, size); | |
d3c2ae1c GW |
3208 | ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); |
3209 | } | |
3210 | ||
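/*
 * Free the hdr's b_pabd, or its raw b_rabd when free_rdata is set. The
 * free is deferred to the free-on-write list if an L2ARC write is still
 * using the buffer; the size kstats are rolled back either way.
 */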
3211 | static void | |
b5256303 | 3212 | arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata) |
d3c2ae1c | 3213 | { |
b5256303 TC |
3214 | uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); |
3215 | ||
d3c2ae1c | 3216 | ASSERT(HDR_HAS_L1HDR(hdr)); |
b5256303 TC |
3217 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); |
3218 | IMPLY(free_rdata, HDR_HAS_RABD(hdr)); | |
d3c2ae1c | 3219 | |
ca0bf58d | 3220 | /* |
d3c2ae1c GW |
3221 | * If the hdr is currently being written to the l2arc then |
3222 | * we defer freeing the data by adding it to the l2arc_free_on_write | |
3223 | * list. The l2arc will free the data once it's finished | |
3224 | * writing it to the l2arc device. | |
ca0bf58d | 3225 | */ |
d3c2ae1c | 3226 | if (HDR_L2_WRITING(hdr)) { |
b5256303 | 3227 | arc_hdr_free_on_write(hdr, free_rdata); |
d3c2ae1c | 3228 | ARCSTAT_BUMP(arcstat_l2_free_on_write); |
b5256303 TC |
3229 | } else if (free_rdata) { |
3230 | arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); | |
d3c2ae1c | 3231 | } else { |
b5256303 | 3232 | arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr); |
ca0bf58d PS |
3233 | } |
3234 | ||
b5256303 TC |
3235 | if (free_rdata) { |
3236 | hdr->b_crypt_hdr.b_rabd = NULL; | |
3237 | ARCSTAT_INCR(arcstat_raw_size, -size); | |
3238 | } else { | |
3239 | hdr->b_l1hdr.b_pabd = NULL; | |
3240 | } | |
3241 | ||
3242 | if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) | |
3243 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; | |
3244 | ||
3245 | ARCSTAT_INCR(arcstat_compressed_size, -size); | |
d3c2ae1c GW |
3246 | ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); |
3247 | } | |
3248 | ||
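/*
 * Allocate and initialize a full (L1) header in the anonymous state,
 * including its backing ABD. Protected (encrypted) headers are drawn
 * from the full crypt cache instead of the regular full cache.
 */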
3249 | static arc_buf_hdr_t * | |
3250 | arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, | |
10b3c7f5 | 3251 | boolean_t protected, enum zio_compress compression_type, uint8_t complevel, |
b5256303 | 3252 | arc_buf_contents_t type, boolean_t alloc_rdata) |
d3c2ae1c GW |
3253 | { |
3254 | arc_buf_hdr_t *hdr; | |
e111c802 | 3255 | int flags = ARC_HDR_DO_ADAPT; |
d3c2ae1c | 3256 | |
d3c2ae1c | 3257 | VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); |
b5256303 TC |
3258 | if (protected) { |
3259 | hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE); | |
3260 | } else { | |
3261 | hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); | |
3262 | } | |
e111c802 | 3263 | flags |= alloc_rdata ? ARC_HDR_ALLOC_RDATA : 0; |
d3c2ae1c | 3264 | |
d3c2ae1c GW |
3265 | ASSERT(HDR_EMPTY(hdr)); |
3266 | ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); | |
3267 | HDR_SET_PSIZE(hdr, psize); | |
3268 | HDR_SET_LSIZE(hdr, lsize); | |
3269 | hdr->b_spa = spa; | |
3270 | hdr->b_type = type; | |
3271 | hdr->b_flags = 0; | |
3272 | arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); | |
2aa34383 | 3273 | arc_hdr_set_compress(hdr, compression_type); |
10b3c7f5 | 3274 | hdr->b_complevel = complevel; |
b5256303 TC |
3275 | if (protected) |
3276 | arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); | |
ca0bf58d | 3277 | |
d3c2ae1c GW |
3278 | hdr->b_l1hdr.b_state = arc_anon; |
3279 | hdr->b_l1hdr.b_arc_access = 0; | |
3280 | hdr->b_l1hdr.b_bufcnt = 0; | |
3281 | hdr->b_l1hdr.b_buf = NULL; | |
ca0bf58d | 3282 | |
d3c2ae1c GW |
3283 | /* |
3284 | * Allocate the hdr's buffer. This will contain either | |
3285 | * the compressed or uncompressed data depending on the block | |
3286 | * it references and compressed arc enablement. | |
3287 | */ | |
e111c802 | 3288 | arc_hdr_alloc_abd(hdr, flags); |
424fd7c3 | 3289 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
ca0bf58d | 3290 | |
d3c2ae1c | 3291 | return (hdr); |
ca0bf58d PS |
3292 | } |
3293 | ||
bd089c54 | 3294 | /* |
d3c2ae1c GW |
3295 | * Transition between the two allocation states for the arc_buf_hdr struct. |
3296 | * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without | |
3297 | * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller | |
3298 | * version is used when a cache buffer is only in the L2ARC in order to reduce | |
3299 | * memory usage. | |
bd089c54 | 3300 | */ |
d3c2ae1c GW |
3301 | static arc_buf_hdr_t * |
3302 | arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) | |
34dc7c2f | 3303 | { |
1c27024e DB |
3304 | ASSERT(HDR_HAS_L2HDR(hdr)); |
3305 | ||
d3c2ae1c GW |
3306 | arc_buf_hdr_t *nhdr; |
3307 | l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; | |
34dc7c2f | 3308 | |
d3c2ae1c GW |
3309 | ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || |
3310 | (old == hdr_l2only_cache && new == hdr_full_cache)); | |
34dc7c2f | 3311 | |
b5256303 TC |
3312 | /* |
3313 | * If the caller wanted a new full header and the header is to be |
3314 | * encrypted, we will actually allocate the header from the full crypt |
3315 | * cache instead. The same applies to freeing from the old cache. | |
3316 | */ | |
3317 | if (HDR_PROTECTED(hdr) && new == hdr_full_cache) | |
3318 | new = hdr_full_crypt_cache; | |
3319 | if (HDR_PROTECTED(hdr) && old == hdr_full_cache) | |
3320 | old = hdr_full_crypt_cache; | |
3321 | ||
d3c2ae1c | 3322 | nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); |
428870ff | 3323 | |
d3c2ae1c GW |
3324 | ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); |
3325 | buf_hash_remove(hdr); | |
ca0bf58d | 3326 | |
d3c2ae1c | 3327 | bcopy(hdr, nhdr, HDR_L2ONLY_SIZE); |
34dc7c2f | 3328 | |
b5256303 | 3329 | if (new == hdr_full_cache || new == hdr_full_crypt_cache) { |
d3c2ae1c GW |
3330 | arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); |
3331 | /* | |
3332 | * arc_access and arc_change_state need to be aware that a | |
3333 | * header has just come out of L2ARC, so we set its state to | |
3334 | * l2c_only even though it's about to change. | |
3335 | */ | |
3336 | nhdr->b_l1hdr.b_state = arc_l2c_only; | |
34dc7c2f | 3337 | |
d3c2ae1c | 3338 | /* Verify previous threads set to NULL before freeing */ |
a6255b7f | 3339 | ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 3340 | ASSERT(!HDR_HAS_RABD(hdr)); |
d3c2ae1c GW |
3341 | } else { |
3342 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); | |
3343 | ASSERT0(hdr->b_l1hdr.b_bufcnt); | |
3344 | ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); | |
36da08ef | 3345 | |
d3c2ae1c GW |
3346 | /* |
3347 | * If we've reached here, we must have been called from |
3348 | * arc_evict_hdr(); as such, we should have already been |
3349 | * removed from any ghost list we were previously on | |
3350 | * (which protects us from racing with arc_evict_state), | |
3351 | * thus no locking is needed during this check. | |
3352 | */ | |
3353 | ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); | |
1eb5bfa3 GW |
3354 | |
3355 | /* | |
d3c2ae1c GW |
3356 | * A buffer must not be moved into the arc_l2c_only |
3357 | * state if it's not finished being written out to the | |
a6255b7f | 3358 | * l2arc device. Otherwise, the b_l1hdr.b_pabd field |
d3c2ae1c | 3359 | * might try to be accessed, even though it was removed. |
1eb5bfa3 | 3360 | */ |
d3c2ae1c | 3361 | VERIFY(!HDR_L2_WRITING(hdr)); |
a6255b7f | 3362 | VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 3363 | ASSERT(!HDR_HAS_RABD(hdr)); |
d3c2ae1c GW |
3364 | |
3365 | arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); | |
34dc7c2f | 3366 | } |
d3c2ae1c GW |
3367 | /* |
3368 | * The header has been reallocated so we need to re-insert it into any | |
3369 | * lists it was on. | |
3370 | */ | |
3371 | (void) buf_hash_insert(nhdr, NULL); | |
34dc7c2f | 3372 | |
d3c2ae1c | 3373 | ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); |
34dc7c2f | 3374 | |
d3c2ae1c GW |
3375 | mutex_enter(&dev->l2ad_mtx); |
3376 | ||
3377 | /* | |
3378 | * We must place the realloc'ed header back into the list at | |
3379 | * the same spot. Otherwise, if it's placed earlier in the list, | |
3380 | * l2arc_write_buffers() could find it during the function's | |
3381 | * write phase, and try to write it out to the l2arc. | |
3382 | */ | |
3383 | list_insert_after(&dev->l2ad_buflist, hdr, nhdr); | |
3384 | list_remove(&dev->l2ad_buflist, hdr); | |
34dc7c2f | 3385 | |
d3c2ae1c | 3386 | mutex_exit(&dev->l2ad_mtx); |
34dc7c2f | 3387 | |
d3c2ae1c GW |
3388 | /* |
3389 | * Since we're using the pointer address as the tag when | |
3390 | * incrementing and decrementing the l2ad_alloc refcount, we | |
3391 | * must remove the old pointer (that we're about to destroy) and | |
3392 | * add the new pointer to the refcount. Otherwise we'd remove | |
3393 | * the wrong pointer address when calling arc_hdr_destroy() later. | |
3394 | */ | |
3395 | ||
424fd7c3 TS |
3396 | (void) zfs_refcount_remove_many(&dev->l2ad_alloc, |
3397 | arc_hdr_size(hdr), hdr); | |
3398 | (void) zfs_refcount_add_many(&dev->l2ad_alloc, | |
3399 | arc_hdr_size(nhdr), nhdr); | |
d3c2ae1c GW |
3400 | |
3401 | buf_discard_identity(hdr); | |
3402 | kmem_cache_free(old, hdr); | |
3403 | ||
3404 | return (nhdr); | |
3405 | } | |
3406 | ||
b5256303 TC |
3407 | /* |
3408 | * This function allows an L1 header to be reallocated as a crypt | |
3409 | * header and vice versa. If we are going to a crypt header, the | |
3410 | * new fields will be zeroed out. | |
3411 | */ | |
3412 | static arc_buf_hdr_t * | |
3413 | arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt) | |
3414 | { | |
3415 | arc_buf_hdr_t *nhdr; | |
3416 | arc_buf_t *buf; | |
3417 | kmem_cache_t *ncache, *ocache; | |
3418 | ||
b7ddeaef TC |
3419 | /* |
3420 | * This function requires that hdr is in the arc_anon state. | |
3421 | * Therefore it won't have any L2ARC data for us to worry | |
3422 | * about copying. | |
3423 | */ | |
b5256303 | 3424 | ASSERT(HDR_HAS_L1HDR(hdr)); |
b7ddeaef | 3425 | ASSERT(!HDR_HAS_L2HDR(hdr)); |
b5256303 TC |
3426 | ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt); |
3427 | ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); | |
3428 | ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); | |
b7ddeaef TC |
3429 | ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node)); |
3430 | ASSERT3P(hdr->b_hash_next, ==, NULL); | |
b5256303 TC |
3431 | |
3432 | if (need_crypt) { | |
3433 | ncache = hdr_full_crypt_cache; | |
3434 | ocache = hdr_full_cache; | |
3435 | } else { | |
3436 | ncache = hdr_full_cache; | |
3437 | ocache = hdr_full_crypt_cache; | |
3438 | } | |
3439 | ||
3440 | nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE); | |
b7ddeaef TC |
3441 | |
3442 | /* | |
3443 | * Copy all members that aren't locks or condvars to the new header. | |
3444 | * No lists are pointing to us (as we asserted above), so we don't | |
3445 | * need to worry about the list nodes. | |
3446 | */ | |
3447 | nhdr->b_dva = hdr->b_dva; | |
3448 | nhdr->b_birth = hdr->b_birth; | |
3449 | nhdr->b_type = hdr->b_type; | |
3450 | nhdr->b_flags = hdr->b_flags; | |
3451 | nhdr->b_psize = hdr->b_psize; | |
3452 | nhdr->b_lsize = hdr->b_lsize; | |
3453 | nhdr->b_spa = hdr->b_spa; | |
b5256303 TC |
3454 | nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum; |
3455 | nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt; | |
3456 | nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap; | |
3457 | nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state; | |
3458 | nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access; | |
3459 | nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits; | |
3460 | nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits; | |
3461 | nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits; | |
3462 | nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits; | |
3463 | nhdr->b_l1hdr.b_l2_hits = hdr->b_l1hdr.b_l2_hits; | |
3464 | nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb; | |
3465 | nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd; | |
b5256303 TC |
3466 | |
3467 | /* | |
c13060e4 | 3468 | * This zfs_refcount_add() exists only to ensure that the individual |
b5256303 TC |
3469 | * arc buffers always point to a header that is referenced, avoiding |
3470 | * a small race condition that could trigger ASSERTs. | |
3471 | */ | |
c13060e4 | 3472 | (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG); |
b7ddeaef | 3473 | nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf; |
b5256303 TC |
3474 | for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) { |
3475 | mutex_enter(&buf->b_evict_lock); | |
3476 | buf->b_hdr = nhdr; | |
3477 | mutex_exit(&buf->b_evict_lock); | |
3478 | } | |
3479 | ||
424fd7c3 TS |
3480 | zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt); |
3481 | (void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG); | |
3482 | ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); | |
b5256303 TC |
3483 | |
3484 | if (need_crypt) { | |
3485 | arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED); | |
3486 | } else { | |
3487 | arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED); | |
3488 | } | |
3489 | ||
b7ddeaef TC |
3490 | /* unset all members of the original hdr */ |
3491 | bzero(&hdr->b_dva, sizeof (dva_t)); | |
3492 | hdr->b_birth = 0; | |
3493 | hdr->b_type = ARC_BUFC_INVALID; | |
3494 | hdr->b_flags = 0; | |
3495 | hdr->b_psize = 0; | |
3496 | hdr->b_lsize = 0; | |
3497 | hdr->b_spa = 0; | |
3498 | hdr->b_l1hdr.b_freeze_cksum = NULL; | |
3499 | hdr->b_l1hdr.b_buf = NULL; | |
3500 | hdr->b_l1hdr.b_bufcnt = 0; | |
3501 | hdr->b_l1hdr.b_byteswap = 0; | |
3502 | hdr->b_l1hdr.b_state = NULL; | |
3503 | hdr->b_l1hdr.b_arc_access = 0; | |
3504 | hdr->b_l1hdr.b_mru_hits = 0; | |
3505 | hdr->b_l1hdr.b_mru_ghost_hits = 0; | |
3506 | hdr->b_l1hdr.b_mfu_hits = 0; | |
3507 | hdr->b_l1hdr.b_mfu_ghost_hits = 0; | |
3508 | hdr->b_l1hdr.b_l2_hits = 0; | |
3509 | hdr->b_l1hdr.b_acb = NULL; | |
3510 | hdr->b_l1hdr.b_pabd = NULL; | |
3511 | ||
3512 | if (ocache == hdr_full_crypt_cache) { | |
3513 | ASSERT(!HDR_HAS_RABD(hdr)); | |
3514 | hdr->b_crypt_hdr.b_ot = DMU_OT_NONE; | |
3515 | hdr->b_crypt_hdr.b_ebufcnt = 0; | |
3516 | hdr->b_crypt_hdr.b_dsobj = 0; | |
3517 | bzero(hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); | |
3518 | bzero(hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); | |
3519 | bzero(hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); | |
3520 | } | |
3521 | ||
b5256303 TC |
3522 | buf_discard_identity(hdr); |
3523 | kmem_cache_free(ocache, hdr); | |
3524 | ||
3525 | return (nhdr); | |
3526 | } | |
3527 | ||
3528 | /* | |
3529 | * This function is used by the send / receive code to convert a newly | |
3530 | * allocated arc_buf_t to one that is suitable for a raw encrypted write. It | |
e1cfd73f | 3531 | * is also used to allow the root objset block to be updated without altering |
b5256303 TC |
3532 | * its embedded MACs. Both block types will always be uncompressed so we do not |
3533 | * have to worry about compression type or psize. | |
3534 | */ | |
3535 | void | |
3536 | arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, | |
3537 | dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, | |
3538 | const uint8_t *mac) | |
3539 | { | |
3540 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
3541 | ||
3542 | ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); | |
3543 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
3544 | ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); | |
3545 | ||
3546 | buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); | |
3547 | if (!HDR_PROTECTED(hdr)) | |
3548 | hdr = arc_hdr_realloc_crypt(hdr, B_TRUE); | |
3549 | hdr->b_crypt_hdr.b_dsobj = dsobj; | |
3550 | hdr->b_crypt_hdr.b_ot = ot; | |
3551 | hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? | |
3552 | DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); | |
3553 | if (!arc_hdr_has_uncompressed_buf(hdr)) | |
3554 | arc_cksum_free(hdr); | |
3555 | ||
3556 | if (salt != NULL) | |
3557 | bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); | |
3558 | if (iv != NULL) | |
3559 | bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); | |
3560 | if (mac != NULL) | |
3561 | bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); | |
3562 | } | |
3563 | ||
d3c2ae1c GW |
3564 | /* |
3565 | * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. | |
3566 | * The buf is returned thawed since we expect the consumer to modify it. | |
3567 | */ | |
3568 | arc_buf_t * | |
2aa34383 | 3569 | arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size) |
d3c2ae1c | 3570 | { |
d3c2ae1c | 3571 | arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, |
10b3c7f5 | 3572 | B_FALSE, ZIO_COMPRESS_OFF, 0, type, B_FALSE); |
2aa34383 | 3573 | |
a7004725 | 3574 | arc_buf_t *buf = NULL; |
be9a5c35 | 3575 | VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, |
b5256303 | 3576 | B_FALSE, B_FALSE, &buf)); |
d3c2ae1c | 3577 | arc_buf_thaw(buf); |
2aa34383 DK |
3578 | |
3579 | return (buf); | |
3580 | } | |
3581 | ||
3582 | /* | |
3583 | * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this | |
3584 | * for bufs containing metadata. | |
3585 | */ | |
3586 | arc_buf_t * | |
3587 | arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize, | |
10b3c7f5 | 3588 | enum zio_compress compression_type, uint8_t complevel) |
2aa34383 | 3589 | { |
2aa34383 DK |
3590 | ASSERT3U(lsize, >, 0); |
3591 | ASSERT3U(lsize, >=, psize); | |
b5256303 TC |
3592 | ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); |
3593 | ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); | |
2aa34383 | 3594 | |
a7004725 | 3595 | arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, |
10b3c7f5 | 3596 | B_FALSE, compression_type, complevel, ARC_BUFC_DATA, B_FALSE); |
2aa34383 | 3597 | |
a7004725 | 3598 | arc_buf_t *buf = NULL; |
be9a5c35 | 3599 | VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, |
b5256303 | 3600 | B_TRUE, B_FALSE, B_FALSE, &buf)); |
2aa34383 DK |
3601 | arc_buf_thaw(buf); |
3602 | ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); | |
3603 | ||
a6255b7f DQ |
3604 | if (!arc_buf_is_shared(buf)) { |
3605 | /* | |
3606 | * To ensure that the hdr has the correct data in it if we call | |
b5256303 | 3607 | * arc_untransform() on this buf before it's been written to |
a6255b7f DQ |
3608 | * disk, it's easiest if we just set up sharing between the |
3609 | * buf and the hdr. | |
3610 | */ | |
b5256303 | 3611 | arc_hdr_free_abd(hdr, B_FALSE); |
a6255b7f DQ |
3612 | arc_share_buf(hdr, buf); |
3613 | } | |
3614 | ||
d3c2ae1c | 3615 | return (buf); |
34dc7c2f BB |
3616 | } |
3617 | ||
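/*
 * Allocate a raw (encrypted) buf along with a protected hdr, copying in
 * the dsobj, byteorder, salt, IV, and MAC that describe its on-disk
 * encryption parameters.
 */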
b5256303 TC |
3618 | arc_buf_t * |
3619 | arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj, boolean_t byteorder, | |
3620 | const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, | |
3621 | dmu_object_type_t ot, uint64_t psize, uint64_t lsize, | |
10b3c7f5 | 3622 | enum zio_compress compression_type, uint8_t complevel) |
b5256303 TC |
3623 | { |
3624 | arc_buf_hdr_t *hdr; | |
3625 | arc_buf_t *buf; | |
3626 | arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? | |
3627 | ARC_BUFC_METADATA : ARC_BUFC_DATA; | |
3628 | ||
3629 | ASSERT3U(lsize, >, 0); | |
3630 | ASSERT3U(lsize, >=, psize); | |
3631 | ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); | |
3632 | ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); | |
3633 | ||
3634 | hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, | |
10b3c7f5 | 3635 | compression_type, complevel, type, B_TRUE); |
b5256303 TC |
3636 | |
3637 | hdr->b_crypt_hdr.b_dsobj = dsobj; | |
3638 | hdr->b_crypt_hdr.b_ot = ot; | |
3639 | hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? | |
3640 | DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); | |
3641 | bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); | |
3642 | bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); | |
3643 | bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); | |
3644 | ||
3645 | /* | |
3646 | * This buffer will be considered encrypted even if the ot is not an | |
3647 | * encrypted type. It will become authenticated instead in | |
3648 | * arc_write_ready(). | |
3649 | */ | |
3650 | buf = NULL; | |
be9a5c35 | 3651 | VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, |
b5256303 TC |
3652 | B_FALSE, B_FALSE, &buf)); |
3653 | arc_buf_thaw(buf); | |
3654 | ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); | |
3655 | ||
3656 | return (buf); | |
3657 | } | |
3658 | ||
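/*
 * Adjust the L2ARC arcstats for a header entering (incr) or leaving the
 * L2ARC. When state_only is set, only the per-ARC-state asize stats are
 * updated.
 */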
08532162 GA |
3659 | static void |
3660 | l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, | |
3661 | boolean_t state_only) | |
3662 | { | |
3663 | l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; | |
3664 | l2arc_dev_t *dev = l2hdr->b_dev; | |
3665 | uint64_t lsize = HDR_GET_LSIZE(hdr); | |
3666 | uint64_t psize = HDR_GET_PSIZE(hdr); | |
3667 | uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); | |
3668 | arc_buf_contents_t type = hdr->b_type; | |
3669 | int64_t lsize_s; | |
3670 | int64_t psize_s; | |
3671 | int64_t asize_s; | |
3672 | ||
3673 | if (incr) { | |
3674 | lsize_s = lsize; | |
3675 | psize_s = psize; | |
3676 | asize_s = asize; | |
3677 | } else { | |
3678 | lsize_s = -lsize; | |
3679 | psize_s = -psize; | |
3680 | asize_s = -asize; | |
3681 | } | |
3682 | ||
3683 | /* If the buffer is a prefetch, count it as such. */ | |
3684 | if (HDR_PREFETCH(hdr)) { | |
3685 | ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s); | |
3686 | } else { | |
3687 | /* | |
3688 | * We use the value stored in the L2 header upon initial | |
3689 | * caching in L2ARC. This value will be updated in case | |
3690 | * an MRU/MRU_ghost buffer transitions to MFU but the L2ARC | |
3691 | * metadata (log entry) cannot currently be updated. Having | |
3692 | * the ARC state in the L2 header solves the problem of a | |
3693 | * possibly absent L1 header (apparent in buffers restored | |
3694 | * from persistent L2ARC). | |
3695 | */ | |
3696 | switch (hdr->b_l2hdr.b_arcs_state) { | |
3697 | case ARC_STATE_MRU_GHOST: | |
3698 | case ARC_STATE_MRU: | |
3699 | ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s); | |
3700 | break; | |
3701 | case ARC_STATE_MFU_GHOST: | |
3702 | case ARC_STATE_MFU: | |
3703 | ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s); | |
3704 | break; | |
3705 | default: | |
3706 | break; | |
3707 | } | |
3708 | } | |
3709 | ||
3710 | if (state_only) | |
3711 | return; | |
3712 | ||
3713 | ARCSTAT_INCR(arcstat_l2_psize, psize_s); | |
3714 | ARCSTAT_INCR(arcstat_l2_lsize, lsize_s); | |
3715 | ||
3716 | switch (type) { | |
3717 | case ARC_BUFC_DATA: | |
3718 | ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s); | |
3719 | break; | |
3720 | case ARC_BUFC_METADATA: | |
3721 | ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s); | |
3722 | break; | |
3723 | default: | |
3724 | break; | |
3725 | } | |
3726 | } | |
3727 | ||
3728 | ||
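/*
 * Tear down the L2 portion of a header: unlink it from the device's
 * buflist, roll back the L2ARC arcstats and vdev space accounting, and
 * clear ARC_FLAG_HAS_L2HDR.
 */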
d962d5da PS |
3729 | static void |
3730 | arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) | |
3731 | { | |
3732 | l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; | |
3733 | l2arc_dev_t *dev = l2hdr->b_dev; | |
7558997d SD |
3734 | uint64_t psize = HDR_GET_PSIZE(hdr); |
3735 | uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); | |
d962d5da PS |
3736 | |
3737 | ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); | |
3738 | ASSERT(HDR_HAS_L2HDR(hdr)); | |
3739 | ||
3740 | list_remove(&dev->l2ad_buflist, hdr); | |
3741 | ||
08532162 | 3742 | l2arc_hdr_arcstats_decrement(hdr); |
7558997d | 3743 | vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); |
d962d5da | 3744 | |
7558997d SD |
3745 | (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), |
3746 | hdr); | |
d3c2ae1c | 3747 | arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); |
d962d5da PS |
3748 | } |
3749 | ||
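/*
 * Destroy a header that holds no references: free any remaining bufs
 * and data buffers, tear down its L2 portion if present, discard its
 * identity, and return it to the appropriate kmem cache.
 */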
34dc7c2f BB |
3750 | static void |
3751 | arc_hdr_destroy(arc_buf_hdr_t *hdr) | |
3752 | { | |
b9541d6b CW |
3753 | if (HDR_HAS_L1HDR(hdr)) { |
3754 | ASSERT(hdr->b_l1hdr.b_buf == NULL || | |
d3c2ae1c | 3755 | hdr->b_l1hdr.b_bufcnt > 0); |
424fd7c3 | 3756 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
b9541d6b CW |
3757 | ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); |
3758 | } | |
34dc7c2f | 3759 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); |
b9541d6b CW |
3760 | ASSERT(!HDR_IN_HASH_TABLE(hdr)); |
3761 | ||
3762 | if (HDR_HAS_L2HDR(hdr)) { | |
d962d5da PS |
3763 | l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; |
3764 | boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); | |
428870ff | 3765 | |
d962d5da PS |
3766 | if (!buflist_held) |
3767 | mutex_enter(&dev->l2ad_mtx); | |
b9541d6b | 3768 | |
ca0bf58d | 3769 | /* |
d962d5da PS |
3770 | * Even though we checked this conditional above, we |
3771 | * need to check this again now that we have the | |
3772 | * l2ad_mtx. This is because we could be racing with | |
3773 | * another thread calling l2arc_evict() which might have | |
3774 | * destroyed this header's L2 portion as we were waiting | |
3775 | * to acquire the l2ad_mtx. If that happens, we don't | |
3776 | * want to re-destroy the header's L2 portion. | |
ca0bf58d | 3777 | */ |
d962d5da PS |
3778 | if (HDR_HAS_L2HDR(hdr)) |
3779 | arc_hdr_l2hdr_destroy(hdr); | |
428870ff BB |
3780 | |
3781 | if (!buflist_held) | |
d962d5da | 3782 | mutex_exit(&dev->l2ad_mtx); |
34dc7c2f BB |
3783 | } |
3784 | ||
ca6c7a94 BB |
3785 | /* |
3786 | * The header's identity can only be safely discarded once it is no |
3787 | * longer discoverable. This requires removing it from the hash table |
3788 | * and the l2arc header list. After this point the hash lock cannot |
3789 | * be used to protect the header. | |
3790 | */ | |
3791 | if (!HDR_EMPTY(hdr)) | |
3792 | buf_discard_identity(hdr); | |
3793 | ||
d3c2ae1c GW |
3794 | if (HDR_HAS_L1HDR(hdr)) { |
3795 | arc_cksum_free(hdr); | |
b9541d6b | 3796 | |
d3c2ae1c | 3797 | while (hdr->b_l1hdr.b_buf != NULL) |
2aa34383 | 3798 | arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); |
34dc7c2f | 3799 | |
ca6c7a94 | 3800 | if (hdr->b_l1hdr.b_pabd != NULL) |
b5256303 | 3801 | arc_hdr_free_abd(hdr, B_FALSE); |
b5256303 | 3802 | |
440a3eb9 | 3803 | if (HDR_HAS_RABD(hdr)) |
b5256303 | 3804 | arc_hdr_free_abd(hdr, B_TRUE); |
b9541d6b CW |
3805 | } |
3806 | ||
34dc7c2f | 3807 | ASSERT3P(hdr->b_hash_next, ==, NULL); |
b9541d6b | 3808 | if (HDR_HAS_L1HDR(hdr)) { |
ca0bf58d | 3809 | ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); |
b9541d6b | 3810 | ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); |
b5256303 TC |
3811 | |
3812 | if (!HDR_PROTECTED(hdr)) { | |
3813 | kmem_cache_free(hdr_full_cache, hdr); | |
3814 | } else { | |
3815 | kmem_cache_free(hdr_full_crypt_cache, hdr); | |
3816 | } | |
b9541d6b CW |
3817 | } else { |
3818 | kmem_cache_free(hdr_l2only_cache, hdr); | |
3819 | } | |
34dc7c2f BB |
3820 | } |
3821 | ||
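/*
 * Release the caller's reference on buf and destroy it. An anonymous
 * hdr is destroyed outright; otherwise the buf is detached from the
 * hdr under the hash lock.
 */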
3822 | void | |
d3c2ae1c | 3823 | arc_buf_destroy(arc_buf_t *buf, void *tag) |
34dc7c2f BB |
3824 | { |
3825 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
34dc7c2f | 3826 | |
b9541d6b | 3827 | if (hdr->b_l1hdr.b_state == arc_anon) { |
d3c2ae1c GW |
3828 | ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); |
3829 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); | |
3830 | VERIFY0(remove_reference(hdr, NULL, tag)); | |
3831 | arc_hdr_destroy(hdr); | |
3832 | return; | |
34dc7c2f BB |
3833 | } |
3834 | ||
ca6c7a94 | 3835 | kmutex_t *hash_lock = HDR_LOCK(hdr); |
34dc7c2f | 3836 | mutex_enter(hash_lock); |
ca6c7a94 | 3837 | |
d3c2ae1c GW |
3838 | ASSERT3P(hdr, ==, buf->b_hdr); |
3839 | ASSERT(hdr->b_l1hdr.b_bufcnt > 0); | |
428870ff | 3840 | ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); |
d3c2ae1c GW |
3841 | ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); |
3842 | ASSERT3P(buf->b_data, !=, NULL); | |
34dc7c2f BB |
3843 | |
3844 | (void) remove_reference(hdr, hash_lock, tag); | |
2aa34383 | 3845 | arc_buf_destroy_impl(buf); |
34dc7c2f | 3846 | mutex_exit(hash_lock); |
34dc7c2f BB |
3847 | } |
3848 | ||
34dc7c2f | 3849 | /* |
ca0bf58d PS |
3850 | * Evict the arc_buf_hdr that is provided as a parameter. The resultant |
3851 | * state of the header is dependent on its state prior to entering this | |
3852 | * function. The following transitions are possible: | |
34dc7c2f | 3853 | * |
ca0bf58d PS |
3854 | * - arc_mru -> arc_mru_ghost |
3855 | * - arc_mfu -> arc_mfu_ghost | |
3856 | * - arc_mru_ghost -> arc_l2c_only | |
3857 | * - arc_mru_ghost -> deleted | |
3858 | * - arc_mfu_ghost -> arc_l2c_only | |
3859 | * - arc_mfu_ghost -> deleted | |
f7de776d AM |
3860 | * |
3861 | * Return total size of evicted data buffers for eviction progress tracking. | |
3862 | * When evicting from ghost states return logical buffer size to make eviction | |
3863 | * progress at the same (or at least comparable) rate as from non-ghost states. | |
3864 | * | |
3865 | * Return *real_evicted for actual ARC size reduction to wake up threads | |
3866 | * waiting for it. For non-ghost states it includes size of evicted data | |
3867 | * buffers (the headers are not freed there). For ghost states it includes | |
3868 | * only the evicted headers size. | |
34dc7c2f | 3869 | */ |
ca0bf58d | 3870 | static int64_t |
f7de776d | 3871 | arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, uint64_t *real_evicted) |
34dc7c2f | 3872 | { |
ca0bf58d PS |
3873 | arc_state_t *evicted_state, *state; |
3874 | int64_t bytes_evicted = 0; | |
d4a72f23 TC |
3875 | int min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? |
3876 | arc_min_prescient_prefetch_ms : arc_min_prefetch_ms; | |
34dc7c2f | 3877 | |
ca0bf58d PS |
3878 | ASSERT(MUTEX_HELD(hash_lock)); |
3879 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
e8b96c60 | 3880 | |
f7de776d | 3881 | *real_evicted = 0; |
ca0bf58d PS |
3882 | state = hdr->b_l1hdr.b_state; |
3883 | if (GHOST_STATE(state)) { | |
3884 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); | |
d3c2ae1c | 3885 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); |
e8b96c60 MA |
3886 | |
3887 | /* | |
ca0bf58d | 3888 | * l2arc_write_buffers() relies on a header's L1 portion |
a6255b7f | 3889 | * (i.e. its b_pabd field) during it's write phase. |
ca0bf58d PS |
3890 | * Thus, we cannot push a header onto the arc_l2c_only |
3891 | * state (removing its L1 piece) until the header is | |
3892 | * done being written to the l2arc. | |
e8b96c60 | 3893 | */ |
ca0bf58d PS |
3894 | if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { |
3895 | ARCSTAT_BUMP(arcstat_evict_l2_skip); | |
3896 | return (bytes_evicted); | |
e8b96c60 MA |
3897 | } |
3898 | ||
ca0bf58d | 3899 | ARCSTAT_BUMP(arcstat_deleted); |
d3c2ae1c | 3900 | bytes_evicted += HDR_GET_LSIZE(hdr); |
428870ff | 3901 | |
ca0bf58d | 3902 | DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); |
428870ff | 3903 | |
ca0bf58d | 3904 | if (HDR_HAS_L2HDR(hdr)) { |
a6255b7f | 3905 | ASSERT(hdr->b_l1hdr.b_pabd == NULL); |
b5256303 | 3906 | ASSERT(!HDR_HAS_RABD(hdr)); |
ca0bf58d PS |
3907 | /* |
3908 | * This buffer is cached on the 2nd Level ARC; | |
3909 | * don't destroy the header. | |
3910 | */ | |
3911 | arc_change_state(arc_l2c_only, hdr, hash_lock); | |
3912 | /* | |
3913 | * dropping from L1+L2 cached to L2-only, | |
3914 | * realloc to remove the L1 header. | |
3915 | */ | |
3916 | hdr = arc_hdr_realloc(hdr, hdr_full_cache, | |
3917 | hdr_l2only_cache); | |
f7de776d | 3918 | *real_evicted += HDR_FULL_SIZE - HDR_L2ONLY_SIZE; |
34dc7c2f | 3919 | } else { |
ca0bf58d PS |
3920 | arc_change_state(arc_anon, hdr, hash_lock); |
3921 | arc_hdr_destroy(hdr); | |
f7de776d | 3922 | *real_evicted += HDR_FULL_SIZE; |
34dc7c2f | 3923 | } |
ca0bf58d | 3924 | return (bytes_evicted); |
34dc7c2f BB |
3925 | } |
3926 | ||
ca0bf58d PS |
3927 | ASSERT(state == arc_mru || state == arc_mfu); |
3928 | evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost; | |
34dc7c2f | 3929 | |
ca0bf58d PS |
3930 | /* prefetch buffers have a minimum lifespan */ |
3931 | if (HDR_IO_IN_PROGRESS(hdr) || | |
3932 | ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && | |
2b84817f TC |
3933 | ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < |
3934 | MSEC_TO_TICK(min_lifetime))) { | |
ca0bf58d PS |
3935 | ARCSTAT_BUMP(arcstat_evict_skip); |
3936 | return (bytes_evicted); | |
da8ccd0e PS |
3937 | } |
3938 | ||
424fd7c3 | 3939 | ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); |
ca0bf58d PS |
3940 | while (hdr->b_l1hdr.b_buf) { |
3941 | arc_buf_t *buf = hdr->b_l1hdr.b_buf; | |
3942 | if (!mutex_tryenter(&buf->b_evict_lock)) { | |
3943 | ARCSTAT_BUMP(arcstat_mutex_miss); | |
3944 | break; | |
3945 | } | |
f7de776d | 3946 | if (buf->b_data != NULL) { |
d3c2ae1c | 3947 | bytes_evicted += HDR_GET_LSIZE(hdr); |
f7de776d AM |
3948 | *real_evicted += HDR_GET_LSIZE(hdr); |
3949 | } | |
d3c2ae1c | 3950 | mutex_exit(&buf->b_evict_lock); |
2aa34383 | 3951 | arc_buf_destroy_impl(buf); |
ca0bf58d | 3952 | } |
34dc7c2f | 3953 | |
ca0bf58d | 3954 | if (HDR_HAS_L2HDR(hdr)) { |
d3c2ae1c | 3955 | ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); |
ca0bf58d | 3956 | } else { |
d3c2ae1c GW |
3957 | if (l2arc_write_eligible(hdr->b_spa, hdr)) { |
3958 | ARCSTAT_INCR(arcstat_evict_l2_eligible, | |
3959 | HDR_GET_LSIZE(hdr)); | |
08532162 GA |
3960 | |
3961 | switch (state->arcs_state) { | |
3962 | case ARC_STATE_MRU: | |
3963 | ARCSTAT_INCR( | |
3964 | arcstat_evict_l2_eligible_mru, | |
3965 | HDR_GET_LSIZE(hdr)); | |
3966 | break; | |
3967 | case ARC_STATE_MFU: | |
3968 | ARCSTAT_INCR( | |
3969 | arcstat_evict_l2_eligible_mfu, | |
3970 | HDR_GET_LSIZE(hdr)); | |
3971 | break; | |
3972 | default: | |
3973 | break; | |
3974 | } | |
d3c2ae1c GW |
3975 | } else { |
3976 | ARCSTAT_INCR(arcstat_evict_l2_ineligible, | |
3977 | HDR_GET_LSIZE(hdr)); | |
3978 | } | |
ca0bf58d | 3979 | } |
34dc7c2f | 3980 | |
d3c2ae1c GW |
3981 | if (hdr->b_l1hdr.b_bufcnt == 0) { |
3982 | arc_cksum_free(hdr); | |
3983 | ||
3984 | bytes_evicted += arc_hdr_size(hdr); | |
f7de776d | 3985 | *real_evicted += arc_hdr_size(hdr); |
d3c2ae1c GW |
3986 | |
3987 | /* | |
3988 | * If this hdr is being evicted and has a compressed | |
3989 | * buffer then we discard it here before we change states. | |
3990 | * This ensures that the accounting is updated correctly | |
a6255b7f | 3991 | * in arc_free_data_impl(). |
d3c2ae1c | 3992 | */ |
b5256303 TC |
3993 | if (hdr->b_l1hdr.b_pabd != NULL) |
3994 | arc_hdr_free_abd(hdr, B_FALSE); | |
3995 | ||
3996 | if (HDR_HAS_RABD(hdr)) | |
3997 | arc_hdr_free_abd(hdr, B_TRUE); | |
d3c2ae1c | 3998 | |
ca0bf58d PS |
3999 | arc_change_state(evicted_state, hdr, hash_lock); |
4000 | ASSERT(HDR_IN_HASH_TABLE(hdr)); | |
d3c2ae1c | 4001 | arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); |
ca0bf58d PS |
4002 | DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); |
4003 | } | |
34dc7c2f | 4004 | |
ca0bf58d | 4005 | return (bytes_evicted); |
34dc7c2f BB |
4006 | } |
4007 | ||
3442c2a0 MA |
4008 | static void |
4009 | arc_set_need_free(void) | |
4010 | { | |
4011 | ASSERT(MUTEX_HELD(&arc_evict_lock)); | |
4012 | int64_t remaining = arc_free_memory() - arc_sys_free / 2; | |
4013 | arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters); | |
4014 | if (aw == NULL) { | |
4015 | arc_need_free = MAX(-remaining, 0); | |
4016 | } else { | |
4017 | arc_need_free = | |
4018 | MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count)); | |
4019 | } | |
4020 | } | |
4021 | ||
ca0bf58d PS |
4022 | static uint64_t |
4023 | arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, | |
8172df64 | 4024 | uint64_t spa, uint64_t bytes) |
34dc7c2f | 4025 | { |
ca0bf58d | 4026 | multilist_sublist_t *mls; |
f7de776d | 4027 | uint64_t bytes_evicted = 0, real_evicted = 0; |
ca0bf58d | 4028 | arc_buf_hdr_t *hdr; |
34dc7c2f | 4029 | kmutex_t *hash_lock; |
8172df64 | 4030 | int evict_count = zfs_arc_evict_batch_limit; |
34dc7c2f | 4031 | |
ca0bf58d | 4032 | ASSERT3P(marker, !=, NULL); |
ca0bf58d PS |
4033 | |
4034 | mls = multilist_sublist_lock(ml, idx); | |
572e2857 | 4035 | |
8172df64 | 4036 | for (hdr = multilist_sublist_prev(mls, marker); likely(hdr != NULL); |
ca0bf58d | 4037 | hdr = multilist_sublist_prev(mls, marker)) { |
8172df64 | 4038 | if ((evict_count <= 0) || (bytes_evicted >= bytes)) |
ca0bf58d PS |
4039 | break; |
4040 | ||
4041 | /* | |
4042 | * To keep our iteration location, move the marker | |
4043 | * forward. Since we're not holding hdr's hash lock, we | |
4044 | * must be very careful and not remove 'hdr' from the | |
4045 | * sublist. Otherwise, other consumers might mistake the | |
4046 | * 'hdr' as not being on a sublist when they call the | |
4047 | * multilist_link_active() function (they all rely on | |
4048 | * the hash lock protecting concurrent insertions and | |
4049 | * removals). multilist_sublist_move_forward() was | |
4050 | * specifically implemented to ensure this is the case | |
4051 | * (only 'marker' will be removed and re-inserted). | |
4052 | */ | |
4053 | multilist_sublist_move_forward(mls, marker); | |
4054 | ||
4055 | /* | |
4056 | * The only case where the b_spa field should ever be | |
4057 | * zero, is the marker headers inserted by | |
4058 | * arc_evict_state(). It's possible for multiple threads | |
4059 | * to be calling arc_evict_state() concurrently (e.g. | |
4060 | * dsl_pool_close() and zio_inject_fault()), so we must | |
4061 | * skip any markers we see from these other threads. | |
4062 | */ | |
2a432414 | 4063 | if (hdr->b_spa == 0) |
572e2857 BB |
4064 | continue; |
4065 | ||
ca0bf58d PS |
4066 | /* we're only interested in evicting buffers of a certain spa */ |
4067 | if (spa != 0 && hdr->b_spa != spa) { | |
4068 | ARCSTAT_BUMP(arcstat_evict_skip); | |
428870ff | 4069 | continue; |
ca0bf58d PS |
4070 | } |
4071 | ||
4072 | hash_lock = HDR_LOCK(hdr); | |
e8b96c60 MA |
4073 | |
4074 | /* | |
ca0bf58d PS |
4075 | * We aren't calling this function from any code path |
4076 | * that would already be holding a hash lock, so we're | |
4077 | * asserting on this assumption to be defensive in case | |
4078 | * this ever changes. Without this check, it would be | |
4079 | * possible to incorrectly increment arcstat_mutex_miss | |
4080 | * below (e.g. if the code changed such that we called | |
4081 | * this function with a hash lock held). | |
e8b96c60 | 4082 | */ |
ca0bf58d PS |
4083 | ASSERT(!MUTEX_HELD(hash_lock)); |
4084 | ||
34dc7c2f | 4085 | if (mutex_tryenter(hash_lock)) { |
f7de776d AM |
4086 | uint64_t revicted; |
4087 | uint64_t evicted = arc_evict_hdr(hdr, hash_lock, | |
4088 | &revicted); | |
ca0bf58d | 4089 | mutex_exit(hash_lock); |
34dc7c2f | 4090 | |
ca0bf58d | 4091 | bytes_evicted += evicted; |
f7de776d | 4092 | real_evicted += revicted; |
34dc7c2f | 4093 | |
572e2857 | 4094 | /* |
ca0bf58d PS |
4095 | * If evicted is zero, arc_evict_hdr() must have |
4096 | * decided to skip this header, don't increment | |
4097 | * evict_count in this case. | |
572e2857 | 4098 | */ |
ca0bf58d | 4099 | if (evicted != 0) |
8172df64 | 4100 | evict_count--; |
ca0bf58d | 4101 | |
e8b96c60 | 4102 | } else { |
ca0bf58d | 4103 | ARCSTAT_BUMP(arcstat_mutex_miss); |
e8b96c60 | 4104 | } |
34dc7c2f | 4105 | } |
34dc7c2f | 4106 | |
ca0bf58d | 4107 | multilist_sublist_unlock(mls); |
34dc7c2f | 4108 | |
3442c2a0 MA |
4109 | /* |
4110 | * Increment the count of evicted bytes, and wake up any threads that | |
4111 | * are waiting for the count to reach this value. Since the list is | |
4112 | * ordered by ascending aew_count, we pop off the beginning of the | |
4113 | * list until we reach the end, or a waiter that's past the current | |
4114 | * "count". Doing this outside the loop reduces the number of times | |
4115 | * we need to acquire the global arc_evict_lock. | |
4116 | * | |
4117 | * Only wake when there's sufficient free memory in the system | |
4118 | * (specifically, arc_sys_free/2, which by default is a bit more than | |
4119 | * 1/64th of RAM). See the comments in arc_wait_for_eviction(). | |
4120 | */ | |
4121 | mutex_enter(&arc_evict_lock); | |
f7de776d | 4122 | arc_evict_count += real_evicted; |
3442c2a0 | 4123 | |
dc303dcf | 4124 | if (arc_free_memory() > arc_sys_free / 2) { |
3442c2a0 MA |
4125 | arc_evict_waiter_t *aw; |
4126 | while ((aw = list_head(&arc_evict_waiters)) != NULL && | |
4127 | aw->aew_count <= arc_evict_count) { | |
4128 | list_remove(&arc_evict_waiters, aw); | |
4129 | cv_broadcast(&aw->aew_cv); | |
4130 | } | |
4131 | } | |
4132 | arc_set_need_free(); | |
4133 | mutex_exit(&arc_evict_lock); | |
4134 | ||
67c0f0de MA |
4135 | /* |
4136 | * If the ARC size is reduced from arc_c_max to arc_c_min (especially | |
4137 | * if the average cached block is small), eviction can be on-CPU for | |
4138 | * many seconds. To ensure that other threads that may be bound to | |
4139 | * this CPU are able to make progress, make a voluntary preemption | |
4140 | * call here. | |
4141 | */ | |
4142 | cond_resched(); | |
4143 | ||
ca0bf58d | 4144 | return (bytes_evicted); |
34dc7c2f BB |
4145 | } |
4146 | ||
ca0bf58d PS |
4147 | /* |
4148 | * Evict buffers from the given arc state, until we've removed the | |
4149 | * specified number of bytes. Move the removed buffers to the | |
4150 | * appropriate evict state. | |
4151 | * | |
4152 | * This function makes a "best effort". It skips over any buffers | |
4153 | * it can't get a hash_lock on, and so, may not catch all candidates. | |
4154 | * It may also return without evicting as much space as requested. | |
4155 | * | |
4156 | * If bytes is specified using the special value ARC_EVICT_ALL, this | |
4157 | * will evict all available (i.e. unlocked and evictable) buffers from | |
4158 | * the given arc state; which is used by arc_flush(). | |
4159 | */ | |
4160 | static uint64_t | |
8172df64 | 4161 | arc_evict_state(arc_state_t *state, uint64_t spa, uint64_t bytes, |
ca0bf58d | 4162 | arc_buf_contents_t type) |
34dc7c2f | 4163 | { |
ca0bf58d | 4164 | uint64_t total_evicted = 0; |
ffdf019c | 4165 | multilist_t *ml = &state->arcs_list[type]; |
ca0bf58d PS |
4166 | int num_sublists; |
4167 | arc_buf_hdr_t **markers; | |
ca0bf58d | 4168 | |
ca0bf58d | 4169 | num_sublists = multilist_get_num_sublists(ml); |
d164b209 BB |
4170 | |
4171 | /* | |
ca0bf58d PS |
4172 | * If we've tried to evict from each sublist, made some |
4173 | * progress, but still have not hit the target number of bytes | |
4174 | * to evict, we want to keep trying. The markers allow us to | |
4175 | * pick up where we left off for each individual sublist, rather | |
4176 | * than starting from the tail each time. | |
d164b209 | 4177 | */ |
ca0bf58d | 4178 | markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP); |
1c27024e | 4179 | for (int i = 0; i < num_sublists; i++) { |
ca0bf58d | 4180 | multilist_sublist_t *mls; |
34dc7c2f | 4181 | |
ca0bf58d PS |
4182 | markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); |
4183 | ||
4184 | /* | |
4185 | * A b_spa of 0 is used to indicate that this header is | |
5dd92909 | 4186 | * a marker. This fact is used in arc_evict_type() and |
ca0bf58d PS |
4187 | * arc_evict_state_impl(). |
4188 | */ | |
4189 | markers[i]->b_spa = 0; | |
34dc7c2f | 4190 | |
ca0bf58d PS |
4191 | mls = multilist_sublist_lock(ml, i); |
4192 | multilist_sublist_insert_tail(mls, markers[i]); | |
4193 | multilist_sublist_unlock(mls); | |
34dc7c2f BB |
4194 | } |
4195 | ||
d164b209 | 4196 | /* |
ca0bf58d PS |
4197 | * While we haven't hit our target number of bytes to evict, or |
4198 | * we're evicting all available buffers. | |
d164b209 | 4199 | */ |
8172df64 | 4200 | while (total_evicted < bytes) { |
25458cbe TC |
4201 | int sublist_idx = multilist_get_random_index(ml); |
4202 | uint64_t scan_evicted = 0; | |
4203 | ||
4204 | /* | |
4205 | * Try to reduce pinned dnodes with a floor of arc_dnode_limit. | |
4206 | * Request that 10% of the LRUs be scanned by the superblock | |
4207 | * shrinker. | |
4208 | */ | |
c4c162c1 AM |
4209 | if (type == ARC_BUFC_DATA && aggsum_compare( |
4210 | &arc_sums.arcstat_dnode_size, arc_dnode_size_limit) > 0) { | |
4211 | arc_prune_async((aggsum_upper_bound( | |
4212 | &arc_sums.arcstat_dnode_size) - | |
03fdcb9a | 4213 | arc_dnode_size_limit) / sizeof (dnode_t) / |
37fb3e43 PD |
4214 | zfs_arc_dnode_reduce_percent); |
4215 | } | |
25458cbe | 4216 | |
ca0bf58d PS |
4217 | /* |
4218 | * Start eviction using a randomly selected sublist, | |
4219 | * this is to try and evenly balance eviction across all | |
4220 | * sublists. Always starting at the same sublist | |
4221 | * (e.g. index 0) would cause evictions to favor certain | |
4222 | * sublists over others. | |
4223 | */ | |
1c27024e | 4224 | for (int i = 0; i < num_sublists; i++) { |
ca0bf58d PS |
4225 | uint64_t bytes_remaining; |
4226 | uint64_t bytes_evicted; | |
d164b209 | 4227 | |
8172df64 | 4228 | if (total_evicted < bytes) |
ca0bf58d PS |
4229 | bytes_remaining = bytes - total_evicted; |
4230 | else | |
4231 | break; | |
34dc7c2f | 4232 | |
ca0bf58d PS |
4233 | bytes_evicted = arc_evict_state_impl(ml, sublist_idx, |
4234 | markers[sublist_idx], spa, bytes_remaining); | |
4235 | ||
4236 | scan_evicted += bytes_evicted; | |
4237 | total_evicted += bytes_evicted; | |
4238 | ||
4239 | /* we've reached the end, wrap to the beginning */ | |
4240 | if (++sublist_idx >= num_sublists) | |
4241 | sublist_idx = 0; | |
4242 | } | |
4243 | ||
4244 | /* | |
4245 | * If we didn't evict anything during this scan, we have | |
4246 | * no reason to believe we'll evict more during another | |
4247 | * scan, so break the loop. | |
4248 | */ | |
4249 | if (scan_evicted == 0) { | |
4250 | /* This isn't possible, let's make that obvious */ | |
4251 | ASSERT3S(bytes, !=, 0); | |
34dc7c2f | 4252 | |
ca0bf58d PS |
4253 | /* |
4254 | * When bytes is ARC_EVICT_ALL, the only way to | |
4255 | * break the loop is when scan_evicted is zero. | |
4256 | * In that case, we actually have evicted enough, | |
4257 | * so we don't want to increment the kstat. | |
4258 | */ | |
4259 | if (bytes != ARC_EVICT_ALL) { | |
4260 | ASSERT3S(total_evicted, <, bytes); | |
4261 | ARCSTAT_BUMP(arcstat_evict_not_enough); | |
4262 | } | |
d164b209 | 4263 | |
ca0bf58d PS |
4264 | break; |
4265 | } | |
d164b209 | 4266 | } |
34dc7c2f | 4267 | |
1c27024e | 4268 | for (int i = 0; i < num_sublists; i++) { |
ca0bf58d PS |
4269 | multilist_sublist_t *mls = multilist_sublist_lock(ml, i); |
4270 | multilist_sublist_remove(mls, markers[i]); | |
4271 | multilist_sublist_unlock(mls); | |
34dc7c2f | 4272 | |
ca0bf58d | 4273 | kmem_cache_free(hdr_full_cache, markers[i]); |
34dc7c2f | 4274 | } |
ca0bf58d PS |
4275 | kmem_free(markers, sizeof (*markers) * num_sublists); |
4276 | ||
4277 | return (total_evicted); | |
4278 | } | |
4279 | ||
4280 | /* | |
4281 | * Flush all "evictable" data of the given type from the arc state | |
4282 | * specified. This will not evict any "active" buffers (i.e. referenced). | |
4283 | * | |
d3c2ae1c | 4284 | * When 'retry' is set to B_FALSE, the function will make a single pass |
ca0bf58d PS |
4285 | * over the state and evict any buffers that it can. Since it doesn't |
4286 | * continually retry the eviction, it might end up leaving some buffers | |
4287 | * in the ARC due to lock misses. | |
4288 | * | |
d3c2ae1c | 4289 | * When 'retry' is set to B_TRUE, the function will continually retry the |
ca0bf58d PS |
4290 | * eviction until *all* evictable buffers have been removed from the |
4291 | * state. As a result, if concurrent insertions into the state are | |
4292 | * allowed (e.g. if the ARC isn't shutting down), this function might | |
4293 | * wind up in an infinite loop, continually trying to evict buffers. | |
4294 | */ | |
4295 | static uint64_t | |
4296 | arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, | |
4297 | boolean_t retry) | |
4298 | { | |
4299 | uint64_t evicted = 0; | |
4300 | ||
424fd7c3 | 4301 | while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { |
ca0bf58d PS |
4302 | evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type); |
4303 | ||
4304 | if (!retry) | |
4305 | break; | |
4306 | } | |
4307 | ||
4308 | return (evicted); | |
34dc7c2f BB |
4309 | } |
4310 | ||
ca0bf58d PS |
4311 | /* |
4312 | * Evict the specified number of bytes from the state specified, | |
4313 | * restricting eviction to the spa and type given. This function | |
4314 | * prevents us from trying to evict more from a state's list than | |
4315 | * is "evictable", and to skip evicting altogether when passed a | |
4316 | * negative value for "bytes". In contrast, arc_evict_state() will | |
4317 | * evict everything it can, when passed a negative value for "bytes". | |
4318 | */ | |
4319 | static uint64_t | |
5dd92909 | 4320 | arc_evict_impl(arc_state_t *state, uint64_t spa, int64_t bytes, |
ca0bf58d PS |
4321 | arc_buf_contents_t type) |
4322 | { | |
8172df64 | 4323 | uint64_t delta; |
ca0bf58d | 4324 | |
424fd7c3 TS |
4325 | if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { |
4326 | delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), | |
4327 | bytes); | |
ca0bf58d PS |
4328 | return (arc_evict_state(state, spa, delta, type)); |
4329 | } | |
4330 | ||
4331 | return (0); | |
4332 | } | |
4333 | ||
4334 | /* | |
4335 | * The goal of this function is to evict enough meta data buffers from the | |
4336 | * ARC in order to enforce the arc_meta_limit. Achieving this is slightly | |
4337 | * more complicated than it appears because it is common for data buffers | |
4338 | * to have holds on meta data buffers. In addition, dnode meta data buffers | |
4339 | * will be held by the dnodes in the block preventing them from being freed. | |
4340 | * This means we can't simply traverse the ARC and expect to always find | |
4341 | * enough unheld meta data buffer to release. | |
4342 | * | |
4343 | * Therefore, this function has been updated to make alternating passes | |
4344 | * over the ARC releasing data buffers and then newly unheld meta data | |
37fb3e43 | 4345 | * buffers. This ensures forward progress is maintained and meta_used |
ca0bf58d PS |
4346 | * will decrease. Normally this is sufficient, but if required the ARC |
4347 | * will call the registered prune callbacks causing dentry and inodes to | |
4348 | * be dropped from the VFS cache. This will make dnode meta data buffers | |
4349 | * available for reclaim. | |
4350 | */ | |
4351 | static uint64_t | |
5dd92909 | 4352 | arc_evict_meta_balanced(uint64_t meta_used) |
ca0bf58d | 4353 | { |
25e2ab16 TC |
4354 | int64_t delta, prune = 0, adjustmnt; |
4355 | uint64_t total_evicted = 0; | |
ca0bf58d | 4356 | arc_buf_contents_t type = ARC_BUFC_DATA; |
ca67b33a | 4357 | int restarts = MAX(zfs_arc_meta_adjust_restarts, 0); |
ca0bf58d PS |
4358 | |
4359 | restart: | |
4360 | /* | |
4361 | * This slightly differs than the way we evict from the mru in | |
5dd92909 | 4362 | * arc_evict because we don't have a "target" value (i.e. no |
ca0bf58d PS |
4363 | * "meta" arc_p). As a result, I think we can completely |
4364 | * cannibalize the metadata in the MRU before we evict the | |
4365 | * metadata from the MFU. I think we probably need to implement a | |
4366 | * "metadata arc_p" value to do this properly. | |
4367 | */ | |
37fb3e43 | 4368 | adjustmnt = meta_used - arc_meta_limit; |
ca0bf58d | 4369 | |
424fd7c3 TS |
4370 | if (adjustmnt > 0 && |
4371 | zfs_refcount_count(&arc_mru->arcs_esize[type]) > 0) { | |
4372 | delta = MIN(zfs_refcount_count(&arc_mru->arcs_esize[type]), | |
d3c2ae1c | 4373 | adjustmnt); |
5dd92909 | 4374 | total_evicted += arc_evict_impl(arc_mru, 0, delta, type); |
ca0bf58d PS |
4375 | adjustmnt -= delta; |
4376 | } | |
4377 | ||
4378 | /* | |
4379 | * We can't afford to recalculate adjustmnt here. If we do, | |
4380 | * new metadata buffers can sneak into the MRU or ANON lists, | |
4381 | * thus penalize the MFU metadata. Although the fudge factor is | |
4382 | * small, it has been empirically shown to be significant for | |
4383 | * certain workloads (e.g. creating many empty directories). As | |
4384 | * such, we use the original calculation for adjustmnt, and | |
4385 | * simply decrement the amount of data evicted from the MRU. | |
4386 | */ | |
4387 | ||
424fd7c3 TS |
4388 | if (adjustmnt > 0 && |
4389 | zfs_refcount_count(&arc_mfu->arcs_esize[type]) > 0) { | |
4390 | delta = MIN(zfs_refcount_count(&arc_mfu->arcs_esize[type]), | |
d3c2ae1c | 4391 | adjustmnt); |
5dd92909 | 4392 | total_evicted += arc_evict_impl(arc_mfu, 0, delta, type); |
ca0bf58d PS |
4393 | } |
4394 | ||
37fb3e43 | 4395 | adjustmnt = meta_used - arc_meta_limit; |
ca0bf58d | 4396 | |
d3c2ae1c | 4397 | if (adjustmnt > 0 && |
424fd7c3 | 4398 | zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) { |
ca0bf58d | 4399 | delta = MIN(adjustmnt, |
424fd7c3 | 4400 | zfs_refcount_count(&arc_mru_ghost->arcs_esize[type])); |
5dd92909 | 4401 | total_evicted += arc_evict_impl(arc_mru_ghost, 0, delta, type); |
ca0bf58d PS |
4402 | adjustmnt -= delta; |
4403 | } | |
4404 | ||
d3c2ae1c | 4405 | if (adjustmnt > 0 && |
424fd7c3 | 4406 | zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) { |
ca0bf58d | 4407 | delta = MIN(adjustmnt, |
424fd7c3 | 4408 | zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type])); |
5dd92909 | 4409 | total_evicted += arc_evict_impl(arc_mfu_ghost, 0, delta, type); |
ca0bf58d PS |
4410 | } |
4411 | ||
4412 | /* | |
4413 | * If after attempting to make the requested adjustment to the ARC | |
4414 | * the meta limit is still being exceeded then request that the | |
4415 | * higher layers drop some cached objects which have holds on ARC | |
4416 | * meta buffers. Requests to the upper layers will be made with | |
4417 | * increasingly large scan sizes until the ARC is below the limit. | |
4418 | */ | |
37fb3e43 | 4419 | if (meta_used > arc_meta_limit) { |
ca0bf58d PS |
4420 | if (type == ARC_BUFC_DATA) { |
4421 | type = ARC_BUFC_METADATA; | |
4422 | } else { | |
4423 | type = ARC_BUFC_DATA; | |
4424 | ||
4425 | if (zfs_arc_meta_prune) { | |
4426 | prune += zfs_arc_meta_prune; | |
f6046738 | 4427 | arc_prune_async(prune); |
ca0bf58d PS |
4428 | } |
4429 | } | |
4430 | ||
4431 | if (restarts > 0) { | |
4432 | restarts--; | |
4433 | goto restart; | |
4434 | } | |
4435 | } | |
4436 | return (total_evicted); | |
4437 | } | |
4438 | ||
f6046738 | 4439 | /* |
c4c162c1 | 4440 | * Evict metadata buffers from the cache, such that arcstat_meta_used is |
f6046738 BB |
4441 | * capped by the arc_meta_limit tunable. |
4442 | */ | |
4443 | static uint64_t | |
5dd92909 | 4444 | arc_evict_meta_only(uint64_t meta_used) |
f6046738 BB |
4445 | { |
4446 | uint64_t total_evicted = 0; | |
4447 | int64_t target; | |
4448 | ||
4449 | /* | |
4450 | * If we're over the meta limit, we want to evict enough | |
4451 | * metadata to get back under the meta limit. We don't want to | |
4452 | * evict so much that we drop the MRU below arc_p, though. If | |
4453 | * we're over the meta limit more than we're over arc_p, we | |
4454 | * evict some from the MRU here, and some from the MFU below. | |
4455 | */ | |
37fb3e43 | 4456 | target = MIN((int64_t)(meta_used - arc_meta_limit), |
424fd7c3 TS |
4457 | (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + |
4458 | zfs_refcount_count(&arc_mru->arcs_size) - arc_p)); | |
f6046738 | 4459 | |
5dd92909 | 4460 | total_evicted += arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); |
f6046738 BB |
4461 | |
4462 | /* | |
4463 | * Similar to the above, we want to evict enough bytes to get us | |
4464 | * below the meta limit, but not so much as to drop us below the | |
2aa34383 | 4465 | * space allotted to the MFU (which is defined as arc_c - arc_p). |
f6046738 | 4466 | */ |
37fb3e43 | 4467 | target = MIN((int64_t)(meta_used - arc_meta_limit), |
424fd7c3 | 4468 | (int64_t)(zfs_refcount_count(&arc_mfu->arcs_size) - |
37fb3e43 | 4469 | (arc_c - arc_p))); |
f6046738 | 4470 | |
5dd92909 | 4471 | total_evicted += arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); |
f6046738 BB |
4472 | |
4473 | return (total_evicted); | |
4474 | } | |
4475 | ||
4476 | static uint64_t | |
5dd92909 | 4477 | arc_evict_meta(uint64_t meta_used) |
f6046738 BB |
4478 | { |
4479 | if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY) | |
5dd92909 | 4480 | return (arc_evict_meta_only(meta_used)); |
f6046738 | 4481 | else |
5dd92909 | 4482 | return (arc_evict_meta_balanced(meta_used)); |
f6046738 BB |
4483 | } |
4484 | ||
ca0bf58d PS |
4485 | /* |
4486 | * Return the type of the oldest buffer in the given arc state | |
4487 | * | |
4488 | * This function will select a random sublist of type ARC_BUFC_DATA and | |
4489 | * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist | |
4490 | * is compared, and the type which contains the "older" buffer will be | |
4491 | * returned. | |
4492 | */ | |
4493 | static arc_buf_contents_t | |
5dd92909 | 4494 | arc_evict_type(arc_state_t *state) |
ca0bf58d | 4495 | { |
ffdf019c AM |
4496 | multilist_t *data_ml = &state->arcs_list[ARC_BUFC_DATA]; |
4497 | multilist_t *meta_ml = &state->arcs_list[ARC_BUFC_METADATA]; | |
ca0bf58d PS |
4498 | int data_idx = multilist_get_random_index(data_ml); |
4499 | int meta_idx = multilist_get_random_index(meta_ml); | |
4500 | multilist_sublist_t *data_mls; | |
4501 | multilist_sublist_t *meta_mls; | |
4502 | arc_buf_contents_t type; | |
4503 | arc_buf_hdr_t *data_hdr; | |
4504 | arc_buf_hdr_t *meta_hdr; | |
4505 | ||
4506 | /* | |
4507 | * We keep the sublist lock until we're finished, to prevent | |
4508 | * the headers from being destroyed via arc_evict_state(). | |
4509 | */ | |
4510 | data_mls = multilist_sublist_lock(data_ml, data_idx); | |
4511 | meta_mls = multilist_sublist_lock(meta_ml, meta_idx); | |
4512 | ||
4513 | /* | |
4514 | * These two loops are to ensure we skip any markers that | |
4515 | * might be at the tail of the lists due to arc_evict_state(). | |
4516 | */ | |
4517 | ||
4518 | for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL; | |
4519 | data_hdr = multilist_sublist_prev(data_mls, data_hdr)) { | |
4520 | if (data_hdr->b_spa != 0) | |
4521 | break; | |
4522 | } | |
4523 | ||
4524 | for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL; | |
4525 | meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) { | |
4526 | if (meta_hdr->b_spa != 0) | |
4527 | break; | |
4528 | } | |
4529 | ||
4530 | if (data_hdr == NULL && meta_hdr == NULL) { | |
4531 | type = ARC_BUFC_DATA; | |
4532 | } else if (data_hdr == NULL) { | |
4533 | ASSERT3P(meta_hdr, !=, NULL); | |
4534 | type = ARC_BUFC_METADATA; | |
4535 | } else if (meta_hdr == NULL) { | |
4536 | ASSERT3P(data_hdr, !=, NULL); | |
4537 | type = ARC_BUFC_DATA; | |
4538 | } else { | |
4539 | ASSERT3P(data_hdr, !=, NULL); | |
4540 | ASSERT3P(meta_hdr, !=, NULL); | |
4541 | ||
4542 | /* The headers can't be on the sublist without an L1 header */ | |
4543 | ASSERT(HDR_HAS_L1HDR(data_hdr)); | |
4544 | ASSERT(HDR_HAS_L1HDR(meta_hdr)); | |
4545 | ||
4546 | if (data_hdr->b_l1hdr.b_arc_access < | |
4547 | meta_hdr->b_l1hdr.b_arc_access) { | |
4548 | type = ARC_BUFC_DATA; | |
4549 | } else { | |
4550 | type = ARC_BUFC_METADATA; | |
4551 | } | |
4552 | } | |
4553 | ||
4554 | multilist_sublist_unlock(meta_mls); | |
4555 | multilist_sublist_unlock(data_mls); | |
4556 | ||
4557 | return (type); | |
4558 | } | |
4559 | ||
4560 | /* | |
c4c162c1 | 4561 | * Evict buffers from the cache, such that arcstat_size is capped by arc_c. |
ca0bf58d PS |
4562 | */ |
4563 | static uint64_t | |
5dd92909 | 4564 | arc_evict(void) |
ca0bf58d PS |
4565 | { |
4566 | uint64_t total_evicted = 0; | |
4567 | uint64_t bytes; | |
4568 | int64_t target; | |
c4c162c1 AM |
4569 | uint64_t asize = aggsum_value(&arc_sums.arcstat_size); |
4570 | uint64_t ameta = aggsum_value(&arc_sums.arcstat_meta_used); | |
ca0bf58d PS |
4571 | |
4572 | /* | |
4573 | * If we're over arc_meta_limit, we want to correct that before | |
4574 | * potentially evicting data buffers below. | |
4575 | */ | |
5dd92909 | 4576 | total_evicted += arc_evict_meta(ameta); |
ca0bf58d PS |
4577 | |
4578 | /* | |
4579 | * Adjust MRU size | |
4580 | * | |
4581 | * If we're over the target cache size, we want to evict enough | |
4582 | * from the list to get back to our target size. We don't want | |
4583 | * to evict too much from the MRU, such that it drops below | |
4584 | * arc_p. So, if we're over our target cache size more than | |
4585 | * the MRU is over arc_p, we'll evict enough to get back to | |
4586 | * arc_p here, and then evict more from the MFU below. | |
4587 | */ | |
37fb3e43 | 4588 | target = MIN((int64_t)(asize - arc_c), |
424fd7c3 TS |
4589 | (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + |
4590 | zfs_refcount_count(&arc_mru->arcs_size) + ameta - arc_p)); | |
ca0bf58d PS |
4591 | |
4592 | /* | |
4593 | * If we're below arc_meta_min, always prefer to evict data. | |
4594 | * Otherwise, try to satisfy the requested number of bytes to | |
4595 | * evict from the type which contains older buffers; in an | |
4596 | * effort to keep newer buffers in the cache regardless of their | |
4597 | * type. If we cannot satisfy the number of bytes from this | |
4598 | * type, spill over into the next type. | |
4599 | */ | |
5dd92909 | 4600 | if (arc_evict_type(arc_mru) == ARC_BUFC_METADATA && |
37fb3e43 | 4601 | ameta > arc_meta_min) { |
5dd92909 | 4602 | bytes = arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4603 | total_evicted += bytes; |
4604 | ||
4605 | /* | |
4606 | * If we couldn't evict our target number of bytes from | |
4607 | * metadata, we try to get the rest from data. | |
4608 | */ | |
4609 | target -= bytes; | |
4610 | ||
4611 | total_evicted += | |
5dd92909 | 4612 | arc_evict_impl(arc_mru, 0, target, ARC_BUFC_DATA); |
ca0bf58d | 4613 | } else { |
5dd92909 | 4614 | bytes = arc_evict_impl(arc_mru, 0, target, ARC_BUFC_DATA); |
ca0bf58d PS |
4615 | total_evicted += bytes; |
4616 | ||
4617 | /* | |
4618 | * If we couldn't evict our target number of bytes from | |
4619 | * data, we try to get the rest from metadata. | |
4620 | */ | |
4621 | target -= bytes; | |
4622 | ||
4623 | total_evicted += | |
5dd92909 | 4624 | arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4625 | } |
4626 | ||
0405eeea RE |
4627 | /* |
4628 | * Re-sum ARC stats after the first round of evictions. | |
4629 | */ | |
c4c162c1 AM |
4630 | asize = aggsum_value(&arc_sums.arcstat_size); |
4631 | ameta = aggsum_value(&arc_sums.arcstat_meta_used); | |
0405eeea RE |
4632 | |
4633 | ||
ca0bf58d PS |
4634 | /* |
4635 | * Adjust MFU size | |
4636 | * | |
4637 | * Now that we've tried to evict enough from the MRU to get its | |
4638 | * size back to arc_p, if we're still above the target cache | |
4639 | * size, we evict the rest from the MFU. | |
4640 | */ | |
37fb3e43 | 4641 | target = asize - arc_c; |
ca0bf58d | 4642 | |
5dd92909 | 4643 | if (arc_evict_type(arc_mfu) == ARC_BUFC_METADATA && |
37fb3e43 | 4644 | ameta > arc_meta_min) { |
5dd92909 | 4645 | bytes = arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4646 | total_evicted += bytes; |
4647 | ||
4648 | /* | |
4649 | * If we couldn't evict our target number of bytes from | |
4650 | * metadata, we try to get the rest from data. | |
4651 | */ | |
4652 | target -= bytes; | |
4653 | ||
4654 | total_evicted += | |
5dd92909 | 4655 | arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_DATA); |
ca0bf58d | 4656 | } else { |
5dd92909 | 4657 | bytes = arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_DATA); |
ca0bf58d PS |
4658 | total_evicted += bytes; |
4659 | ||
4660 | /* | |
4661 | * If we couldn't evict our target number of bytes from | |
4662 | * data, we try to get the rest from data. | |
4663 | */ | |
4664 | target -= bytes; | |
4665 | ||
4666 | total_evicted += | |
5dd92909 | 4667 | arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4668 | } |
4669 | ||
4670 | /* | |
4671 | * Adjust ghost lists | |
4672 | * | |
4673 | * In addition to the above, the ARC also defines target values | |
4674 | * for the ghost lists. The sum of the mru list and mru ghost | |
4675 | * list should never exceed the target size of the cache, and | |
4676 | * the sum of the mru list, mfu list, mru ghost list, and mfu | |
4677 | * ghost list should never exceed twice the target size of the | |
4678 | * cache. The following logic enforces these limits on the ghost | |
4679 | * caches, and evicts from them as needed. | |
4680 | */ | |
424fd7c3 TS |
4681 | target = zfs_refcount_count(&arc_mru->arcs_size) + |
4682 | zfs_refcount_count(&arc_mru_ghost->arcs_size) - arc_c; | |
ca0bf58d | 4683 | |
5dd92909 | 4684 | bytes = arc_evict_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA); |
ca0bf58d PS |
4685 | total_evicted += bytes; |
4686 | ||
4687 | target -= bytes; | |
4688 | ||
4689 | total_evicted += | |
5dd92909 | 4690 | arc_evict_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4691 | |
4692 | /* | |
4693 | * We assume the sum of the mru list and mfu list is less than | |
4694 | * or equal to arc_c (we enforced this above), which means we | |
4695 | * can use the simpler of the two equations below: | |
4696 | * | |
4697 | * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c | |
4698 | * mru ghost + mfu ghost <= arc_c | |
4699 | */ | |
424fd7c3 TS |
4700 | target = zfs_refcount_count(&arc_mru_ghost->arcs_size) + |
4701 | zfs_refcount_count(&arc_mfu_ghost->arcs_size) - arc_c; | |
ca0bf58d | 4702 | |
5dd92909 | 4703 | bytes = arc_evict_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA); |
ca0bf58d PS |
4704 | total_evicted += bytes; |
4705 | ||
4706 | target -= bytes; | |
4707 | ||
4708 | total_evicted += | |
5dd92909 | 4709 | arc_evict_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA); |
ca0bf58d PS |
4710 | |
4711 | return (total_evicted); | |
4712 | } | |
4713 | ||
ca0bf58d PS |
4714 | void |
4715 | arc_flush(spa_t *spa, boolean_t retry) | |
ab26409d | 4716 | { |
ca0bf58d | 4717 | uint64_t guid = 0; |
94520ca4 | 4718 | |
bc888666 | 4719 | /* |
d3c2ae1c | 4720 | * If retry is B_TRUE, a spa must not be specified since we have |
ca0bf58d PS |
4721 | * no good way to determine if all of a spa's buffers have been |
4722 | * evicted from an arc state. | |
bc888666 | 4723 | */ |
ca0bf58d | 4724 | ASSERT(!retry || spa == 0); |
d164b209 | 4725 | |
b9541d6b | 4726 | if (spa != NULL) |
3541dc6d | 4727 | guid = spa_load_guid(spa); |
d164b209 | 4728 | |
ca0bf58d PS |
4729 | (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry); |
4730 | (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry); | |
4731 | ||
4732 | (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry); | |
4733 | (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry); | |
4734 | ||
4735 | (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry); | |
4736 | (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry); | |
34dc7c2f | 4737 | |
ca0bf58d PS |
4738 | (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry); |
4739 | (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry); | |
34dc7c2f BB |
4740 | } |
4741 | ||
c9c9c1e2 | 4742 | void |
3ec34e55 | 4743 | arc_reduce_target_size(int64_t to_free) |
34dc7c2f | 4744 | { |
c4c162c1 | 4745 | uint64_t asize = aggsum_value(&arc_sums.arcstat_size); |
3442c2a0 MA |
4746 | |
4747 | /* | |
4748 | * All callers want the ARC to actually evict (at least) this much | |
4749 | * memory. Therefore we reduce from the lower of the current size and | |
4750 | * the target size. This way, even if arc_c is much higher than | |
4751 | * arc_size (as can be the case after many calls to arc_freed(), we will | |
4752 | * immediately have arc_c < arc_size and therefore the arc_evict_zthr | |
4753 | * will evict. | |
4754 | */ | |
4755 | uint64_t c = MIN(arc_c, asize); | |
34dc7c2f | 4756 | |
1b8951b3 TC |
4757 | if (c > to_free && c - to_free > arc_c_min) { |
4758 | arc_c = c - to_free; | |
ca67b33a | 4759 | atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift)); |
34dc7c2f BB |
4760 | if (arc_p > arc_c) |
4761 | arc_p = (arc_c >> 1); | |
4762 | ASSERT(arc_c >= arc_c_min); | |
4763 | ASSERT((int64_t)arc_p >= 0); | |
1b8951b3 TC |
4764 | } else { |
4765 | arc_c = arc_c_min; | |
34dc7c2f BB |
4766 | } |
4767 | ||
3ec34e55 | 4768 | if (asize > arc_c) { |
5dd92909 MA |
4769 | /* See comment in arc_evict_cb_check() on why lock+flag */ |
4770 | mutex_enter(&arc_evict_lock); | |
4771 | arc_evict_needed = B_TRUE; | |
4772 | mutex_exit(&arc_evict_lock); | |
4773 | zthr_wakeup(arc_evict_zthr); | |
3ec34e55 | 4774 | } |
34dc7c2f | 4775 | } |
ca67b33a MA |
4776 | |
4777 | /* | |
4778 | * Determine if the system is under memory pressure and is asking | |
d3c2ae1c | 4779 | * to reclaim memory. A return value of B_TRUE indicates that the system |
ca67b33a MA |
4780 | * is under memory pressure and that the arc should adjust accordingly. |
4781 | */ | |
c9c9c1e2 | 4782 | boolean_t |
ca67b33a MA |
4783 | arc_reclaim_needed(void) |
4784 | { | |
4785 | return (arc_available_memory() < 0); | |
4786 | } | |
4787 | ||
c9c9c1e2 | 4788 | void |
3ec34e55 | 4789 | arc_kmem_reap_soon(void) |
34dc7c2f BB |
4790 | { |
4791 | size_t i; | |
4792 | kmem_cache_t *prev_cache = NULL; | |
4793 | kmem_cache_t *prev_data_cache = NULL; | |
4794 | extern kmem_cache_t *zio_buf_cache[]; | |
4795 | extern kmem_cache_t *zio_data_buf_cache[]; | |
34dc7c2f | 4796 | |
70f02287 | 4797 | #ifdef _KERNEL |
c4c162c1 AM |
4798 | if ((aggsum_compare(&arc_sums.arcstat_meta_used, |
4799 | arc_meta_limit) >= 0) && zfs_arc_meta_prune) { | |
f6046738 BB |
4800 | /* |
4801 | * We are exceeding our meta-data cache limit. | |
4802 | * Prune some entries to release holds on meta-data. | |
4803 | */ | |
ef5b2e10 | 4804 | arc_prune_async(zfs_arc_meta_prune); |
f6046738 | 4805 | } |
70f02287 BB |
4806 | #if defined(_ILP32) |
4807 | /* | |
4808 | * Reclaim unused memory from all kmem caches. | |
4809 | */ | |
4810 | kmem_reap(); | |
4811 | #endif | |
4812 | #endif | |
f6046738 | 4813 | |
34dc7c2f | 4814 | for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { |
70f02287 | 4815 | #if defined(_ILP32) |
d0c614ec | 4816 | /* reach upper limit of cache size on 32-bit */ |
4817 | if (zio_buf_cache[i] == NULL) | |
4818 | break; | |
4819 | #endif | |
34dc7c2f BB |
4820 | if (zio_buf_cache[i] != prev_cache) { |
4821 | prev_cache = zio_buf_cache[i]; | |
4822 | kmem_cache_reap_now(zio_buf_cache[i]); | |
4823 | } | |
4824 | if (zio_data_buf_cache[i] != prev_data_cache) { | |
4825 | prev_data_cache = zio_data_buf_cache[i]; | |
4826 | kmem_cache_reap_now(zio_data_buf_cache[i]); | |
4827 | } | |
4828 | } | |
ca0bf58d | 4829 | kmem_cache_reap_now(buf_cache); |
b9541d6b CW |
4830 | kmem_cache_reap_now(hdr_full_cache); |
4831 | kmem_cache_reap_now(hdr_l2only_cache); | |
ca577779 | 4832 | kmem_cache_reap_now(zfs_btree_leaf_cache); |
7564073e | 4833 | abd_cache_reap_now(); |
34dc7c2f BB |
4834 | } |
4835 | ||
3ec34e55 BL |
4836 | /* ARGSUSED */ |
4837 | static boolean_t | |
5dd92909 | 4838 | arc_evict_cb_check(void *arg, zthr_t *zthr) |
3ec34e55 | 4839 | { |
1531506d | 4840 | #ifdef ZFS_DEBUG |
3ec34e55 BL |
4841 | /* |
4842 | * This is necessary in order to keep the kstat information | |
4843 | * up to date for tools that display kstat data such as the | |
4844 | * mdb ::arc dcmd and the Linux crash utility. These tools | |
4845 | * typically do not call kstat's update function, but simply | |
4846 | * dump out stats from the most recent update. Without | |
4847 | * this call, these commands may show stale stats for the | |
1531506d RM |
4848 | * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even |
4849 | * with this call, the data might be out of date if the | |
4850 | * evict thread hasn't been woken recently; but that should | |
4851 | * suffice. The arc_state_t structures can be queried | |
4852 | * directly if more accurate information is needed. | |
3ec34e55 BL |
4853 | */ |
4854 | if (arc_ksp != NULL) | |
4855 | arc_ksp->ks_update(arc_ksp, KSTAT_READ); | |
1531506d | 4856 | #endif |
3ec34e55 BL |
4857 | |
4858 | /* | |
3442c2a0 MA |
4859 | * We have to rely on arc_wait_for_eviction() to tell us when to |
4860 | * evict, rather than checking if we are overflowing here, so that we | |
4861 | * are sure to not leave arc_wait_for_eviction() waiting on aew_cv. | |
4862 | * If we have become "not overflowing" since arc_wait_for_eviction() | |
4863 | * checked, we need to wake it up. We could broadcast the CV here, | |
4864 | * but arc_wait_for_eviction() may have not yet gone to sleep. We | |
4865 | * would need to use a mutex to ensure that this function doesn't | |
4866 | * broadcast until arc_wait_for_eviction() has gone to sleep (e.g. | |
4867 | * the arc_evict_lock). However, the lock ordering of such a lock | |
4868 | * would necessarily be incorrect with respect to the zthr_lock, | |
4869 | * which is held before this function is called, and is held by | |
4870 | * arc_wait_for_eviction() when it calls zthr_wakeup(). | |
3ec34e55 | 4871 | */ |
5dd92909 | 4872 | return (arc_evict_needed); |
3ec34e55 BL |
4873 | } |
4874 | ||
302f753f | 4875 | /* |
5dd92909 | 4876 | * Keep arc_size under arc_c by running arc_evict which evicts data |
3ec34e55 | 4877 | * from the ARC. |
302f753f | 4878 | */ |
867959b5 | 4879 | /* ARGSUSED */ |
61c3391a | 4880 | static void |
5dd92909 | 4881 | arc_evict_cb(void *arg, zthr_t *zthr) |
34dc7c2f | 4882 | { |
3ec34e55 BL |
4883 | uint64_t evicted = 0; |
4884 | fstrans_cookie_t cookie = spl_fstrans_mark(); | |
34dc7c2f | 4885 | |
3ec34e55 | 4886 | /* Evict from cache */ |
5dd92909 | 4887 | evicted = arc_evict(); |
34dc7c2f | 4888 | |
3ec34e55 BL |
4889 | /* |
4890 | * If evicted is zero, we couldn't evict anything | |
5dd92909 | 4891 | * via arc_evict(). This could be due to hash lock |
3ec34e55 BL |
4892 | * collisions, but more likely due to the majority of |
4893 | * arc buffers being unevictable. Therefore, even if | |
4894 | * arc_size is above arc_c, another pass is unlikely to | |
4895 | * be helpful and could potentially cause us to enter an | |
4896 | * infinite loop. Additionally, zthr_iscancelled() is | |
4897 | * checked here so that if the arc is shutting down, the | |
5dd92909 | 4898 | * broadcast will wake any remaining arc evict waiters. |
3ec34e55 | 4899 | */ |
5dd92909 MA |
4900 | mutex_enter(&arc_evict_lock); |
4901 | arc_evict_needed = !zthr_iscancelled(arc_evict_zthr) && | |
c4c162c1 | 4902 | evicted > 0 && aggsum_compare(&arc_sums.arcstat_size, arc_c) > 0; |
5dd92909 | 4903 | if (!arc_evict_needed) { |
d3c2ae1c | 4904 | /* |
3ec34e55 BL |
4905 | * We're either no longer overflowing, or we |
4906 | * can't evict anything more, so we should wake | |
4907 | * arc_get_data_impl() sooner. | |
d3c2ae1c | 4908 | */ |
3442c2a0 MA |
4909 | arc_evict_waiter_t *aw; |
4910 | while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) { | |
4911 | cv_broadcast(&aw->aew_cv); | |
4912 | } | |
4913 | arc_set_need_free(); | |
3ec34e55 | 4914 | } |
5dd92909 | 4915 | mutex_exit(&arc_evict_lock); |
3ec34e55 | 4916 | spl_fstrans_unmark(cookie); |
3ec34e55 BL |
4917 | } |
4918 | ||
4919 | /* ARGSUSED */ | |
4920 | static boolean_t | |
4921 | arc_reap_cb_check(void *arg, zthr_t *zthr) | |
4922 | { | |
4923 | int64_t free_memory = arc_available_memory(); | |
8a171ccd | 4924 | static int reap_cb_check_counter = 0; |
3ec34e55 BL |
4925 | |
4926 | /* | |
4927 | * If a kmem reap is already active, don't schedule more. We must | |
4928 | * check for this because kmem_cache_reap_soon() won't actually | |
4929 | * block on the cache being reaped (this is to prevent callers from | |
4930 | * becoming implicitly blocked by a system-wide kmem reap -- which, | |
4931 | * on a system with many, many full magazines, can take minutes). | |
4932 | */ | |
4933 | if (!kmem_cache_reap_active() && free_memory < 0) { | |
34dc7c2f | 4934 | |
3ec34e55 BL |
4935 | arc_no_grow = B_TRUE; |
4936 | arc_warm = B_TRUE; | |
0a252dae | 4937 | /* |
3ec34e55 BL |
4938 | * Wait at least zfs_grow_retry (default 5) seconds |
4939 | * before considering growing. | |
0a252dae | 4940 | */ |
3ec34e55 BL |
4941 | arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); |
4942 | return (B_TRUE); | |
4943 | } else if (free_memory < arc_c >> arc_no_grow_shift) { | |
4944 | arc_no_grow = B_TRUE; | |
4945 | } else if (gethrtime() >= arc_growtime) { | |
4946 | arc_no_grow = B_FALSE; | |
4947 | } | |
0a252dae | 4948 | |
8a171ccd SG |
4949 | /* |
4950 | * Called unconditionally every 60 seconds to reclaim unused | |
4951 | * zstd compression and decompression context. This is done | |
4952 | * here to avoid the need for an independent thread. | |
4953 | */ | |
4954 | if (!((reap_cb_check_counter++) % 60)) | |
4955 | zfs_zstd_cache_reap_now(); | |
4956 | ||
3ec34e55 BL |
4957 | return (B_FALSE); |
4958 | } | |
34dc7c2f | 4959 | |
3ec34e55 BL |
4960 | /* |
4961 | * Keep enough free memory in the system by reaping the ARC's kmem | |
4962 | * caches. To cause more slabs to be reapable, we may reduce the | |
5dd92909 | 4963 | * target size of the cache (arc_c), causing the arc_evict_cb() |
3ec34e55 BL |
4964 | * to free more buffers. |
4965 | */ | |
4966 | /* ARGSUSED */ | |
61c3391a | 4967 | static void |
3ec34e55 BL |
4968 | arc_reap_cb(void *arg, zthr_t *zthr) |
4969 | { | |
4970 | int64_t free_memory; | |
4971 | fstrans_cookie_t cookie = spl_fstrans_mark(); | |
34dc7c2f | 4972 | |
3ec34e55 BL |
4973 | /* |
4974 | * Kick off asynchronous kmem_reap()'s of all our caches. | |
4975 | */ | |
4976 | arc_kmem_reap_soon(); | |
6a8f9b6b | 4977 | |
3ec34e55 BL |
4978 | /* |
4979 | * Wait at least arc_kmem_cache_reap_retry_ms between | |
4980 | * arc_kmem_reap_soon() calls. Without this check it is possible to | |
4981 | * end up in a situation where we spend lots of time reaping | |
4982 | * caches, while we're near arc_c_min. Waiting here also gives the | |
4983 | * subsequent free memory check a chance of finding that the | |
4984 | * asynchronous reap has already freed enough memory, and we don't | |
4985 | * need to call arc_reduce_target_size(). | |
4986 | */ | |
4987 | delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); | |
34dc7c2f | 4988 | |
3ec34e55 BL |
4989 | /* |
4990 | * Reduce the target size as needed to maintain the amount of free | |
4991 | * memory in the system at a fraction of the arc_size (1/128th by | |
4992 | * default). If oversubscribed (free_memory < 0) then reduce the | |
4993 | * target arc_size by the deficit amount plus the fractional | |
bf169e9f | 4994 | * amount. If free memory is positive but less than the fractional |
3ec34e55 BL |
4995 | * amount, reduce by what is needed to hit the fractional amount. |
4996 | */ | |
4997 | free_memory = arc_available_memory(); | |
34dc7c2f | 4998 | |
3ec34e55 BL |
4999 | int64_t to_free = |
5000 | (arc_c >> arc_shrink_shift) - free_memory; | |
5001 | if (to_free > 0) { | |
3ec34e55 | 5002 | arc_reduce_target_size(to_free); |
ca0bf58d | 5003 | } |
ca0bf58d | 5004 | spl_fstrans_unmark(cookie); |
ca0bf58d PS |
5005 | } |
5006 | ||
7cb67b45 BB |
5007 | #ifdef _KERNEL |
5008 | /* | |
302f753f BB |
5009 | * Determine the amount of memory eligible for eviction contained in the |
5010 | * ARC. All clean data reported by the ghost lists can always be safely | |
5011 | * evicted. Due to arc_c_min, the same does not hold for all clean data | |
5012 | * contained by the regular mru and mfu lists. | |
5013 | * | |
5014 | * In the case of the regular mru and mfu lists, we need to report as | |
5015 | * much clean data as possible, such that evicting that same reported | |
5016 | * data will not bring arc_size below arc_c_min. Thus, in certain | |
5017 | * circumstances, the total amount of clean data in the mru and mfu | |
5018 | * lists might not actually be evictable. | |
5019 | * | |
5020 | * The following two distinct cases are accounted for: | |
5021 | * | |
5022 | * 1. The sum of the amount of dirty data contained by both the mru and | |
5023 | * mfu lists, plus the ARC's other accounting (e.g. the anon list), | |
5024 | * is greater than or equal to arc_c_min. | |
5025 | * (i.e. amount of dirty data >= arc_c_min) | |
5026 | * | |
5027 | * This is the easy case; all clean data contained by the mru and mfu | |
5028 | * lists is evictable. Evicting all clean data can only drop arc_size | |
5029 | * to the amount of dirty data, which is greater than arc_c_min. | |
5030 | * | |
5031 | * 2. The sum of the amount of dirty data contained by both the mru and | |
5032 | * mfu lists, plus the ARC's other accounting (e.g. the anon list), | |
5033 | * is less than arc_c_min. | |
5034 | * (i.e. arc_c_min > amount of dirty data) | |
5035 | * | |
5036 | * 2.1. arc_size is greater than or equal arc_c_min. | |
5037 | * (i.e. arc_size >= arc_c_min > amount of dirty data) | |
5038 | * | |
5039 | * In this case, not all clean data from the regular mru and mfu | |
5040 | * lists is actually evictable; we must leave enough clean data | |
5041 | * to keep arc_size above arc_c_min. Thus, the maximum amount of | |
5042 | * evictable data from the two lists combined, is exactly the | |
5043 | * difference between arc_size and arc_c_min. | |
5044 | * | |
5045 | * 2.2. arc_size is less than arc_c_min | |
5046 | * (i.e. arc_c_min > arc_size > amount of dirty data) | |
5047 | * | |
5048 | * In this case, none of the data contained in the mru and mfu | |
5049 | * lists is evictable, even if it's clean. Since arc_size is | |
5050 | * already below arc_c_min, evicting any more would only | |
5051 | * increase this negative difference. | |
7cb67b45 | 5052 | */ |
7cb67b45 | 5053 | |
7cb67b45 BB |
5054 | #endif /* _KERNEL */ |
5055 | ||
34dc7c2f BB |
5056 | /* |
5057 | * Adapt arc info given the number of bytes we are trying to add and | |
4e33ba4c | 5058 | * the state that we are coming from. This function is only called |
34dc7c2f BB |
5059 | * when we are adding new content to the cache. |
5060 | */ | |
5061 | static void | |
5062 | arc_adapt(int bytes, arc_state_t *state) | |
5063 | { | |
5064 | int mult; | |
728d6ae9 | 5065 | uint64_t arc_p_min = (arc_c >> arc_p_min_shift); |
424fd7c3 TS |
5066 | int64_t mrug_size = zfs_refcount_count(&arc_mru_ghost->arcs_size); |
5067 | int64_t mfug_size = zfs_refcount_count(&arc_mfu_ghost->arcs_size); | |
34dc7c2f | 5068 | |
34dc7c2f BB |
5069 | ASSERT(bytes > 0); |
5070 | /* | |
5071 | * Adapt the target size of the MRU list: | |
5072 | * - if we just hit in the MRU ghost list, then increase | |
5073 | * the target size of the MRU list. | |
5074 | * - if we just hit in the MFU ghost list, then increase | |
5075 | * the target size of the MFU list by decreasing the | |
5076 | * target size of the MRU list. | |
5077 | */ | |
5078 | if (state == arc_mru_ghost) { | |
36da08ef | 5079 | mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size); |
62422785 PS |
5080 | if (!zfs_arc_p_dampener_disable) |
5081 | mult = MIN(mult, 10); /* avoid wild arc_p adjustment */ | |
34dc7c2f | 5082 | |
728d6ae9 | 5083 | arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult); |
34dc7c2f | 5084 | } else if (state == arc_mfu_ghost) { |
d164b209 BB |
5085 | uint64_t delta; |
5086 | ||
36da08ef | 5087 | mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size); |
62422785 PS |
5088 | if (!zfs_arc_p_dampener_disable) |
5089 | mult = MIN(mult, 10); | |
34dc7c2f | 5090 | |
d164b209 | 5091 | delta = MIN(bytes * mult, arc_p); |
728d6ae9 | 5092 | arc_p = MAX(arc_p_min, arc_p - delta); |
34dc7c2f BB |
5093 | } |
5094 | ASSERT((int64_t)arc_p >= 0); | |
5095 | ||
3ec34e55 BL |
5096 | /* |
5097 | * Wake reap thread if we do not have any available memory | |
5098 | */ | |
ca67b33a | 5099 | if (arc_reclaim_needed()) { |
3ec34e55 | 5100 | zthr_wakeup(arc_reap_zthr); |
ca67b33a MA |
5101 | return; |
5102 | } | |
5103 | ||
34dc7c2f BB |
5104 | if (arc_no_grow) |
5105 | return; | |
5106 | ||
5107 | if (arc_c >= arc_c_max) | |
5108 | return; | |
5109 | ||
5110 | /* | |
5111 | * If we're within (2 * maxblocksize) bytes of the target | |
5112 | * cache size, increment the target cache size | |
5113 | */ | |
935434ef | 5114 | ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT); |
c4c162c1 | 5115 | if (aggsum_upper_bound(&arc_sums.arcstat_size) >= |
17ca3018 | 5116 | arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) { |
34dc7c2f BB |
5117 | atomic_add_64(&arc_c, (int64_t)bytes); |
5118 | if (arc_c > arc_c_max) | |
5119 | arc_c = arc_c_max; | |
5120 | else if (state == arc_anon) | |
5121 | atomic_add_64(&arc_p, (int64_t)bytes); | |
5122 | if (arc_p > arc_c) | |
5123 | arc_p = arc_c; | |
5124 | } | |
5125 | ASSERT((int64_t)arc_p >= 0); | |
5126 | } | |
5127 | ||
5128 | /* | |
ca0bf58d PS |
5129 | * Check if arc_size has grown past our upper threshold, determined by |
5130 | * zfs_arc_overflow_shift. | |
34dc7c2f | 5131 | */ |
f7de776d | 5132 | static arc_ovf_level_t |
ca0bf58d | 5133 | arc_is_overflowing(void) |
34dc7c2f | 5134 | { |
ca0bf58d | 5135 | /* Always allow at least one block of overflow */ |
5a902f5a | 5136 | int64_t overflow = MAX(SPA_MAXBLOCKSIZE, |
ca0bf58d | 5137 | arc_c >> zfs_arc_overflow_shift); |
34dc7c2f | 5138 | |
37fb3e43 PD |
5139 | /* |
5140 | * We just compare the lower bound here for performance reasons. Our | |
5141 | * primary goals are to make sure that the arc never grows without | |
5142 | * bound, and that it can reach its maximum size. This check | |
5143 | * accomplishes both goals. The maximum amount we could run over by is | |
5144 | * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block | |
5145 | * in the ARC. In practice, that's in the tens of MB, which is low | |
5146 | * enough to be safe. | |
5147 | */ | |
f7de776d AM |
5148 | int64_t over = aggsum_lower_bound(&arc_sums.arcstat_size) - |
5149 | arc_c - overflow / 2; | |
5150 | return (over < 0 ? ARC_OVF_NONE : | |
5151 | over < overflow ? ARC_OVF_SOME : ARC_OVF_SEVERE); | |
34dc7c2f BB |
5152 | } |
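/*
 * Illustrative arithmetic for the check above (example values, not from
 * the original source): with arc_c = 4 GiB and zfs_arc_overflow_shift = 8,
 * overflow = MAX(16 MiB, 16 MiB) = 16 MiB. arc_is_overflowing() then
 * returns ARC_OVF_NONE while arcstat_size is below arc_c + 8 MiB,
 * ARC_OVF_SOME up to arc_c + 24 MiB, and ARC_OVF_SEVERE beyond that.
 */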
5153 | ||
a6255b7f | 5154 | static abd_t * |
e111c802 MM |
5155 | arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag, |
5156 | boolean_t do_adapt) | |
a6255b7f DQ |
5157 | { |
5158 | arc_buf_contents_t type = arc_buf_type(hdr); | |
5159 | ||
e111c802 | 5160 | arc_get_data_impl(hdr, size, tag, do_adapt); |
a6255b7f DQ |
5161 | if (type == ARC_BUFC_METADATA) { |
5162 | return (abd_alloc(size, B_TRUE)); | |
5163 | } else { | |
5164 | ASSERT(type == ARC_BUFC_DATA); | |
5165 | return (abd_alloc(size, B_FALSE)); | |
5166 | } | |
5167 | } | |
5168 | ||
5169 | static void * | |
5170 | arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag) | |
5171 | { | |
5172 | arc_buf_contents_t type = arc_buf_type(hdr); | |
5173 | ||
e111c802 | 5174 | arc_get_data_impl(hdr, size, tag, B_TRUE); |
a6255b7f DQ |
5175 | if (type == ARC_BUFC_METADATA) { |
5176 | return (zio_buf_alloc(size)); | |
5177 | } else { | |
5178 | ASSERT(type == ARC_BUFC_DATA); | |
5179 | return (zio_data_buf_alloc(size)); | |
5180 | } | |
5181 | } | |
5182 | ||
3442c2a0 MA |
5183 | /* |
5184 | * Wait for the specified amount of data (in bytes) to be evicted from the | |
5185 | * ARC, and for there to be sufficient free memory in the system. Waiting for | |
5186 | * eviction ensures that the memory used by the ARC decreases. Waiting for | |
5187 | * free memory ensures that the system won't run out of free pages, regardless | |
5188 | * of ARC behavior and settings. See arc_lowmem_init(). | |
5189 | */ | |
5190 | void | |
5191 | arc_wait_for_eviction(uint64_t amount) | |
5192 | { | |
f7de776d AM |
5193 | switch (arc_is_overflowing()) { |
5194 | case ARC_OVF_NONE: | |
5195 | return; | |
5196 | case ARC_OVF_SOME: | |
5197 | /* | |
5198 | * This is a bit racy without taking arc_evict_lock, but the | |
5199 | * worst that can happen is that we either call zthr_wakeup() an |
5200 | * extra time due to a race with another thread here, or that the |
5201 | * flag gets cleared by arc_evict_cb(). The latter is unlikely |
5202 | * given the big hysteresis, and also unimportant, since at this |
5203 | * level of overflow the eviction is purely advisory. At the same |
5204 | * time, taking the global lock here every time without waiting |
5205 | * for the actual eviction would create significant lock contention. |
5206 | */ | |
5207 | if (!arc_evict_needed) { | |
5208 | arc_evict_needed = B_TRUE; | |
5209 | zthr_wakeup(arc_evict_zthr); | |
5210 | } | |
5211 | return; | |
5212 | case ARC_OVF_SEVERE: | |
5213 | default: | |
5214 | { | |
5215 | arc_evict_waiter_t aw; | |
5216 | list_link_init(&aw.aew_node); | |
5217 | cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL); | |
3442c2a0 | 5218 | |
f7de776d AM |
5219 | uint64_t last_count = 0; |
5220 | mutex_enter(&arc_evict_lock); | |
5221 | if (!list_is_empty(&arc_evict_waiters)) { | |
5222 | arc_evict_waiter_t *last = | |
5223 | list_tail(&arc_evict_waiters); | |
5224 | last_count = last->aew_count; | |
5225 | } else if (!arc_evict_needed) { | |
5226 | arc_evict_needed = B_TRUE; | |
5227 | zthr_wakeup(arc_evict_zthr); | |
5228 | } | |
5229 | /* | |
5230 | * Note, the last waiter's count may be less than | |
5231 | * arc_evict_count if we are low on memory, in which |
5232 | * case arc_evict_state_impl() may have deferred | |
5233 | * wakeups (but still incremented arc_evict_count). | |
5234 | */ | |
5235 | aw.aew_count = MAX(last_count, arc_evict_count) + amount; | |
3442c2a0 | 5236 | |
f7de776d | 5237 | list_insert_tail(&arc_evict_waiters, &aw); |
3442c2a0 | 5238 | |
f7de776d | 5239 | arc_set_need_free(); |
3442c2a0 | 5240 | |
f7de776d AM |
5241 | DTRACE_PROBE3(arc__wait__for__eviction, |
5242 | uint64_t, amount, | |
5243 | uint64_t, arc_evict_count, | |
5244 | uint64_t, aw.aew_count); | |
3442c2a0 | 5245 | |
f7de776d AM |
5246 | /* |
5247 | * We will be woken up either when arc_evict_count reaches | |
5248 | * aew_count, or when the ARC is no longer overflowing and | |
5249 | * eviction completes. | |
5250 | * In case of a "false" wakeup, we will still be on the list. |
5251 | */ | |
5252 | do { | |
3442c2a0 | 5253 | cv_wait(&aw.aew_cv, &arc_evict_lock); |
f7de776d AM |
5254 | } while (list_link_active(&aw.aew_node)); |
5255 | mutex_exit(&arc_evict_lock); | |
3442c2a0 | 5256 | |
f7de776d AM |
5257 | cv_destroy(&aw.aew_cv); |
5258 | } | |
3442c2a0 | 5259 | } |
3442c2a0 MA |
5260 | } |
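/*
 * Example of the waiter accounting above (illustrative numbers, not from
 * the original source): if arc_evict_count is currently 1000, the last
 * queued waiter has aew_count = 1200, and a new caller asks to wait for
 * 100 bytes, the new waiter is queued with aew_count = MAX(1200, 1000) +
 * 100 = 1300. Waiters are signaled in queue order as the running
 * arc_evict_count passes each aew_count, so earlier waiters always wake
 * first.
 */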
5261 | ||
34dc7c2f | 5262 | /* |
d3c2ae1c GW |
5263 | * Allocate a block and return it to the caller. If we are hitting the |
5264 | * hard limit for the cache size, we must sleep, waiting for the eviction | |
5265 | * thread to catch up. If we're past the target size but below the hard | |
5266 | * limit, we'll only signal the reclaim thread and continue on. | |
34dc7c2f | 5267 | */ |
a6255b7f | 5268 | static void |
e111c802 MM |
5269 | arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag, |
5270 | boolean_t do_adapt) | |
34dc7c2f | 5271 | { |
a6255b7f DQ |
5272 | arc_state_t *state = hdr->b_l1hdr.b_state; |
5273 | arc_buf_contents_t type = arc_buf_type(hdr); | |
34dc7c2f | 5274 | |
e111c802 MM |
5275 | if (do_adapt) |
5276 | arc_adapt(size, state); | |
34dc7c2f BB |
5277 | |
5278 | /* | |
3442c2a0 MA |
5279 | * If arc_size is currently overflowing, we must be adding data |
5280 | * faster than we are evicting. To ensure we don't compound the | |
ca0bf58d | 5281 | * problem by adding more data and forcing arc_size to grow even |
3442c2a0 MA |
5282 | * further past its target size, we wait for the eviction thread to |
5283 | * make some progress. We also wait for there to be sufficient free | |
5284 | * memory in the system, as measured by arc_free_memory(). | |
5285 | * | |
5286 | * Specifically, we wait for zfs_arc_eviction_pct percent of the | |
5287 | * requested size to be evicted. This should be more than 100%, to | |
5288 | * ensure that progress is also made towards getting arc_size |
5289 | * under arc_c. See the comment above zfs_arc_eviction_pct. | |
34dc7c2f | 5290 | */ |
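	/*
	 * For example (illustrative numbers, not from the original source):
	 * with zfs_arc_eviction_pct = 200, a 128 KiB allocation made while
	 * the ARC is overflowing waits for 256 KiB to be evicted, so each
	 * allocation also makes net progress toward shrinking arc_size.
	 */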
f7de776d | 5291 | arc_wait_for_eviction(size * zfs_arc_eviction_pct / 100); |
ab26409d | 5292 | |
d3c2ae1c | 5293 | VERIFY3U(hdr->b_type, ==, type); |
da8ccd0e | 5294 | if (type == ARC_BUFC_METADATA) { |
ca0bf58d PS |
5295 | arc_space_consume(size, ARC_SPACE_META); |
5296 | } else { | |
ca0bf58d | 5297 | arc_space_consume(size, ARC_SPACE_DATA); |
da8ccd0e PS |
5298 | } |
5299 | ||
34dc7c2f BB |
5300 | /* |
5301 | * Update the state size. Note that ghost states have a | |
5302 | * "ghost size" and so don't need to be updated. | |
5303 | */ | |
d3c2ae1c | 5304 | if (!GHOST_STATE(state)) { |
34dc7c2f | 5305 | |
424fd7c3 | 5306 | (void) zfs_refcount_add_many(&state->arcs_size, size, tag); |
ca0bf58d PS |
5307 | |
5308 | /* | |
5309 | * If this is reached via arc_read, the link is | |
5310 | * protected by the hash lock. If reached via | |
5311 | * arc_buf_alloc, the header should not be accessed by | |
5312 | * any other thread. And, if reached via arc_read_done, | |
5313 | * the hash lock will protect it if it's found in the | |
5314 | * hash table; otherwise no other thread should be | |
5315 | * trying to [add|remove]_reference it. | |
5316 | */ | |
5317 | if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { | |
424fd7c3 TS |
5318 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
5319 | (void) zfs_refcount_add_many(&state->arcs_esize[type], | |
d3c2ae1c | 5320 | size, tag); |
34dc7c2f | 5321 | } |
d3c2ae1c | 5322 | |
34dc7c2f BB |
5323 | /* |
5324 | * If we are growing the cache, and we are adding anonymous | |
5325 | * data, and we have outgrown arc_p, update arc_p | |
5326 | */ | |
c4c162c1 | 5327 | if (aggsum_upper_bound(&arc_sums.arcstat_size) < arc_c && |
37fb3e43 | 5328 | hdr->b_l1hdr.b_state == arc_anon && |
424fd7c3 TS |
5329 | (zfs_refcount_count(&arc_anon->arcs_size) + |
5330 | zfs_refcount_count(&arc_mru->arcs_size) > arc_p)) | |
34dc7c2f BB |
5331 | arc_p = MIN(arc_c, arc_p + size); |
5332 | } | |
a6255b7f DQ |
5333 | } |
5334 | ||
5335 | static void | |
5336 | arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag) | |
5337 | { | |
5338 | arc_free_data_impl(hdr, size, tag); | |
5339 | abd_free(abd); | |
5340 | } | |
5341 | ||
5342 | static void | |
5343 | arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag) | |
5344 | { | |
5345 | arc_buf_contents_t type = arc_buf_type(hdr); | |
5346 | ||
5347 | arc_free_data_impl(hdr, size, tag); | |
5348 | if (type == ARC_BUFC_METADATA) { | |
5349 | zio_buf_free(buf, size); | |
5350 | } else { | |
5351 | ASSERT(type == ARC_BUFC_DATA); | |
5352 | zio_data_buf_free(buf, size); | |
5353 | } | |
d3c2ae1c GW |
5354 | } |
5355 | ||
5356 | /* | |
5357 | * Free the arc data buffer. | |
5358 | */ | |
5359 | static void | |
a6255b7f | 5360 | arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag) |
d3c2ae1c GW |
5361 | { |
5362 | arc_state_t *state = hdr->b_l1hdr.b_state; | |
5363 | arc_buf_contents_t type = arc_buf_type(hdr); | |
5364 | ||
5365 | /* protected by hash lock, if in the hash table */ | |
5366 | if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { | |
424fd7c3 | 5367 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
d3c2ae1c GW |
5368 | ASSERT(state != arc_anon && state != arc_l2c_only); |
5369 | ||
424fd7c3 | 5370 | (void) zfs_refcount_remove_many(&state->arcs_esize[type], |
d3c2ae1c GW |
5371 | size, tag); |
5372 | } | |
424fd7c3 | 5373 | (void) zfs_refcount_remove_many(&state->arcs_size, size, tag); |
d3c2ae1c GW |
5374 | |
5375 | VERIFY3U(hdr->b_type, ==, type); | |
5376 | if (type == ARC_BUFC_METADATA) { | |
d3c2ae1c GW |
5377 | arc_space_return(size, ARC_SPACE_META); |
5378 | } else { | |
5379 | ASSERT(type == ARC_BUFC_DATA); | |
d3c2ae1c GW |
5380 | arc_space_return(size, ARC_SPACE_DATA); |
5381 | } | |
34dc7c2f BB |
5382 | } |
5383 | ||
5384 | /* | |
5385 | * This routine is called whenever a buffer is accessed. | |
5386 | * NOTE: the hash lock is dropped in this function. | |
5387 | */ | |
5388 | static void | |
2a432414 | 5389 | arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock) |
34dc7c2f | 5390 | { |
428870ff BB |
5391 | clock_t now; |
5392 | ||
34dc7c2f | 5393 | ASSERT(MUTEX_HELD(hash_lock)); |
b9541d6b | 5394 | ASSERT(HDR_HAS_L1HDR(hdr)); |
34dc7c2f | 5395 | |
b9541d6b | 5396 | if (hdr->b_l1hdr.b_state == arc_anon) { |
34dc7c2f BB |
5397 | /* |
5398 | * This buffer is not in the cache, and does not | |
5399 | * appear in our "ghost" list. Add the new buffer | |
5400 | * to the MRU state. | |
5401 | */ | |
5402 | ||
b9541d6b CW |
5403 | ASSERT0(hdr->b_l1hdr.b_arc_access); |
5404 | hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); | |
2a432414 GW |
5405 | DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); |
5406 | arc_change_state(arc_mru, hdr, hash_lock); | |
34dc7c2f | 5407 | |
b9541d6b | 5408 | } else if (hdr->b_l1hdr.b_state == arc_mru) { |
428870ff BB |
5409 | now = ddi_get_lbolt(); |
5410 | ||
34dc7c2f BB |
5411 | /* |
5412 | * If this buffer is here because of a prefetch, then either: | |
5413 | * - clear the flag if this is a "referencing" read | |
5414 | * (any subsequent access will bump this into the MFU state). | |
5415 | * or | |
5416 | * - move the buffer to the head of the list if this is | |
5417 | * another prefetch (to make it less likely to be evicted). | |
5418 | */ | |
d4a72f23 | 5419 | if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { |
424fd7c3 | 5420 | if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) { |
ca0bf58d PS |
5421 | /* link protected by hash lock */ |
5422 | ASSERT(multilist_link_active( | |
b9541d6b | 5423 | &hdr->b_l1hdr.b_arc_node)); |
34dc7c2f | 5424 | } else { |
08532162 GA |
5425 | if (HDR_HAS_L2HDR(hdr)) |
5426 | l2arc_hdr_arcstats_decrement_state(hdr); | |
d4a72f23 TC |
5427 | arc_hdr_clear_flags(hdr, |
5428 | ARC_FLAG_PREFETCH | | |
5429 | ARC_FLAG_PRESCIENT_PREFETCH); | |
b9541d6b | 5430 | atomic_inc_32(&hdr->b_l1hdr.b_mru_hits); |
34dc7c2f | 5431 | ARCSTAT_BUMP(arcstat_mru_hits); |
08532162 GA |
5432 | if (HDR_HAS_L2HDR(hdr)) |
5433 | l2arc_hdr_arcstats_increment_state(hdr); | |
34dc7c2f | 5434 | } |
b9541d6b | 5435 | hdr->b_l1hdr.b_arc_access = now; |
34dc7c2f BB |
5436 | return; |
5437 | } | |
5438 | ||
5439 | /* | |
5440 | * This buffer has been "accessed" only once so far, | |
5441 | * but it is still in the cache. Move it to the MFU | |
5442 | * state. | |
5443 | */ | |
b9541d6b CW |
5444 | if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + |
5445 | ARC_MINTIME)) { | |
34dc7c2f BB |
5446 | /* |
5447 | * More than 125ms have passed since we | |
5448 | * instantiated this buffer. Move it to the | |
5449 | * most frequently used state. | |
5450 | */ | |
b9541d6b | 5451 | hdr->b_l1hdr.b_arc_access = now; |
2a432414 GW |
5452 | DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); |
5453 | arc_change_state(arc_mfu, hdr, hash_lock); | |
34dc7c2f | 5454 | } |
b9541d6b | 5455 | atomic_inc_32(&hdr->b_l1hdr.b_mru_hits); |
34dc7c2f | 5456 | ARCSTAT_BUMP(arcstat_mru_hits); |
b9541d6b | 5457 | } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { |
34dc7c2f BB |
5458 | arc_state_t *new_state; |
5459 | /* | |
5460 | * This buffer has been "accessed" recently, but | |
5461 | * was evicted from the cache. Move it to the | |
5462 | * MFU state. | |
5463 | */ | |
d4a72f23 | 5464 | if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { |
34dc7c2f | 5465 | new_state = arc_mru; |
424fd7c3 | 5466 | if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) > 0) { |
08532162 GA |
5467 | if (HDR_HAS_L2HDR(hdr)) |
5468 | l2arc_hdr_arcstats_decrement_state(hdr); | |
d4a72f23 TC |
5469 | arc_hdr_clear_flags(hdr, |
5470 | ARC_FLAG_PREFETCH | | |
5471 | ARC_FLAG_PRESCIENT_PREFETCH); | |
08532162 GA |
5472 | if (HDR_HAS_L2HDR(hdr)) |
5473 | l2arc_hdr_arcstats_increment_state(hdr); | |
d4a72f23 | 5474 | } |
2a432414 | 5475 | DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); |
34dc7c2f BB |
5476 | } else { |
5477 | new_state = arc_mfu; | |
2a432414 | 5478 | DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); |
34dc7c2f BB |
5479 | } |
5480 | ||
b9541d6b | 5481 | hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); |
2a432414 | 5482 | arc_change_state(new_state, hdr, hash_lock); |
34dc7c2f | 5483 | |
b9541d6b | 5484 | atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits); |
34dc7c2f | 5485 | ARCSTAT_BUMP(arcstat_mru_ghost_hits); |
b9541d6b | 5486 | } else if (hdr->b_l1hdr.b_state == arc_mfu) { |
34dc7c2f BB |
5487 | /* |
5488 | * This buffer has been accessed more than once and is | |
5489 | * still in the cache. Keep it in the MFU state. | |
5490 | * | |
5491 | * NOTE: an add_reference() that occurred when we did | |
5492 | * the arc_read() will have kicked this off the list. | |
5493 | * If it was a prefetch, we will explicitly move it to | |
5494 | * the head of the list now. | |
5495 | */ | |
d4a72f23 | 5496 | |
b9541d6b | 5497 | atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits); |
34dc7c2f | 5498 | ARCSTAT_BUMP(arcstat_mfu_hits); |
b9541d6b CW |
5499 | hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); |
5500 | } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { | |
34dc7c2f BB |
5501 | arc_state_t *new_state = arc_mfu; |
5502 | /* | |
5503 | * This buffer has been accessed more than once but has | |
5504 | * been evicted from the cache. Move it back to the | |
5505 | * MFU state. | |
5506 | */ | |
5507 | ||
d4a72f23 | 5508 | if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { |
34dc7c2f BB |
5509 | /* |
5510 | * This is a prefetch access... | |
5511 | * move this block back to the MRU state. | |
5512 | */ | |
34dc7c2f BB |
5513 | new_state = arc_mru; |
5514 | } | |
5515 | ||
b9541d6b | 5516 | hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); |
2a432414 GW |
5517 | DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); |
5518 | arc_change_state(new_state, hdr, hash_lock); | |
34dc7c2f | 5519 | |
b9541d6b | 5520 | atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits); |
34dc7c2f | 5521 | ARCSTAT_BUMP(arcstat_mfu_ghost_hits); |
b9541d6b | 5522 | } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { |
34dc7c2f BB |
5523 | /* |
5524 | * This buffer is on the 2nd Level ARC. | |
5525 | */ | |
5526 | ||
b9541d6b | 5527 | hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); |
2a432414 GW |
5528 | DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); |
5529 | arc_change_state(arc_mfu, hdr, hash_lock); | |
34dc7c2f | 5530 | } else { |
b9541d6b CW |
5531 | cmn_err(CE_PANIC, "invalid arc state 0x%p", |
5532 | hdr->b_l1hdr.b_state); | |
34dc7c2f BB |
5533 | } |
5534 | } | |
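/*
 * Summary of the state transitions implemented by arc_access() above:
 *
 *	anon      -> mru	first access
 *	mru       -> mfu	re-accessed more than ARC_MINTIME after insert
 *	mru_ghost -> mfu	(or mru if the access is a prefetch)
 *	mfu       -> mfu	stays; the access time is refreshed
 *	mfu_ghost -> mfu	(or mru if the access is a prefetch)
 *	l2c_only  -> mfu	buffer read back in from the L2ARC
 */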
5535 | ||
0873bb63 BB |
5536 | /* |
5537 | * This routine is called by dbuf_hold() to update the arc_access() state | |
5538 | * which otherwise would be skipped for entries in the dbuf cache. | |
5539 | */ | |
5540 | void | |
5541 | arc_buf_access(arc_buf_t *buf) | |
5542 | { | |
5543 | mutex_enter(&buf->b_evict_lock); | |
5544 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
5545 | ||
5546 | /* | |
5547 | * Avoid taking the hash_lock when possible as an optimization. | |
5548 | * The header must be checked again under the hash_lock in order | |
5549 | * to handle the case where it is concurrently being released. | |
5550 | */ | |
5551 | if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { | |
5552 | mutex_exit(&buf->b_evict_lock); | |
5553 | return; | |
5554 | } | |
5555 | ||
5556 | kmutex_t *hash_lock = HDR_LOCK(hdr); | |
5557 | mutex_enter(hash_lock); | |
5558 | ||
5559 | if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { | |
5560 | mutex_exit(hash_lock); | |
5561 | mutex_exit(&buf->b_evict_lock); | |
5562 | ARCSTAT_BUMP(arcstat_access_skip); | |
5563 | return; | |
5564 | } | |
5565 | ||
5566 | mutex_exit(&buf->b_evict_lock); | |
5567 | ||
5568 | ASSERT(hdr->b_l1hdr.b_state == arc_mru || | |
5569 | hdr->b_l1hdr.b_state == arc_mfu); | |
5570 | ||
5571 | DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); | |
5572 | arc_access(hdr, hash_lock); | |
5573 | mutex_exit(hash_lock); | |
5574 | ||
5575 | ARCSTAT_BUMP(arcstat_hits); | |
5576 | ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr) && !HDR_PRESCIENT_PREFETCH(hdr), | |
5577 | demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); | |
5578 | } | |
5579 | ||
b5256303 | 5580 | /* a generic arc_read_done_func_t which you can use */ |
34dc7c2f BB |
5581 | /* ARGSUSED */ |
5582 | void | |
d4a72f23 TC |
5583 | arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, |
5584 | arc_buf_t *buf, void *arg) | |
34dc7c2f | 5585 | { |
d4a72f23 TC |
5586 | if (buf == NULL) |
5587 | return; | |
5588 | ||
5589 | bcopy(buf->b_data, arg, arc_buf_size(buf)); | |
d3c2ae1c | 5590 | arc_buf_destroy(buf, arg); |
34dc7c2f BB |
5591 | } |
5592 | ||
b5256303 | 5593 | /* a generic arc_read_done_func_t */ |
d4a72f23 | 5594 | /* ARGSUSED */ |
34dc7c2f | 5595 | void |
d4a72f23 TC |
5596 | arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, |
5597 | arc_buf_t *buf, void *arg) | |
34dc7c2f BB |
5598 | { |
5599 | arc_buf_t **bufp = arg; | |
d4a72f23 TC |
5600 | |
5601 | if (buf == NULL) { | |
c3bd3fb4 | 5602 | ASSERT(zio == NULL || zio->io_error != 0); |
34dc7c2f BB |
5603 | *bufp = NULL; |
5604 | } else { | |
c3bd3fb4 | 5605 | ASSERT(zio == NULL || zio->io_error == 0); |
34dc7c2f | 5606 | *bufp = buf; |
c3bd3fb4 | 5607 | ASSERT(buf->b_data != NULL); |
34dc7c2f BB |
5608 | } |
5609 | } | |
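/*
 * Minimal usage sketch (not part of the original source): a blocking read
 * through the ARC using arc_getbuf_func() above, in the style of callers
 * elsewhere in the tree. The spa, bp, and zb values are assumed to come
 * from the caller's context; arc_read_example() itself is hypothetical.
 */
static int
arc_read_example(spa_t *spa, const blkptr_t *bp, const zbookmark_phys_t *zb)
{
	arc_buf_t *abuf = NULL;
	arc_flags_t aflags = ARC_FLAG_WAIT;
	int err;

	/* With ARC_FLAG_WAIT, arc_read() returns once the data is cached. */
	err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
	if (err != 0)
		return (err);

	/* ... consume abuf->b_data, arc_buf_size(abuf) bytes ... */

	arc_buf_destroy(abuf, &abuf);
	return (0);
}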
5610 | ||
d3c2ae1c GW |
5611 | static void |
5612 | arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp) | |
5613 | { | |
5614 | if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { | |
5615 | ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); | |
b5256303 | 5616 | ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); |
d3c2ae1c GW |
5617 | } else { |
5618 | if (HDR_COMPRESSION_ENABLED(hdr)) { | |
b5256303 | 5619 | ASSERT3U(arc_hdr_get_compress(hdr), ==, |
d3c2ae1c GW |
5620 | BP_GET_COMPRESS(bp)); |
5621 | } | |
5622 | ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); | |
5623 | ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); | |
b5256303 | 5624 | ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); |
d3c2ae1c GW |
5625 | } |
5626 | } | |
5627 | ||
34dc7c2f BB |
5628 | static void |
5629 | arc_read_done(zio_t *zio) | |
5630 | { | |
b5256303 | 5631 | blkptr_t *bp = zio->io_bp; |
d3c2ae1c | 5632 | arc_buf_hdr_t *hdr = zio->io_private; |
9b67f605 | 5633 | kmutex_t *hash_lock = NULL; |
524b4217 DK |
5634 | arc_callback_t *callback_list; |
5635 | arc_callback_t *acb; | |
2aa34383 | 5636 | boolean_t freeable = B_FALSE; |
a7004725 | 5637 | |
34dc7c2f BB |
5638 | /* |
5639 | * The hdr was inserted into the hash table and removed from lists |
5640 | * prior to starting I/O. We should find this header, since |
5641 | * it's in the hash table, and it should be legit since it's |
5642 | * not possible to evict it during the I/O. The only possible |
5643 | * reason for it not to be found is if the buffer was freed |
5644 | * during the read. |
5645 | */ | |
9b67f605 | 5646 | if (HDR_IN_HASH_TABLE(hdr)) { |
31df97cd DB |
5647 | arc_buf_hdr_t *found; |
5648 | ||
9b67f605 MA |
5649 | ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); |
5650 | ASSERT3U(hdr->b_dva.dva_word[0], ==, | |
5651 | BP_IDENTITY(zio->io_bp)->dva_word[0]); | |
5652 | ASSERT3U(hdr->b_dva.dva_word[1], ==, | |
5653 | BP_IDENTITY(zio->io_bp)->dva_word[1]); | |
5654 | ||
31df97cd | 5655 | found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock); |
9b67f605 | 5656 | |
d3c2ae1c | 5657 | ASSERT((found == hdr && |
9b67f605 MA |
5658 | DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || |
5659 | (found == hdr && HDR_L2_READING(hdr))); | |
d3c2ae1c GW |
5660 | ASSERT3P(hash_lock, !=, NULL); |
5661 | } | |
5662 | ||
b5256303 TC |
5663 | if (BP_IS_PROTECTED(bp)) { |
5664 | hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); | |
5665 | hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; | |
5666 | zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, | |
5667 | hdr->b_crypt_hdr.b_iv); | |
5668 | ||
5669 | if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { | |
5670 | void *tmpbuf; | |
5671 | ||
5672 | tmpbuf = abd_borrow_buf_copy(zio->io_abd, | |
5673 | sizeof (zil_chain_t)); | |
5674 | zio_crypt_decode_mac_zil(tmpbuf, | |
5675 | hdr->b_crypt_hdr.b_mac); | |
5676 | abd_return_buf(zio->io_abd, tmpbuf, | |
5677 | sizeof (zil_chain_t)); | |
5678 | } else { | |
5679 | zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); | |
5680 | } | |
5681 | } | |
5682 | ||
d4a72f23 | 5683 | if (zio->io_error == 0) { |
d3c2ae1c GW |
5684 | /* byteswap if necessary */ |
5685 | if (BP_SHOULD_BYTESWAP(zio->io_bp)) { | |
5686 | if (BP_GET_LEVEL(zio->io_bp) > 0) { | |
5687 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; | |
5688 | } else { | |
5689 | hdr->b_l1hdr.b_byteswap = | |
5690 | DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); | |
5691 | } | |
5692 | } else { | |
5693 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; | |
5694 | } | |
10b3c7f5 MN |
5695 | if (!HDR_L2_READING(hdr)) { |
5696 | hdr->b_complevel = zio->io_prop.zp_complevel; | |
5697 | } | |
9b67f605 | 5698 | } |
34dc7c2f | 5699 | |
d3c2ae1c | 5700 | arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); |
c6f5e9d9 GA |
5701 | if (l2arc_noprefetch && HDR_PREFETCH(hdr)) |
5702 | arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); | |
34dc7c2f | 5703 | |
b9541d6b | 5704 | callback_list = hdr->b_l1hdr.b_acb; |
d3c2ae1c | 5705 | ASSERT3P(callback_list, !=, NULL); |
34dc7c2f | 5706 | |
d4a72f23 TC |
5707 | if (hash_lock && zio->io_error == 0 && |
5708 | hdr->b_l1hdr.b_state == arc_anon) { | |
428870ff BB |
5709 | /* |
5710 | * Only call arc_access on anonymous buffers. This is because | |
5711 | * if we've issued an I/O for an evicted buffer, we've already | |
5712 | * called arc_access (to prevent any simultaneous readers from | |
5713 | * getting confused). | |
5714 | */ | |
5715 | arc_access(hdr, hash_lock); | |
5716 | } | |
5717 | ||
524b4217 DK |
5718 | /* |
5719 | * If a read request has a callback (i.e. acb_done is not NULL), then we | |
5720 | * make a buf containing the data according to the parameters which were | |
5721 | * passed in. The implementation of arc_buf_alloc_impl() ensures that we | |
5722 | * aren't needlessly decompressing the data multiple times. | |
5723 | */ | |
a7004725 | 5724 | int callback_cnt = 0; |
2aa34383 | 5725 | for (acb = callback_list; acb != NULL; acb = acb->acb_next) { |
923d7303 | 5726 | if (!acb->acb_done || acb->acb_nobuf) |
2aa34383 DK |
5727 | continue; |
5728 | ||
2aa34383 | 5729 | callback_cnt++; |
524b4217 | 5730 | |
d4a72f23 TC |
5731 | if (zio->io_error != 0) |
5732 | continue; | |
5733 | ||
b5256303 | 5734 | int error = arc_buf_alloc_impl(hdr, zio->io_spa, |
be9a5c35 | 5735 | &acb->acb_zb, acb->acb_private, acb->acb_encrypted, |
d4a72f23 | 5736 | acb->acb_compressed, acb->acb_noauth, B_TRUE, |
440a3eb9 | 5737 | &acb->acb_buf); |
b5256303 TC |
5738 | |
5739 | /* | |
440a3eb9 | 5740 | * Assert non-speculative zios didn't fail because an |
b5256303 TC |
5741 | * encryption key wasn't loaded |
5742 | */ | |
a2c2ed1b | 5743 | ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || |
be9a5c35 | 5744 | error != EACCES); |
b5256303 TC |
5745 | |
5746 | /* | |
5747 | * If we failed to decrypt, report an error now (as the zio | |
5748 | * layer would have done if it had done the transforms). | |
5749 | */ | |
5750 | if (error == ECKSUM) { | |
5751 | ASSERT(BP_IS_PROTECTED(bp)); | |
5752 | error = SET_ERROR(EIO); | |
b5256303 | 5753 | if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { |
be9a5c35 | 5754 | spa_log_error(zio->io_spa, &acb->acb_zb); |
1144586b TS |
5755 | (void) zfs_ereport_post( |
5756 | FM_EREPORT_ZFS_AUTHENTICATION, | |
4f072827 | 5757 | zio->io_spa, NULL, &acb->acb_zb, zio, 0); |
b5256303 TC |
5758 | } |
5759 | } | |
5760 | ||
c3bd3fb4 TC |
5761 | if (error != 0) { |
5762 | /* | |
5763 | * Decompression or decryption failed. Set | |
5764 | * io_error so that when we call acb_done | |
5765 | * (below), we will indicate that the read | |
5766 | * failed. Note that in the unusual case | |
5767 | * where one callback is compressed and another | |
5768 | * uncompressed, we will mark all of them | |
5769 | * as failed, even though the uncompressed | |
5770 | * one can't actually fail. In this case, | |
5771 | * the hdr will not be anonymous, because | |
5772 | * if there are multiple callbacks, it's | |
5773 | * because multiple threads found the same | |
5774 | * arc buf in the hash table. | |
5775 | */ | |
524b4217 | 5776 | zio->io_error = error; |
c3bd3fb4 | 5777 | } |
34dc7c2f | 5778 | } |
c3bd3fb4 TC |
5779 | |
5780 | /* | |
5781 | * If there are multiple callbacks, we must have the hash lock, | |
5782 | * because the only way for multiple threads to find this hdr is | |
5783 | * in the hash table. This ensures that if there are multiple | |
5784 | * callbacks, the hdr is not anonymous. If it were anonymous, | |
5785 | * we couldn't use arc_buf_destroy() in the error case below. | |
5786 | */ | |
5787 | ASSERT(callback_cnt < 2 || hash_lock != NULL); | |
5788 | ||
b9541d6b | 5789 | hdr->b_l1hdr.b_acb = NULL; |
d3c2ae1c | 5790 | arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); |
440a3eb9 | 5791 | if (callback_cnt == 0) |
b5256303 | 5792 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); |
34dc7c2f | 5793 | |
424fd7c3 | 5794 | ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt) || |
b9541d6b | 5795 | callback_list != NULL); |
34dc7c2f | 5796 | |
d4a72f23 | 5797 | if (zio->io_error == 0) { |
d3c2ae1c GW |
5798 | arc_hdr_verify(hdr, zio->io_bp); |
5799 | } else { | |
5800 | arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); | |
b9541d6b | 5801 | if (hdr->b_l1hdr.b_state != arc_anon) |
34dc7c2f BB |
5802 | arc_change_state(arc_anon, hdr, hash_lock); |
5803 | if (HDR_IN_HASH_TABLE(hdr)) | |
5804 | buf_hash_remove(hdr); | |
424fd7c3 | 5805 | freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); |
34dc7c2f BB |
5806 | } |
5807 | ||
5808 | /* | |
5809 | * Broadcast before we drop the hash_lock to avoid the possibility | |
5810 | * that the hdr (and hence the cv) might be freed before we get to | |
5811 | * the cv_broadcast(). | |
5812 | */ | |
b9541d6b | 5813 | cv_broadcast(&hdr->b_l1hdr.b_cv); |
34dc7c2f | 5814 | |
b9541d6b | 5815 | if (hash_lock != NULL) { |
34dc7c2f BB |
5816 | mutex_exit(hash_lock); |
5817 | } else { | |
5818 | /* | |
5819 | * This block was freed while we waited for the read to | |
5820 | * complete. It has been removed from the hash table and | |
5821 | * moved to the anonymous state (so that it won't show up | |
5822 | * in the cache). | |
5823 | */ | |
b9541d6b | 5824 | ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); |
424fd7c3 | 5825 | freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); |
34dc7c2f BB |
5826 | } |
5827 | ||
5828 | /* execute each callback and free its structure */ | |
5829 | while ((acb = callback_list) != NULL) { | |
c3bd3fb4 TC |
5830 | if (acb->acb_done != NULL) { |
5831 | if (zio->io_error != 0 && acb->acb_buf != NULL) { | |
5832 | /* | |
5833 | * If arc_buf_alloc_impl() fails during | |
5834 | * decompression, the buf will still be | |
5835 | * allocated, and needs to be freed here. | |
5836 | */ | |
5837 | arc_buf_destroy(acb->acb_buf, | |
5838 | acb->acb_private); | |
5839 | acb->acb_buf = NULL; | |
5840 | } | |
d4a72f23 TC |
5841 | acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, |
5842 | acb->acb_buf, acb->acb_private); | |
b5256303 | 5843 | } |
34dc7c2f BB |
5844 | |
5845 | if (acb->acb_zio_dummy != NULL) { | |
5846 | acb->acb_zio_dummy->io_error = zio->io_error; | |
5847 | zio_nowait(acb->acb_zio_dummy); | |
5848 | } | |
5849 | ||
5850 | callback_list = acb->acb_next; | |
5851 | kmem_free(acb, sizeof (arc_callback_t)); | |
5852 | } | |
5853 | ||
5854 | if (freeable) | |
5855 | arc_hdr_destroy(hdr); | |
5856 | } | |
5857 | ||
5858 | /* | |
5c839890 | 5859 | * "Read" the block at the specified DVA (in bp) via the |
34dc7c2f BB |
5860 | * cache. If the block is found in the cache, invoke the provided |
5861 | * callback immediately and return. Note that the `zio' parameter | |
5862 | * in the callback will be NULL in this case, since no IO was | |
5863 | * required. If the block is not in the cache pass the read request | |
5864 | * on to the spa with a substitute callback function, so that the | |
5865 | * requested block will be added to the cache. | |
5866 | * | |
5867 | * If a read request arrives for a block that has a read in progress, |
5868 | * either wait for the in-progress read to complete (and return the | |
5869 | * results); or, if this is a read with a "done" func, add a record | |
5870 | * to the read to invoke the "done" func when the read completes, | |
5871 | * and return; or just return. | |
5872 | * | |
5873 | * arc_read_done() will invoke all the requested "done" functions | |
5874 | * for readers of this block. | |
5875 | */ | |
5876 | int | |
b5256303 TC |
5877 | arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, |
5878 | arc_read_done_func_t *done, void *private, zio_priority_t priority, | |
5879 | int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb) | |
34dc7c2f | 5880 | { |
9b67f605 | 5881 | arc_buf_hdr_t *hdr = NULL; |
9b67f605 | 5882 | kmutex_t *hash_lock = NULL; |
34dc7c2f | 5883 | zio_t *rzio; |
3541dc6d | 5884 | uint64_t guid = spa_load_guid(spa); |
b5256303 TC |
5885 | boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; |
5886 | boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && | |
5887 | (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; | |
5888 | boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && | |
5889 | (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; | |
0902c457 | 5890 | boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp); |
1e4732cb | 5891 | boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF; |
1421c891 | 5892 | int rc = 0; |
34dc7c2f | 5893 | |
0902c457 | 5894 | ASSERT(!embedded_bp || |
9b67f605 | 5895 | BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); |
30af21b0 PD |
5896 | ASSERT(!BP_IS_HOLE(bp)); |
5897 | ASSERT(!BP_IS_REDACTED(bp)); | |
9b67f605 | 5898 | |
1e9231ad MR |
5899 | /* |
5900 | * Normally SPL_FSTRANS will already be set since kernel threads which | |
5901 | * expect to call the DMU interfaces will set it when created. System | |
5902 | * calls are similarly handled by setting/clearing the bit in the |
5903 | * registered callback (module/os/.../zfs/zpl_*). | |
5904 | * | |
5905 | * External consumers such as Lustre which call the exported DMU | |
5906 | * interfaces may not have set SPL_FSTRANS. To avoid a deadlock | |
5907 | * on the hash_lock, always set and clear the bit. |
5908 | */ | |
5909 | fstrans_cookie_t cookie = spl_fstrans_mark(); | |
34dc7c2f | 5910 | top: |
0902c457 | 5911 | if (!embedded_bp) { |
9b67f605 MA |
5912 | /* |
5913 | * Only regular BP's are looked up here: embedded BP's have no |
5914 | * DVA, require no I/O to "read", and get an anonymous arc buf. |
5915 | */ | |
9ffcaa37 GA |
5916 | if (!zfs_blkptr_verify(spa, bp, zio_flags & |
5917 | ZIO_FLAG_CONFIG_WRITER, BLK_VERIFY_LOG)) { | |
5918 | rc = SET_ERROR(ECKSUM); | |
5919 | goto out; | |
5920 | } | |
5921 | ||
9b67f605 MA |
5922 | hdr = buf_hash_find(guid, bp, &hash_lock); |
5923 | } | |
5924 | ||
b5256303 TC |
5925 | /* |
5926 | * Determine if we have an L1 cache hit or a cache miss. For simplicity | |
e1cfd73f | 5927 | * we maintain encrypted data separately from compressed / uncompressed |
b5256303 TC |
5928 | * data. If the user is requesting raw encrypted data and we don't have |
5929 | * that in the header we will read from disk to guarantee that we can | |
5930 | * get it even if the encryption keys aren't loaded. | |
5931 | */ | |
5932 | if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || | |
5933 | (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { | |
d3c2ae1c | 5934 | arc_buf_t *buf = NULL; |
2a432414 | 5935 | *arc_flags |= ARC_FLAG_CACHED; |
34dc7c2f BB |
5936 | |
5937 | if (HDR_IO_IN_PROGRESS(hdr)) { | |
a8b2e306 | 5938 | zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; |
34dc7c2f | 5939 | |
1dc32a67 MA |
5940 | if (*arc_flags & ARC_FLAG_CACHED_ONLY) { |
5941 | mutex_exit(hash_lock); | |
5942 | ARCSTAT_BUMP(arcstat_cached_only_in_progress); | |
5943 | rc = SET_ERROR(ENOENT); | |
5944 | goto out; | |
5945 | } | |
5946 | ||
a8b2e306 | 5947 | ASSERT3P(head_zio, !=, NULL); |
7f60329a MA |
5948 | if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && |
5949 | priority == ZIO_PRIORITY_SYNC_READ) { | |
5950 | /* | |
a8b2e306 TC |
5951 | * This is a sync read that needs to wait for |
5952 | * an in-flight async read. Request that the | |
5953 | * zio have its priority upgraded. | |
7f60329a | 5954 | */ |
a8b2e306 TC |
5955 | zio_change_priority(head_zio, priority); |
5956 | DTRACE_PROBE1(arc__async__upgrade__sync, | |
7f60329a | 5957 | arc_buf_hdr_t *, hdr); |
a8b2e306 | 5958 | ARCSTAT_BUMP(arcstat_async_upgrade_sync); |
7f60329a MA |
5959 | } |
5960 | if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { | |
d3c2ae1c GW |
5961 | arc_hdr_clear_flags(hdr, |
5962 | ARC_FLAG_PREDICTIVE_PREFETCH); | |
7f60329a MA |
5963 | } |
5964 | ||
2a432414 | 5965 | if (*arc_flags & ARC_FLAG_WAIT) { |
b9541d6b | 5966 | cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); |
34dc7c2f BB |
5967 | mutex_exit(hash_lock); |
5968 | goto top; | |
5969 | } | |
2a432414 | 5970 | ASSERT(*arc_flags & ARC_FLAG_NOWAIT); |
34dc7c2f | 5971 | |
923d7303 | 5972 | if (done) { |
7f60329a | 5973 | arc_callback_t *acb = NULL; |
34dc7c2f BB |
5974 | |
5975 | acb = kmem_zalloc(sizeof (arc_callback_t), | |
79c76d5b | 5976 | KM_SLEEP); |
34dc7c2f BB |
5977 | acb->acb_done = done; |
5978 | acb->acb_private = private; | |
a7004725 | 5979 | acb->acb_compressed = compressed_read; |
440a3eb9 TC |
5980 | acb->acb_encrypted = encrypted_read; |
5981 | acb->acb_noauth = noauth_read; | |
923d7303 | 5982 | acb->acb_nobuf = no_buf; |
be9a5c35 | 5983 | acb->acb_zb = *zb; |
34dc7c2f BB |
5984 | if (pio != NULL) |
5985 | acb->acb_zio_dummy = zio_null(pio, | |
d164b209 | 5986 | spa, NULL, NULL, NULL, zio_flags); |
34dc7c2f | 5987 | |
d3c2ae1c | 5988 | ASSERT3P(acb->acb_done, !=, NULL); |
a8b2e306 | 5989 | acb->acb_zio_head = head_zio; |
b9541d6b CW |
5990 | acb->acb_next = hdr->b_l1hdr.b_acb; |
5991 | hdr->b_l1hdr.b_acb = acb; | |
34dc7c2f BB |
5992 | } |
5993 | mutex_exit(hash_lock); | |
1421c891 | 5994 | goto out; |
34dc7c2f BB |
5995 | } |
5996 | ||
b9541d6b CW |
5997 | ASSERT(hdr->b_l1hdr.b_state == arc_mru || |
5998 | hdr->b_l1hdr.b_state == arc_mfu); | |
34dc7c2f | 5999 | |
1e4732cb | 6000 | if (done && !no_buf) { |
7f60329a MA |
6001 | if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { |
6002 | /* | |
6003 | * This is a demand read which does not have to | |
6004 | * wait for i/o because we did a predictive | |
6005 | * prefetch i/o for it, which has completed. | |
6006 | */ | |
6007 | DTRACE_PROBE1( | |
6008 | arc__demand__hit__predictive__prefetch, | |
6009 | arc_buf_hdr_t *, hdr); | |
6010 | ARCSTAT_BUMP( | |
6011 | arcstat_demand_hit_predictive_prefetch); | |
d3c2ae1c GW |
6012 | arc_hdr_clear_flags(hdr, |
6013 | ARC_FLAG_PREDICTIVE_PREFETCH); | |
7f60329a | 6014 | } |
d4a72f23 TC |
6015 | |
6016 | if (hdr->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) { | |
6017 | ARCSTAT_BUMP( | |
6018 | arcstat_demand_hit_prescient_prefetch); | |
6019 | arc_hdr_clear_flags(hdr, | |
6020 | ARC_FLAG_PRESCIENT_PREFETCH); | |
6021 | } | |
6022 | ||
0902c457 | 6023 | ASSERT(!embedded_bp || !BP_IS_HOLE(bp)); |
d3c2ae1c | 6024 | |
524b4217 | 6025 | /* Get a buf with the desired data in it. */ |
be9a5c35 TC |
6026 | rc = arc_buf_alloc_impl(hdr, spa, zb, private, |
6027 | encrypted_read, compressed_read, noauth_read, | |
6028 | B_TRUE, &buf); | |
a2c2ed1b TC |
6029 | if (rc == ECKSUM) { |
6030 | /* | |
6031 | * Convert authentication and decryption errors | |
be9a5c35 TC |
6032 | * to EIO (and generate an ereport if needed) |
6033 | * before leaving the ARC. | |
a2c2ed1b TC |
6034 | */ |
6035 | rc = SET_ERROR(EIO); | |
be9a5c35 TC |
6036 | if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { |
6037 | spa_log_error(spa, zb); | |
1144586b | 6038 | (void) zfs_ereport_post( |
be9a5c35 | 6039 | FM_EREPORT_ZFS_AUTHENTICATION, |
4f072827 | 6040 | spa, NULL, zb, NULL, 0); |
be9a5c35 | 6041 | } |
a2c2ed1b | 6042 | } |
d4a72f23 | 6043 | if (rc != 0) { |
2c24b5b1 TC |
6044 | (void) remove_reference(hdr, hash_lock, |
6045 | private); | |
6046 | arc_buf_destroy_impl(buf); | |
d4a72f23 TC |
6047 | buf = NULL; |
6048 | } | |
6049 | ||
a2c2ed1b TC |
6050 | /* assert any errors weren't due to unloaded keys */ |
6051 | ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || | |
be9a5c35 | 6052 | rc != EACCES); |
2a432414 | 6053 | } else if (*arc_flags & ARC_FLAG_PREFETCH && |
08532162 GA |
6054 | zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { |
6055 | if (HDR_HAS_L2HDR(hdr)) | |
6056 | l2arc_hdr_arcstats_decrement_state(hdr); | |
d3c2ae1c | 6057 | arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); |
08532162 GA |
6058 | if (HDR_HAS_L2HDR(hdr)) |
6059 | l2arc_hdr_arcstats_increment_state(hdr); | |
34dc7c2f BB |
6060 | } |
6061 | DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); | |
6062 | arc_access(hdr, hash_lock); | |
d4a72f23 TC |
6063 | if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) |
6064 | arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); | |
2a432414 | 6065 | if (*arc_flags & ARC_FLAG_L2CACHE) |
d3c2ae1c | 6066 | arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); |
34dc7c2f BB |
6067 | mutex_exit(hash_lock); |
6068 | ARCSTAT_BUMP(arcstat_hits); | |
b9541d6b CW |
6069 | ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), |
6070 | demand, prefetch, !HDR_ISTYPE_METADATA(hdr), | |
34dc7c2f BB |
6071 | data, metadata, hits); |
6072 | ||
6073 | if (done) | |
d4a72f23 | 6074 | done(NULL, zb, bp, buf, private); |
34dc7c2f | 6075 | } else { |
d3c2ae1c GW |
6076 | uint64_t lsize = BP_GET_LSIZE(bp); |
6077 | uint64_t psize = BP_GET_PSIZE(bp); | |
9b67f605 | 6078 | arc_callback_t *acb; |
b128c09f | 6079 | vdev_t *vd = NULL; |
a117a6d6 | 6080 | uint64_t addr = 0; |
d164b209 | 6081 | boolean_t devw = B_FALSE; |
d3c2ae1c | 6082 | uint64_t size; |
440a3eb9 | 6083 | abd_t *hdr_abd; |
e111c802 | 6084 | int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; |
34dc7c2f | 6085 | |
1dc32a67 MA |
6086 | if (*arc_flags & ARC_FLAG_CACHED_ONLY) { |
6087 | rc = SET_ERROR(ENOENT); | |
6088 | if (hash_lock != NULL) | |
6089 | mutex_exit(hash_lock); | |
6090 | goto out; | |
6091 | } | |
6092 | ||
34dc7c2f | 6093 | if (hdr == NULL) { |
0902c457 TC |
6094 | /* |
6095 | * This block is not in the cache or it has | |
6096 | * embedded data. | |
6097 | */ | |
9b67f605 | 6098 | arc_buf_hdr_t *exists = NULL; |
34dc7c2f | 6099 | arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); |
d3c2ae1c | 6100 | hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, |
10b3c7f5 | 6101 | BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type, |
b5256303 | 6102 | encrypted_read); |
d3c2ae1c | 6103 | |
0902c457 | 6104 | if (!embedded_bp) { |
9b67f605 MA |
6105 | hdr->b_dva = *BP_IDENTITY(bp); |
6106 | hdr->b_birth = BP_PHYSICAL_BIRTH(bp); | |
9b67f605 MA |
6107 | exists = buf_hash_insert(hdr, &hash_lock); |
6108 | } | |
6109 | if (exists != NULL) { | |
34dc7c2f BB |
6110 | /* somebody beat us to the hash insert */ |
6111 | mutex_exit(hash_lock); | |
428870ff | 6112 | buf_discard_identity(hdr); |
d3c2ae1c | 6113 | arc_hdr_destroy(hdr); |
34dc7c2f BB |
6114 | goto top; /* restart the IO request */ |
6115 | } | |
34dc7c2f | 6116 | } else { |
b9541d6b | 6117 | /* |
b5256303 TC |
6118 | * This block is in the ghost cache or encrypted data |
6119 | * was requested and we didn't have it. If it was | |
6120 | * L2-only (and thus didn't have an L1 hdr), | |
6121 | * we realloc the header to add an L1 hdr. | |
b9541d6b CW |
6122 | */ |
6123 | if (!HDR_HAS_L1HDR(hdr)) { | |
6124 | hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, | |
6125 | hdr_full_cache); | |
6126 | } | |
6127 | ||
b5256303 TC |
6128 | if (GHOST_STATE(hdr->b_l1hdr.b_state)) { |
6129 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); | |
6130 | ASSERT(!HDR_HAS_RABD(hdr)); | |
6131 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); | |
424fd7c3 TS |
6132 | ASSERT0(zfs_refcount_count( |
6133 | &hdr->b_l1hdr.b_refcnt)); | |
b5256303 TC |
6134 | ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); |
6135 | ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); | |
6136 | } else if (HDR_IO_IN_PROGRESS(hdr)) { | |
6137 | /* | |
6138 | * If this header already had an IO in progress | |
6139 | * and we are performing another IO to fetch | |
6140 | * encrypted data, we must wait until the first |
6141 | * IO completes so as not to confuse | |
6142 | * arc_read_done(). This should be very rare | |
6143 | * and so the performance impact shouldn't | |
6144 | * matter. | |
6145 | */ | |
6146 | cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); | |
6147 | mutex_exit(hash_lock); | |
6148 | goto top; | |
6149 | } | |
34dc7c2f | 6150 | |
7f60329a | 6151 | /* |
d3c2ae1c | 6152 | * This is a delicate dance that we play here. |
b5256303 TC |
6153 | * This hdr might be in the ghost list so we access |
6154 | * it to move it out of the ghost list before we | |
d3c2ae1c GW |
6155 | * initiate the read. If it's a prefetch then |
6156 | * it won't have a callback so we'll remove the | |
6157 | * reference that arc_buf_alloc_impl() created. We | |
6158 | * do this after we've called arc_access() to | |
6159 | * avoid hitting an assert in remove_reference(). | |
7f60329a | 6160 | */ |
e111c802 | 6161 | arc_adapt(arc_hdr_size(hdr), hdr->b_l1hdr.b_state); |
428870ff | 6162 | arc_access(hdr, hash_lock); |
e111c802 | 6163 | arc_hdr_alloc_abd(hdr, alloc_flags); |
d3c2ae1c | 6164 | } |
d3c2ae1c | 6165 | |
b5256303 TC |
6166 | if (encrypted_read) { |
6167 | ASSERT(HDR_HAS_RABD(hdr)); | |
6168 | size = HDR_GET_PSIZE(hdr); | |
6169 | hdr_abd = hdr->b_crypt_hdr.b_rabd; | |
d3c2ae1c | 6170 | zio_flags |= ZIO_FLAG_RAW; |
b5256303 TC |
6171 | } else { |
6172 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); | |
6173 | size = arc_hdr_size(hdr); | |
6174 | hdr_abd = hdr->b_l1hdr.b_pabd; | |
6175 | ||
6176 | if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { | |
6177 | zio_flags |= ZIO_FLAG_RAW_COMPRESS; | |
6178 | } | |
6179 | ||
6180 | /* | |
6181 | * For authenticated bp's, we do not ask the ZIO layer | |
6182 | * to authenticate them since this will cause the entire | |
6183 | * IO to fail if the key isn't loaded. Instead, we | |
6184 | * defer authentication until arc_buf_fill(), which will | |
6185 | * verify the data when the key is available. | |
6186 | */ | |
6187 | if (BP_IS_AUTHENTICATED(bp)) | |
6188 | zio_flags |= ZIO_FLAG_RAW_ENCRYPT; | |
34dc7c2f BB |
6189 | } |
6190 | ||
b5256303 | 6191 | if (*arc_flags & ARC_FLAG_PREFETCH && |
08532162 GA |
6192 | zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { |
6193 | if (HDR_HAS_L2HDR(hdr)) | |
6194 | l2arc_hdr_arcstats_decrement_state(hdr); | |
d3c2ae1c | 6195 | arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); |
08532162 GA |
6196 | if (HDR_HAS_L2HDR(hdr)) |
6197 | l2arc_hdr_arcstats_increment_state(hdr); | |
6198 | } | |
d4a72f23 TC |
6199 | if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) |
6200 | arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); | |
d3c2ae1c GW |
6201 | if (*arc_flags & ARC_FLAG_L2CACHE) |
6202 | arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); | |
b5256303 TC |
6203 | if (BP_IS_AUTHENTICATED(bp)) |
6204 | arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); | |
d3c2ae1c GW |
6205 | if (BP_GET_LEVEL(bp) > 0) |
6206 | arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); | |
7f60329a | 6207 | if (*arc_flags & ARC_FLAG_PREDICTIVE_PREFETCH) |
d3c2ae1c | 6208 | arc_hdr_set_flags(hdr, ARC_FLAG_PREDICTIVE_PREFETCH); |
b9541d6b | 6209 | ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); |
428870ff | 6210 | |
79c76d5b | 6211 | acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); |
34dc7c2f BB |
6212 | acb->acb_done = done; |
6213 | acb->acb_private = private; | |
2aa34383 | 6214 | acb->acb_compressed = compressed_read; |
b5256303 TC |
6215 | acb->acb_encrypted = encrypted_read; |
6216 | acb->acb_noauth = noauth_read; | |
be9a5c35 | 6217 | acb->acb_zb = *zb; |
34dc7c2f | 6218 | |
d3c2ae1c | 6219 | ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); |
b9541d6b | 6220 | hdr->b_l1hdr.b_acb = acb; |
d3c2ae1c | 6221 | arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); |
34dc7c2f | 6222 | |
b9541d6b CW |
6223 | if (HDR_HAS_L2HDR(hdr) && |
6224 | (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { | |
6225 | devw = hdr->b_l2hdr.b_dev->l2ad_writing; | |
6226 | addr = hdr->b_l2hdr.b_daddr; | |
b128c09f | 6227 | /* |
a1d477c2 | 6228 | * Lock out L2ARC device removal. |
b128c09f BB |
6229 | */ |
6230 | if (vdev_is_dead(vd) || | |
6231 | !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) | |
6232 | vd = NULL; | |
6233 | } | |
6234 | ||
a8b2e306 TC |
6235 | /* |
6236 | * We count both async reads and scrub IOs as asynchronous so | |
6237 | * that both can be upgraded in the event of a cache hit while | |
6238 | * the read IO is still in-flight. | |
6239 | */ | |
6240 | if (priority == ZIO_PRIORITY_ASYNC_READ || | |
6241 | priority == ZIO_PRIORITY_SCRUB) | |
d3c2ae1c GW |
6242 | arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); |
6243 | else | |
6244 | arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); | |
6245 | ||
e49f1e20 | 6246 | /* |
0902c457 TC |
6247 | * At this point, we have a level 1 cache miss or a blkptr |
6248 | * with embedded data. Try again in L2ARC if possible. | |
e49f1e20 | 6249 | */ |
d3c2ae1c GW |
6250 | ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); |
6251 | ||
0902c457 TC |
6252 | /* |
6253 | * Skip ARC stat bump for block pointers with embedded | |
6254 | * data. The data are read from the blkptr itself via | |
6255 | * decode_embedded_bp_compressed(). | |
6256 | */ | |
6257 | if (!embedded_bp) { | |
6258 | DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, | |
6259 | blkptr_t *, bp, uint64_t, lsize, | |
6260 | zbookmark_phys_t *, zb); | |
6261 | ARCSTAT_BUMP(arcstat_misses); | |
6262 | ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), | |
6263 | demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, | |
6264 | metadata, misses); | |
64e0fe14 | 6265 | zfs_racct_read(size, 1); |
0902c457 | 6266 | } |
34dc7c2f | 6267 | |
666aa69f AM |
6268 | /* Check if the spa even has l2 configured */ |
6269 | const boolean_t spa_has_l2 = l2arc_ndev != 0 && | |
6270 | spa->spa_l2cache.sav_count > 0; | |
6271 | ||
6272 | if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) { | |
34dc7c2f BB |
6273 | /* |
6274 | * Read from the L2ARC if the following are true: | |
b128c09f BB |
6275 | * 1. The L2ARC vdev was previously cached. |
6276 | * 2. This buffer still has L2ARC metadata. | |
6277 | * 3. This buffer isn't currently writing to the L2ARC. | |
6278 | * 4. The L2ARC entry wasn't evicted, which may | |
6279 | * also have invalidated the vdev. | |
08532162 | 6280 | * 5. This isn't a prefetch, or l2arc_noprefetch is 0. |
34dc7c2f | 6281 | */ |
b9541d6b | 6282 | if (HDR_HAS_L2HDR(hdr) && |
d164b209 BB |
6283 | !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) && |
6284 | !(l2arc_noprefetch && HDR_PREFETCH(hdr))) { | |
34dc7c2f | 6285 | l2arc_read_callback_t *cb; |
82710e99 GDN |
6286 | abd_t *abd; |
6287 | uint64_t asize; | |
34dc7c2f BB |
6288 | |
6289 | DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); | |
6290 | ARCSTAT_BUMP(arcstat_l2_hits); | |
b9541d6b | 6291 | atomic_inc_32(&hdr->b_l2hdr.b_hits); |
34dc7c2f | 6292 | |
34dc7c2f | 6293 | cb = kmem_zalloc(sizeof (l2arc_read_callback_t), |
79c76d5b | 6294 | KM_SLEEP); |
d3c2ae1c | 6295 | cb->l2rcb_hdr = hdr; |
34dc7c2f BB |
6296 | cb->l2rcb_bp = *bp; |
6297 | cb->l2rcb_zb = *zb; | |
b128c09f | 6298 | cb->l2rcb_flags = zio_flags; |
34dc7c2f | 6299 | |
fc34dfba AJ |
6300 | /* |
6301 | * When Compressed ARC is disabled, but the | |
6302 | * L2ARC block is compressed, arc_hdr_size() | |
6303 | * will have returned LSIZE rather than PSIZE. | |
6304 | */ | |
6305 | if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && | |
6306 | !HDR_COMPRESSION_ENABLED(hdr) && | |
6307 | HDR_GET_PSIZE(hdr) != 0) { | |
6308 | size = HDR_GET_PSIZE(hdr); | |
6309 | } | |
6310 | ||
82710e99 GDN |
6311 | asize = vdev_psize_to_asize(vd, size); |
6312 | if (asize != size) { | |
6313 | abd = abd_alloc_for_io(asize, | |
6314 | HDR_ISTYPE_METADATA(hdr)); | |
6315 | cb->l2rcb_abd = abd; | |
6316 | } else { | |
b5256303 | 6317 | abd = hdr_abd; |
82710e99 GDN |
6318 | } |
6319 | ||
a117a6d6 | 6320 | ASSERT(addr >= VDEV_LABEL_START_SIZE && |
82710e99 | 6321 | addr + asize <= vd->vdev_psize - |
a117a6d6 GW |
6322 | VDEV_LABEL_END_SIZE); |
6323 | ||
34dc7c2f | 6324 | /* |
b128c09f BB |
6325 | * l2arc read. The SCL_L2ARC lock will be |
6326 | * released by l2arc_read_done(). | |
3a17a7a9 SK |
6327 | * Issue a null zio if the underlying buffer |
6328 | * was squashed to zero size by compression. | |
34dc7c2f | 6329 | */ |
b5256303 | 6330 | ASSERT3U(arc_hdr_get_compress(hdr), !=, |
d3c2ae1c GW |
6331 | ZIO_COMPRESS_EMPTY); |
6332 | rzio = zio_read_phys(pio, vd, addr, | |
82710e99 | 6333 | asize, abd, |
d3c2ae1c GW |
6334 | ZIO_CHECKSUM_OFF, |
6335 | l2arc_read_done, cb, priority, | |
6336 | zio_flags | ZIO_FLAG_DONT_CACHE | | |
6337 | ZIO_FLAG_CANFAIL | | |
6338 | ZIO_FLAG_DONT_PROPAGATE | | |
6339 | ZIO_FLAG_DONT_RETRY, B_FALSE); | |
a8b2e306 TC |
6340 | acb->acb_zio_head = rzio; |
6341 | ||
6342 | if (hash_lock != NULL) | |
6343 | mutex_exit(hash_lock); | |
d3c2ae1c | 6344 | |
34dc7c2f BB |
6345 | DTRACE_PROBE2(l2arc__read, vdev_t *, vd, |
6346 | zio_t *, rzio); | |
b5256303 TC |
6347 | ARCSTAT_INCR(arcstat_l2_read_bytes, |
6348 | HDR_GET_PSIZE(hdr)); | |
34dc7c2f | 6349 | |
2a432414 | 6350 | if (*arc_flags & ARC_FLAG_NOWAIT) { |
b128c09f | 6351 | zio_nowait(rzio); |
1421c891 | 6352 | goto out; |
b128c09f | 6353 | } |
34dc7c2f | 6354 | |
2a432414 | 6355 | ASSERT(*arc_flags & ARC_FLAG_WAIT); |
b128c09f | 6356 | if (zio_wait(rzio) == 0) |
1421c891 | 6357 | goto out; |
b128c09f BB |
6358 | |
6359 | /* l2arc read error; goto zio_read() */ | |
a8b2e306 TC |
6360 | if (hash_lock != NULL) |
6361 | mutex_enter(hash_lock); | |
34dc7c2f BB |
6362 | } else { |
6363 | DTRACE_PROBE1(l2arc__miss, | |
6364 | arc_buf_hdr_t *, hdr); | |
6365 | ARCSTAT_BUMP(arcstat_l2_misses); | |
6366 | if (HDR_L2_WRITING(hdr)) | |
6367 | ARCSTAT_BUMP(arcstat_l2_rw_clash); | |
b128c09f | 6368 | spa_config_exit(spa, SCL_L2ARC, vd); |
34dc7c2f | 6369 | } |
d164b209 BB |
6370 | } else { |
6371 | if (vd != NULL) | |
6372 | spa_config_exit(spa, SCL_L2ARC, vd); | |
666aa69f | 6373 | |
0902c457 | 6374 | /* |
666aa69f AM |
6375 | * Only a spa with l2 should contribute to l2 |
6376 | * miss stats. (Including the case of having a | |
6377 | * faulted cache device - that's also a miss.) | |
0902c457 | 6378 | */ |
666aa69f AM |
6379 | if (spa_has_l2) { |
6380 | /* | |
6381 | * Skip ARC stat bump for block pointers with | |
6382 | * embedded data. The data are read from the | |
6383 | * blkptr itself via | |
6384 | * decode_embedded_bp_compressed(). | |
6385 | */ | |
6386 | if (!embedded_bp) { | |
6387 | DTRACE_PROBE1(l2arc__miss, | |
6388 | arc_buf_hdr_t *, hdr); | |
6389 | ARCSTAT_BUMP(arcstat_l2_misses); | |
6390 | } | |
d164b209 | 6391 | } |
34dc7c2f | 6392 | } |
34dc7c2f | 6393 | |
b5256303 | 6394 | rzio = zio_read(pio, spa, bp, hdr_abd, size, |
d3c2ae1c | 6395 | arc_read_done, hdr, priority, zio_flags, zb); |
a8b2e306 TC |
6396 | acb->acb_zio_head = rzio; |
6397 | ||
6398 | if (hash_lock != NULL) | |
6399 | mutex_exit(hash_lock); | |
34dc7c2f | 6400 | |
2a432414 | 6401 | if (*arc_flags & ARC_FLAG_WAIT) { |
1421c891 PS |
6402 | rc = zio_wait(rzio); |
6403 | goto out; | |
6404 | } | |
34dc7c2f | 6405 | |
2a432414 | 6406 | ASSERT(*arc_flags & ARC_FLAG_NOWAIT); |
34dc7c2f BB |
6407 | zio_nowait(rzio); |
6408 | } | |
1421c891 PS |
6409 | |
6410 | out: | |
157ef7f6 | 6411 | /* embedded bps don't actually go to disk */ |
0902c457 | 6412 | if (!embedded_bp) |
157ef7f6 | 6413 | spa_read_history_add(spa, zb, *arc_flags); |
1e9231ad | 6414 | spl_fstrans_unmark(cookie); |
1421c891 | 6415 | return (rc); |
34dc7c2f BB |
6416 | } |
6417 | ||
ab26409d BB |
6418 | arc_prune_t * |
6419 | arc_add_prune_callback(arc_prune_func_t *func, void *private) | |
6420 | { | |
6421 | arc_prune_t *p; | |
6422 | ||
d1d7e268 | 6423 | p = kmem_alloc(sizeof (*p), KM_SLEEP); |
ab26409d BB |
6424 | p->p_pfunc = func; |
6425 | p->p_private = private; | |
6426 | list_link_init(&p->p_node); | |
424fd7c3 | 6427 | zfs_refcount_create(&p->p_refcnt); |
ab26409d BB |
6428 | |
6429 | mutex_enter(&arc_prune_mtx); | |
c13060e4 | 6430 | zfs_refcount_add(&p->p_refcnt, &arc_prune_list); |
ab26409d BB |
6431 | list_insert_head(&arc_prune_list, p); |
6432 | mutex_exit(&arc_prune_mtx); | |
6433 | ||
6434 | return (p); | |
6435 | } | |
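/*
 * Minimal registration sketch (not part of the original source): how a
 * filesystem layer might hook into ARC metadata pruning. The callback
 * signature (int64_t count, void *private) follows arc_prune_func_t;
 * example_prune() and its argument are hypothetical.
 */
static void
example_prune(int64_t nr_to_scan, void *private)
{
	/* Release up to nr_to_scan cached objects pinned by 'private'. */
}

static arc_prune_t *example_prune_handle;

static void
example_prune_setup(void *ctx)
{
	example_prune_handle = arc_add_prune_callback(example_prune, ctx);
}

static void
example_prune_teardown(void)
{
	/* Blocks until any in-flight invocation of the callback finishes. */
	arc_remove_prune_callback(example_prune_handle);
}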
6436 | ||
6437 | void | |
6438 | arc_remove_prune_callback(arc_prune_t *p) | |
6439 | { | |
4442f60d | 6440 | boolean_t wait = B_FALSE; |
ab26409d BB |
6441 | mutex_enter(&arc_prune_mtx); |
6442 | list_remove(&arc_prune_list, p); | |
424fd7c3 | 6443 | if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0) |
4442f60d | 6444 | wait = B_TRUE; |
ab26409d | 6445 | mutex_exit(&arc_prune_mtx); |
4442f60d CC |
6446 | |
6447 | /* wait for arc_prune_task to finish */ | |
6448 | if (wait) | |
6449 | taskq_wait_outstanding(arc_prune_taskq, 0); | |
424fd7c3 TS |
6450 | ASSERT0(zfs_refcount_count(&p->p_refcnt)); |
6451 | zfs_refcount_destroy(&p->p_refcnt); | |
4442f60d | 6452 | kmem_free(p, sizeof (*p)); |
ab26409d BB |
6453 | } |
6454 | ||
df4474f9 MA |
6455 | /* |
6456 | * Notify the arc that a block was freed, and thus will never be used again. | |
6457 | */ | |
6458 | void | |
6459 | arc_freed(spa_t *spa, const blkptr_t *bp) | |
6460 | { | |
6461 | arc_buf_hdr_t *hdr; | |
6462 | kmutex_t *hash_lock; | |
6463 | uint64_t guid = spa_load_guid(spa); | |
6464 | ||
9b67f605 MA |
6465 | ASSERT(!BP_IS_EMBEDDED(bp)); |
6466 | ||
6467 | hdr = buf_hash_find(guid, bp, &hash_lock); | |
df4474f9 MA |
6468 | if (hdr == NULL) |
6469 | return; | |
df4474f9 | 6470 | |
d3c2ae1c GW |
6471 | /* |
6472 | * We might be trying to free a block that is still doing I/O | |
6473 | * (i.e. prefetch) or has a reference (i.e. a dedup-ed, | |
6474 | * dmu_sync-ed block). If this block is being prefetched, then it | |
6475 | * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr | |
6476 | * until the I/O completes. A block may also have a reference if it is | |
6477 | * part of a dedup-ed, dmu_sync-ed write. The dmu_sync() function would | |
6478 | * have written the new block to its final resting place on disk but | |
6479 | * without the dedup flag set. This would have left the hdr in the MRU | |
6480 | * state and discoverable. When the txg finally syncs it detects that | |
6481 | * the block was overridden in open context and issues an override I/O. | |
6482 | * Since this is a dedup block, the override I/O will determine if the | |
6483 | * block is already in the DDT. If so, then it will replace the io_bp | |
6484 | * with the bp from the DDT and allow the I/O to finish. When the I/O | |
6485 | * reaches the done callback, dbuf_write_override_done, it will | |
6486 | * check to see if the io_bp and io_bp_override are identical. | |
6487 | * If they are not, then it indicates that the bp was replaced with | |
6488 | * the bp in the DDT and the override bp is freed. This allows | |
6489 | * us to arrive here with a reference on a block that is being | |
6490 | * freed. So if we have an I/O in progress, or a reference to | |
6491 | * this hdr, then we don't destroy the hdr. | |
6492 | */ | |
6493 | if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) && | |
424fd7c3 | 6494 | zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) { |
d3c2ae1c GW |
6495 | arc_change_state(arc_anon, hdr, hash_lock); |
6496 | arc_hdr_destroy(hdr); | |
df4474f9 | 6497 | mutex_exit(hash_lock); |
bd089c54 | 6498 | } else { |
d3c2ae1c | 6499 | mutex_exit(hash_lock); |
34dc7c2f | 6500 | } |
34dc7c2f | 6501 | |
34dc7c2f BB |
6502 | } |
6503 | ||
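/*
 * Editor's condensed timeline of the override scenario described in
 * arc_freed() above (illustrative summary, not a real trace):
 *
 *   open context:  dmu_sync() writes the block; the hdr lands in the MRU
 *                  state without the dedup flag and stays discoverable.
 *   txg sync:      the override I/O finds the block already in the DDT,
 *                  replaces io_bp with the DDT bp and frees the override bp.
 *   meanwhile:     arc_freed() can run while a prefetch I/O or an external
 *                  reference is still outstanding, so the hdr is kept.
 */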
6504 | /* | |
e49f1e20 WA |
6505 | * Release this buffer from the cache, making it an anonymous buffer. This |
6506 | * must be done after a read and prior to modifying the buffer contents. | |
34dc7c2f | 6507 | * If the buffer has more than one reference, we must make |
b128c09f | 6508 | * a new hdr for the buffer. |
34dc7c2f BB |
6509 | */ |
6510 | void | |
6511 | arc_release(arc_buf_t *buf, void *tag) | |
6512 | { | |
b9541d6b | 6513 | arc_buf_hdr_t *hdr = buf->b_hdr; |
34dc7c2f | 6514 | |
428870ff | 6515 | /* |
ca0bf58d | 6516 | * It would be nice to assert that if it's DMU metadata (level > |
428870ff BB |
6517 | * 0 || it's the dnode file), then it must be syncing context. |
6518 | * But we don't know that information at this level. | |
6519 | */ | |
6520 | ||
6521 | mutex_enter(&buf->b_evict_lock); | |
b128c09f | 6522 | |
ca0bf58d PS |
6523 | ASSERT(HDR_HAS_L1HDR(hdr)); |
6524 | ||
b9541d6b CW |
6525 | /* |
6526 | * We don't grab the hash lock prior to this check, because if | |
6527 | * the buffer's header is in the arc_anon state, it won't be | |
6528 | * linked into the hash table. | |
6529 | */ | |
6530 | if (hdr->b_l1hdr.b_state == arc_anon) { | |
6531 | mutex_exit(&buf->b_evict_lock); | |
6532 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); | |
6533 | ASSERT(!HDR_IN_HASH_TABLE(hdr)); | |
6534 | ASSERT(!HDR_HAS_L2HDR(hdr)); | |
d3c2ae1c | 6535 | ASSERT(HDR_EMPTY(hdr)); |
34dc7c2f | 6536 | |
d3c2ae1c | 6537 | ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); |
424fd7c3 | 6538 | ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); |
b9541d6b CW |
6539 | ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node)); |
6540 | ||
b9541d6b | 6541 | hdr->b_l1hdr.b_arc_access = 0; |
d3c2ae1c GW |
6542 | |
6543 | /* | |
6544 | * If the buf is being overridden then it may already | |
6545 | * have a hdr that is not empty. | |
6546 | */ | |
6547 | buf_discard_identity(hdr); | |
b9541d6b CW |
6548 | arc_buf_thaw(buf); |
6549 | ||
6550 | return; | |
34dc7c2f BB |
6551 | } |
6552 | ||
1c27024e | 6553 | kmutex_t *hash_lock = HDR_LOCK(hdr); |
b9541d6b CW |
6554 | mutex_enter(hash_lock); |
6555 | ||
6556 | /* | |
6557 | * This assignment is only valid as long as the hash_lock is | |
6558 | * held; we must be careful not to reference state or the |
6559 | * b_state field after dropping the lock. | |
6560 | */ | |
1c27024e | 6561 | arc_state_t *state = hdr->b_l1hdr.b_state; |
b9541d6b CW |
6562 | ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); |
6563 | ASSERT3P(state, !=, arc_anon); | |
6564 | ||
6565 | /* this buffer is not on any list */ | |
424fd7c3 | 6566 | ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); |
b9541d6b CW |
6567 | |
6568 | if (HDR_HAS_L2HDR(hdr)) { | |
b9541d6b | 6569 | mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); |
ca0bf58d PS |
6570 | |
6571 | /* | |
d962d5da PS |
6572 | * We have to recheck this conditional again now that |
6573 | * we're holding the l2ad_mtx to prevent a race with | |
6574 | * another thread which might be concurrently calling | |
6575 | * l2arc_evict(). In that case, l2arc_evict() might have | |
6576 | * destroyed the header's L2 portion as we were waiting | |
6577 | * to acquire the l2ad_mtx. | |
ca0bf58d | 6578 | */ |
d962d5da PS |
6579 | if (HDR_HAS_L2HDR(hdr)) |
6580 | arc_hdr_l2hdr_destroy(hdr); | |
ca0bf58d | 6581 | |
b9541d6b | 6582 | mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); |
b128c09f BB |
6583 | } |
6584 | ||
34dc7c2f BB |
6585 | /* |
6586 | * Do we have more than one buf? | |
6587 | */ | |
d3c2ae1c | 6588 | if (hdr->b_l1hdr.b_bufcnt > 1) { |
34dc7c2f | 6589 | arc_buf_hdr_t *nhdr; |
d164b209 | 6590 | uint64_t spa = hdr->b_spa; |
d3c2ae1c GW |
6591 | uint64_t psize = HDR_GET_PSIZE(hdr); |
6592 | uint64_t lsize = HDR_GET_LSIZE(hdr); | |
b5256303 TC |
6593 | boolean_t protected = HDR_PROTECTED(hdr); |
6594 | enum zio_compress compress = arc_hdr_get_compress(hdr); | |
b9541d6b | 6595 | arc_buf_contents_t type = arc_buf_type(hdr); |
d3c2ae1c | 6596 | VERIFY3U(hdr->b_type, ==, type); |
34dc7c2f | 6597 | |
b9541d6b | 6598 | ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); |
d3c2ae1c GW |
6599 | (void) remove_reference(hdr, hash_lock, tag); |
6600 | ||
524b4217 | 6601 | if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) { |
d3c2ae1c | 6602 | ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); |
524b4217 DK |
6603 | ASSERT(ARC_BUF_LAST(buf)); |
6604 | } | |
d3c2ae1c | 6605 | |
34dc7c2f | 6606 | /* |
428870ff | 6607 | * Pull the data off of this hdr and attach it to |
d3c2ae1c GW |
6608 | * a new anonymous hdr. Also find the last buffer |
6609 | * in the hdr's buffer list. | |
34dc7c2f | 6610 | */ |
a7004725 | 6611 | arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); |
d3c2ae1c | 6612 | ASSERT3P(lastbuf, !=, NULL); |
34dc7c2f | 6613 | |
d3c2ae1c GW |
6614 | /* |
6615 | * If the current arc_buf_t and the hdr are sharing their data | |
524b4217 | 6616 | * buffer, then we must stop sharing that block. |
d3c2ae1c GW |
6617 | */ |
6618 | if (arc_buf_is_shared(buf)) { | |
6619 | ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); | |
d3c2ae1c GW |
6620 | VERIFY(!arc_buf_is_shared(lastbuf)); |
6621 | ||
6622 | /* | |
6623 | * First, sever the block sharing relationship between | |
a7004725 | 6624 | * buf and the arc_buf_hdr_t. |
d3c2ae1c GW |
6625 | */ |
6626 | arc_unshare_buf(hdr, buf); | |
2aa34383 DK |
6627 | |
6628 | /* | |
a6255b7f | 6629 | * Now we need to recreate the hdr's b_pabd. Since we |
524b4217 | 6630 | * have lastbuf handy, we try to share with it, but if |
a6255b7f | 6631 | * we can't then we allocate a new b_pabd and copy the |
524b4217 | 6632 | * data from buf into it. |
2aa34383 | 6633 | */ |
524b4217 DK |
6634 | if (arc_can_share(hdr, lastbuf)) { |
6635 | arc_share_buf(hdr, lastbuf); | |
6636 | } else { | |
e111c802 | 6637 | arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); |
a6255b7f DQ |
6638 | abd_copy_from_buf(hdr->b_l1hdr.b_pabd, |
6639 | buf->b_data, psize); | |
2aa34383 | 6640 | } |
d3c2ae1c GW |
6641 | VERIFY3P(lastbuf->b_data, !=, NULL); |
6642 | } else if (HDR_SHARED_DATA(hdr)) { | |
2aa34383 DK |
6643 | /* |
6644 | * Uncompressed shared buffers are always at the end | |
6645 | * of the list. Compressed buffers don't have the | |
6646 | * same requirements. This makes it hard to | |
6647 | * simply assert that the lastbuf is shared, so |
6648 | * we rely on the hdr's compression flags to determine | |
6649 | * if we have a compressed, shared buffer. | |
6650 | */ | |
6651 | ASSERT(arc_buf_is_shared(lastbuf) || | |
b5256303 | 6652 | arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); |
2aa34383 | 6653 | ASSERT(!ARC_BUF_SHARED(buf)); |
d3c2ae1c | 6654 | } |
b5256303 TC |
6655 | |
6656 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); | |
b9541d6b | 6657 | ASSERT3P(state, !=, arc_l2c_only); |
36da08ef | 6658 | |
424fd7c3 | 6659 | (void) zfs_refcount_remove_many(&state->arcs_size, |
2aa34383 | 6660 | arc_buf_size(buf), buf); |
36da08ef | 6661 | |
424fd7c3 | 6662 | if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { |
b9541d6b | 6663 | ASSERT3P(state, !=, arc_l2c_only); |
424fd7c3 TS |
6664 | (void) zfs_refcount_remove_many( |
6665 | &state->arcs_esize[type], | |
2aa34383 | 6666 | arc_buf_size(buf), buf); |
34dc7c2f | 6667 | } |
1eb5bfa3 | 6668 | |
d3c2ae1c | 6669 | hdr->b_l1hdr.b_bufcnt -= 1; |
b5256303 TC |
6670 | if (ARC_BUF_ENCRYPTED(buf)) |
6671 | hdr->b_crypt_hdr.b_ebufcnt -= 1; | |
6672 | ||
34dc7c2f | 6673 | arc_cksum_verify(buf); |
498877ba | 6674 | arc_buf_unwatch(buf); |
34dc7c2f | 6675 | |
f486f584 TC |
6676 | /* if this is the last uncompressed buf, free the checksum */ | |
6677 | if (!arc_hdr_has_uncompressed_buf(hdr)) | |
6678 | arc_cksum_free(hdr); | |
6679 | ||
34dc7c2f BB |
6680 | mutex_exit(hash_lock); |
6681 | ||
d3c2ae1c | 6682 | /* |
a6255b7f | 6683 | * Allocate a new hdr. The new hdr will contain a b_pabd |
d3c2ae1c GW |
6684 | * buffer which will be freed in arc_write(). |
6685 | */ | |
b5256303 | 6686 | nhdr = arc_hdr_alloc(spa, psize, lsize, protected, |
10b3c7f5 | 6687 | compress, hdr->b_complevel, type, HDR_HAS_RABD(hdr)); |
d3c2ae1c GW |
6688 | ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); |
6689 | ASSERT0(nhdr->b_l1hdr.b_bufcnt); | |
424fd7c3 | 6690 | ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); |
d3c2ae1c GW |
6691 | VERIFY3U(nhdr->b_type, ==, type); |
6692 | ASSERT(!HDR_SHARED_DATA(nhdr)); | |
b9541d6b | 6693 | |
d3c2ae1c GW |
6694 | nhdr->b_l1hdr.b_buf = buf; |
6695 | nhdr->b_l1hdr.b_bufcnt = 1; | |
b5256303 TC |
6696 | if (ARC_BUF_ENCRYPTED(buf)) |
6697 | nhdr->b_crypt_hdr.b_ebufcnt = 1; | |
b9541d6b CW |
6698 | nhdr->b_l1hdr.b_mru_hits = 0; |
6699 | nhdr->b_l1hdr.b_mru_ghost_hits = 0; | |
6700 | nhdr->b_l1hdr.b_mfu_hits = 0; | |
6701 | nhdr->b_l1hdr.b_mfu_ghost_hits = 0; | |
6702 | nhdr->b_l1hdr.b_l2_hits = 0; | |
c13060e4 | 6703 | (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); |
34dc7c2f | 6704 | buf->b_hdr = nhdr; |
d3c2ae1c | 6705 | |
428870ff | 6706 | mutex_exit(&buf->b_evict_lock); |
424fd7c3 | 6707 | (void) zfs_refcount_add_many(&arc_anon->arcs_size, |
5e8ff256 | 6708 | arc_buf_size(buf), buf); |
34dc7c2f | 6709 | } else { |
428870ff | 6710 | mutex_exit(&buf->b_evict_lock); |
424fd7c3 | 6711 | ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); |
ca0bf58d PS |
6712 | /* protected by hash lock, or hdr is on arc_anon */ |
6713 | ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); | |
34dc7c2f | 6714 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); |
b9541d6b CW |
6715 | hdr->b_l1hdr.b_mru_hits = 0; |
6716 | hdr->b_l1hdr.b_mru_ghost_hits = 0; | |
6717 | hdr->b_l1hdr.b_mfu_hits = 0; | |
6718 | hdr->b_l1hdr.b_mfu_ghost_hits = 0; | |
6719 | hdr->b_l1hdr.b_l2_hits = 0; | |
6720 | arc_change_state(arc_anon, hdr, hash_lock); | |
6721 | hdr->b_l1hdr.b_arc_access = 0; | |
34dc7c2f | 6722 | |
b5256303 | 6723 | mutex_exit(hash_lock); |
428870ff | 6724 | buf_discard_identity(hdr); |
34dc7c2f BB |
6725 | arc_buf_thaw(buf); |
6726 | } | |
34dc7c2f BB |
6727 | } |
6728 | ||
6729 | int | |
6730 | arc_released(arc_buf_t *buf) | |
6731 | { | |
b128c09f BB |
6732 | int released; |
6733 | ||
428870ff | 6734 | mutex_enter(&buf->b_evict_lock); |
b9541d6b CW |
6735 | released = (buf->b_data != NULL && |
6736 | buf->b_hdr->b_l1hdr.b_state == arc_anon); | |
428870ff | 6737 | mutex_exit(&buf->b_evict_lock); |
b128c09f | 6738 | return (released); |
34dc7c2f BB |
6739 | } |
6740 | ||
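/*
 * Editor's illustration of the arc_release()/arc_released() contract
 * (hypothetical caller, not upstream code): a writer that obtained 'buf'
 * from a read must anonymize it before dirtying the contents, otherwise it
 * would scribble over data that other references may still be reading.
 */
#if 0
static void
example_dirty_buf(arc_buf_t *buf, void *tag)
{
	arc_release(buf, tag);			/* detach from the cache */
	ASSERT(arc_released(buf));		/* now anonymous */
	bzero(buf->b_data, arc_buf_size(buf));	/* safe to modify */
}
#endif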
34dc7c2f BB |
6741 | #ifdef ZFS_DEBUG |
6742 | int | |
6743 | arc_referenced(arc_buf_t *buf) | |
6744 | { | |
b128c09f BB |
6745 | int referenced; |
6746 | ||
428870ff | 6747 | mutex_enter(&buf->b_evict_lock); |
424fd7c3 | 6748 | referenced = (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); |
428870ff | 6749 | mutex_exit(&buf->b_evict_lock); |
b128c09f | 6750 | return (referenced); |
34dc7c2f BB |
6751 | } |
6752 | #endif | |
6753 | ||
6754 | static void | |
6755 | arc_write_ready(zio_t *zio) | |
6756 | { | |
6757 | arc_write_callback_t *callback = zio->io_private; | |
6758 | arc_buf_t *buf = callback->awcb_buf; | |
6759 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
b5256303 TC |
6760 | blkptr_t *bp = zio->io_bp; |
6761 | uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp); | |
a6255b7f | 6762 | fstrans_cookie_t cookie = spl_fstrans_mark(); |
34dc7c2f | 6763 | |
b9541d6b | 6764 | ASSERT(HDR_HAS_L1HDR(hdr)); |
424fd7c3 | 6765 | ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); |
d3c2ae1c | 6766 | ASSERT(hdr->b_l1hdr.b_bufcnt > 0); |
b128c09f | 6767 | |
34dc7c2f | 6768 | /* |
d3c2ae1c GW |
6769 | * If we're reexecuting this zio because the pool suspended, then |
6770 | * clean up any state that was previously set the first time the |
2aa34383 | 6771 | * callback was invoked. |
34dc7c2f | 6772 | */ |
d3c2ae1c GW |
6773 | if (zio->io_flags & ZIO_FLAG_REEXECUTED) { |
6774 | arc_cksum_free(hdr); | |
6775 | arc_buf_unwatch(buf); | |
a6255b7f | 6776 | if (hdr->b_l1hdr.b_pabd != NULL) { |
d3c2ae1c | 6777 | if (arc_buf_is_shared(buf)) { |
d3c2ae1c GW |
6778 | arc_unshare_buf(hdr, buf); |
6779 | } else { | |
b5256303 | 6780 | arc_hdr_free_abd(hdr, B_FALSE); |
d3c2ae1c | 6781 | } |
34dc7c2f | 6782 | } |
b5256303 TC |
6783 | |
6784 | if (HDR_HAS_RABD(hdr)) | |
6785 | arc_hdr_free_abd(hdr, B_TRUE); | |
34dc7c2f | 6786 | } |
a6255b7f | 6787 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
b5256303 | 6788 | ASSERT(!HDR_HAS_RABD(hdr)); |
d3c2ae1c GW |
6789 | ASSERT(!HDR_SHARED_DATA(hdr)); |
6790 | ASSERT(!arc_buf_is_shared(buf)); | |
6791 | ||
6792 | callback->awcb_ready(zio, buf, callback->awcb_private); | |
6793 | ||
6794 | if (HDR_IO_IN_PROGRESS(hdr)) | |
6795 | ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); | |
6796 | ||
d3c2ae1c GW |
6797 | arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); |
6798 | ||
b5256303 TC |
6799 | if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr)) |
6800 | hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp)); | |
6801 | ||
6802 | if (BP_IS_PROTECTED(bp)) { | |
6803 | /* ZIL blocks are written through zio_rewrite */ | |
6804 | ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); | |
6805 | ASSERT(HDR_PROTECTED(hdr)); | |
6806 | ||
ae76f45c TC |
6807 | if (BP_SHOULD_BYTESWAP(bp)) { |
6808 | if (BP_GET_LEVEL(bp) > 0) { | |
6809 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; | |
6810 | } else { | |
6811 | hdr->b_l1hdr.b_byteswap = | |
6812 | DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); | |
6813 | } | |
6814 | } else { | |
6815 | hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; | |
6816 | } | |
6817 | ||
b5256303 TC |
6818 | hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); |
6819 | hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; | |
6820 | zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, | |
6821 | hdr->b_crypt_hdr.b_iv); | |
6822 | zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); | |
6823 | } | |
6824 | ||
6825 | /* | |
6826 | * If this block was written for raw encryption but the zio layer | |
6827 | * ended up only authenticating it, adjust the buffer flags now. | |
6828 | */ | |
6829 | if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { | |
6830 | arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); | |
6831 | buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; | |
6832 | if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) | |
6833 | buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; | |
b1d21733 TC |
6834 | } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { |
6835 | buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; | |
6836 | buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; | |
b5256303 TC |
6837 | } |
6838 | ||
6839 | /* this must be done after the buffer flags are adjusted */ | |
6840 | arc_cksum_compute(buf); | |
6841 | ||
1c27024e | 6842 | enum zio_compress compress; |
b5256303 | 6843 | if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { |
d3c2ae1c GW |
6844 | compress = ZIO_COMPRESS_OFF; |
6845 | } else { | |
b5256303 TC |
6846 | ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); |
6847 | compress = BP_GET_COMPRESS(bp); | |
d3c2ae1c GW |
6848 | } |
6849 | HDR_SET_PSIZE(hdr, psize); | |
6850 | arc_hdr_set_compress(hdr, compress); | |
10b3c7f5 | 6851 | hdr->b_complevel = zio->io_prop.zp_complevel; |
d3c2ae1c | 6852 | |
4807c0ba TC |
6853 | if (zio->io_error != 0 || psize == 0) |
6854 | goto out; | |
6855 | ||
d3c2ae1c | 6856 | /* |
b5256303 TC |
6857 | * Fill the hdr with data. If the buffer is encrypted, we have no choice | |
6858 | * but to copy the data into b_rabd. If the hdr is compressed, the data | |
6859 | * we want is available from the zio; otherwise we can take it from | |
6860 | * the buf. | |
a6255b7f DQ |
6861 | * |
6862 | * We might be able to share the buf's data with the hdr here. However, | |
6863 | * doing so would cause the ARC to be full of linear ABDs if we write a | |
6864 | * lot of shareable data. As a compromise, we check whether scattered | |
6865 | * ABDs are allowed, and assume that if they are then the user wants | |
6866 | * the ARC to be primarily filled with them regardless of the data being | |
6867 | * written. Therefore, if they're allowed then we allocate one and copy | |
6868 | * the data into it; otherwise, we share the data directly if we can. | |
d3c2ae1c | 6869 | */ |
b5256303 | 6870 | if (ARC_BUF_ENCRYPTED(buf)) { |
4807c0ba | 6871 | ASSERT3U(psize, >, 0); |
b5256303 | 6872 | ASSERT(ARC_BUF_COMPRESSED(buf)); |
e111c802 | 6873 | arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); |
b5256303 TC |
6874 | abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); |
6875 | } else if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) { | |
a6255b7f DQ |
6876 | /* |
6877 | * Ideally, we would always copy the io_abd into b_pabd, but the | |
6878 | * user may have disabled compressed ARC, thus we must check the | |
6879 | * hdr's compression setting rather than the io_bp's. | |
6880 | */ | |
b5256303 | 6881 | if (BP_IS_ENCRYPTED(bp)) { |
a6255b7f | 6882 | ASSERT3U(psize, >, 0); |
e111c802 MM |
6883 | arc_hdr_alloc_abd(hdr, |
6884 | ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); | |
b5256303 TC |
6885 | abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); |
6886 | } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && | |
6887 | !ARC_BUF_COMPRESSED(buf)) { | |
6888 | ASSERT3U(psize, >, 0); | |
e111c802 | 6889 | arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); |
a6255b7f DQ |
6890 | abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); |
6891 | } else { | |
6892 | ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); | |
e111c802 | 6893 | arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); |
a6255b7f DQ |
6894 | abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, |
6895 | arc_buf_size(buf)); | |
6896 | } | |
d3c2ae1c | 6897 | } else { |
a6255b7f | 6898 | ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); |
2aa34383 | 6899 | ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); |
d3c2ae1c | 6900 | ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); |
d3c2ae1c | 6901 | |
d3c2ae1c | 6902 | arc_share_buf(hdr, buf); |
d3c2ae1c | 6903 | } |
a6255b7f | 6904 | |
4807c0ba | 6905 | out: |
b5256303 | 6906 | arc_hdr_verify(hdr, bp); |
a6255b7f | 6907 | spl_fstrans_unmark(cookie); |
34dc7c2f BB |
6908 | } |
6909 | ||
bc77ba73 PD |
6910 | static void |
6911 | arc_write_children_ready(zio_t *zio) | |
6912 | { | |
6913 | arc_write_callback_t *callback = zio->io_private; | |
6914 | arc_buf_t *buf = callback->awcb_buf; | |
6915 | ||
6916 | callback->awcb_children_ready(zio, buf, callback->awcb_private); | |
6917 | } | |
6918 | ||
e8b96c60 MA |
6919 | /* |
6920 | * The SPA calls this callback for each physical write that happens on behalf | |
6921 | * of a logical write. See the comment in dbuf_write_physdone() for details. | |
6922 | */ | |
6923 | static void | |
6924 | arc_write_physdone(zio_t *zio) | |
6925 | { | |
6926 | arc_write_callback_t *cb = zio->io_private; | |
6927 | if (cb->awcb_physdone != NULL) | |
6928 | cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private); | |
6929 | } | |
6930 | ||
34dc7c2f BB |
6931 | static void |
6932 | arc_write_done(zio_t *zio) | |
6933 | { | |
6934 | arc_write_callback_t *callback = zio->io_private; | |
6935 | arc_buf_t *buf = callback->awcb_buf; | |
6936 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
6937 | ||
d3c2ae1c | 6938 | ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); |
428870ff BB |
6939 | |
6940 | if (zio->io_error == 0) { | |
d3c2ae1c GW |
6941 | arc_hdr_verify(hdr, zio->io_bp); |
6942 | ||
9b67f605 | 6943 | if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { |
b0bc7a84 MG |
6944 | buf_discard_identity(hdr); |
6945 | } else { | |
6946 | hdr->b_dva = *BP_IDENTITY(zio->io_bp); | |
6947 | hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); | |
b0bc7a84 | 6948 | } |
428870ff | 6949 | } else { |
d3c2ae1c | 6950 | ASSERT(HDR_EMPTY(hdr)); |
428870ff | 6951 | } |
34dc7c2f | 6952 | |
34dc7c2f | 6953 | /* |
9b67f605 MA |
6954 | * If the block to be written was all-zero or compressed enough to be |
6955 | * embedded in the BP, no write was performed so there will be no | |
6956 | * dva/birth/checksum. The buffer must therefore remain anonymous | |
6957 | * (and uncached). | |
34dc7c2f | 6958 | */ |
d3c2ae1c | 6959 | if (!HDR_EMPTY(hdr)) { |
34dc7c2f BB |
6960 | arc_buf_hdr_t *exists; |
6961 | kmutex_t *hash_lock; | |
6962 | ||
524b4217 | 6963 | ASSERT3U(zio->io_error, ==, 0); |
428870ff | 6964 | |
34dc7c2f BB |
6965 | arc_cksum_verify(buf); |
6966 | ||
6967 | exists = buf_hash_insert(hdr, &hash_lock); | |
b9541d6b | 6968 | if (exists != NULL) { |
34dc7c2f BB |
6969 | /* |
6970 | * This can only happen if we overwrite for | |
6971 | * sync-to-convergence, because we remove | |
6972 | * buffers from the hash table when we arc_free(). | |
6973 | */ | |
428870ff BB |
6974 | if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { |
6975 | if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) | |
6976 | panic("bad overwrite, hdr=%p exists=%p", | |
6977 | (void *)hdr, (void *)exists); | |
424fd7c3 | 6978 | ASSERT(zfs_refcount_is_zero( |
b9541d6b | 6979 | &exists->b_l1hdr.b_refcnt)); |
428870ff | 6980 | arc_change_state(arc_anon, exists, hash_lock); |
428870ff | 6981 | arc_hdr_destroy(exists); |
ca6c7a94 | 6982 | mutex_exit(hash_lock); |
428870ff BB |
6983 | exists = buf_hash_insert(hdr, &hash_lock); |
6984 | ASSERT3P(exists, ==, NULL); | |
03c6040b GW |
6985 | } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { |
6986 | /* nopwrite */ | |
6987 | ASSERT(zio->io_prop.zp_nopwrite); | |
6988 | if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) | |
6989 | panic("bad nopwrite, hdr=%p exists=%p", | |
6990 | (void *)hdr, (void *)exists); | |
428870ff BB |
6991 | } else { |
6992 | /* Dedup */ | |
d3c2ae1c | 6993 | ASSERT(hdr->b_l1hdr.b_bufcnt == 1); |
b9541d6b | 6994 | ASSERT(hdr->b_l1hdr.b_state == arc_anon); |
428870ff BB |
6995 | ASSERT(BP_GET_DEDUP(zio->io_bp)); |
6996 | ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); | |
6997 | } | |
34dc7c2f | 6998 | } |
d3c2ae1c | 6999 | arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); |
b128c09f | 7000 | /* if it's not anon, we are doing a scrub */ |
b9541d6b | 7001 | if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) |
b128c09f | 7002 | arc_access(hdr, hash_lock); |
34dc7c2f | 7003 | mutex_exit(hash_lock); |
34dc7c2f | 7004 | } else { |
d3c2ae1c | 7005 | arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); |
34dc7c2f BB |
7006 | } |
7007 | ||
424fd7c3 | 7008 | ASSERT(!zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); |
428870ff | 7009 | callback->awcb_done(zio, buf, callback->awcb_private); |
34dc7c2f | 7010 | |
e2af2acc | 7011 | abd_free(zio->io_abd); |
34dc7c2f BB |
7012 | kmem_free(callback, sizeof (arc_write_callback_t)); |
7013 | } | |
7014 | ||
7015 | zio_t * | |
428870ff | 7016 | arc_write(zio_t *pio, spa_t *spa, uint64_t txg, |
d3c2ae1c | 7017 | blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc, |
b5256303 TC |
7018 | const zio_prop_t *zp, arc_write_done_func_t *ready, |
7019 | arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone, | |
7020 | arc_write_done_func_t *done, void *private, zio_priority_t priority, | |
5dbd68a3 | 7021 | int zio_flags, const zbookmark_phys_t *zb) |
34dc7c2f BB |
7022 | { |
7023 | arc_buf_hdr_t *hdr = buf->b_hdr; | |
7024 | arc_write_callback_t *callback; | |
b128c09f | 7025 | zio_t *zio; |
82644107 | 7026 | zio_prop_t localprop = *zp; |
34dc7c2f | 7027 | |
d3c2ae1c GW |
7028 | ASSERT3P(ready, !=, NULL); |
7029 | ASSERT3P(done, !=, NULL); | |
34dc7c2f | 7030 | ASSERT(!HDR_IO_ERROR(hdr)); |
b9541d6b | 7031 | ASSERT(!HDR_IO_IN_PROGRESS(hdr)); |
d3c2ae1c GW |
7032 | ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); |
7033 | ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); | |
b128c09f | 7034 | if (l2arc) |
d3c2ae1c | 7035 | arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); |
82644107 | 7036 | |
b5256303 TC |
7037 | if (ARC_BUF_ENCRYPTED(buf)) { |
7038 | ASSERT(ARC_BUF_COMPRESSED(buf)); | |
7039 | localprop.zp_encrypt = B_TRUE; | |
7040 | localprop.zp_compress = HDR_GET_COMPRESS(hdr); | |
10b3c7f5 | 7041 | localprop.zp_complevel = hdr->b_complevel; |
b5256303 TC |
7042 | localprop.zp_byteorder = |
7043 | (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? | |
7044 | ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; | |
7045 | bcopy(hdr->b_crypt_hdr.b_salt, localprop.zp_salt, | |
7046 | ZIO_DATA_SALT_LEN); | |
7047 | bcopy(hdr->b_crypt_hdr.b_iv, localprop.zp_iv, | |
7048 | ZIO_DATA_IV_LEN); | |
7049 | bcopy(hdr->b_crypt_hdr.b_mac, localprop.zp_mac, | |
7050 | ZIO_DATA_MAC_LEN); | |
7051 | if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) { | |
7052 | localprop.zp_nopwrite = B_FALSE; | |
7053 | localprop.zp_copies = | |
7054 | MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1); | |
7055 | } | |
2aa34383 | 7056 | zio_flags |= ZIO_FLAG_RAW; |
b5256303 TC |
7057 | } else if (ARC_BUF_COMPRESSED(buf)) { |
7058 | ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf)); | |
7059 | localprop.zp_compress = HDR_GET_COMPRESS(hdr); | |
10b3c7f5 | 7060 | localprop.zp_complevel = hdr->b_complevel; |
b5256303 | 7061 | zio_flags |= ZIO_FLAG_RAW_COMPRESS; |
2aa34383 | 7062 | } |
79c76d5b | 7063 | callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP); |
34dc7c2f | 7064 | callback->awcb_ready = ready; |
bc77ba73 | 7065 | callback->awcb_children_ready = children_ready; |
e8b96c60 | 7066 | callback->awcb_physdone = physdone; |
34dc7c2f BB |
7067 | callback->awcb_done = done; |
7068 | callback->awcb_private = private; | |
7069 | callback->awcb_buf = buf; | |
b128c09f | 7070 | |
d3c2ae1c | 7071 | /* |
a6255b7f | 7072 | * The hdr's b_pabd is now stale, free it now. A new data block |
d3c2ae1c GW |
7073 | * will be allocated when the zio pipeline calls arc_write_ready(). |
7074 | */ | |
a6255b7f | 7075 | if (hdr->b_l1hdr.b_pabd != NULL) { |
d3c2ae1c GW |
7076 | /* |
7077 | * If the buf is currently sharing the data block with | |
7078 | * the hdr then we need to break that relationship here. | |
7079 | * The hdr will remain with a NULL data pointer and the | |
7080 | * buf will take sole ownership of the block. | |
7081 | */ | |
7082 | if (arc_buf_is_shared(buf)) { | |
d3c2ae1c GW |
7083 | arc_unshare_buf(hdr, buf); |
7084 | } else { | |
b5256303 | 7085 | arc_hdr_free_abd(hdr, B_FALSE); |
d3c2ae1c GW |
7086 | } |
7087 | VERIFY3P(buf->b_data, !=, NULL); | |
d3c2ae1c | 7088 | } |
b5256303 TC |
7089 | |
7090 | if (HDR_HAS_RABD(hdr)) | |
7091 | arc_hdr_free_abd(hdr, B_TRUE); | |
7092 | ||
71a24c3c TC |
7093 | if (!(zio_flags & ZIO_FLAG_RAW)) |
7094 | arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF); | |
b5256303 | 7095 | |
d3c2ae1c | 7096 | ASSERT(!arc_buf_is_shared(buf)); |
a6255b7f | 7097 | ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); |
d3c2ae1c | 7098 | |
a6255b7f DQ |
7099 | zio = zio_write(pio, spa, txg, bp, |
7100 | abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)), | |
82644107 | 7101 | HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready, |
bc77ba73 PD |
7102 | (children_ready != NULL) ? arc_write_children_ready : NULL, |
7103 | arc_write_physdone, arc_write_done, callback, | |
e8b96c60 | 7104 | priority, zio_flags, zb); |
34dc7c2f BB |
7105 | |
7106 | return (zio); | |
7107 | } | |
7108 | ||
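/*
 * Editor's sketch of a minimal arc_write() caller (hypothetical; the real
 * consumer is the dbuf layer).  The zio_prop_t and bookmark setup are
 * assumed to be done elsewhere; only the callback wiring and the nowait
 * hand-off of the returned zio are illustrated.
 */
#if 0
static void
example_ready(zio_t *zio, arc_buf_t *buf, void *private)
{
	/* buf's identity and psize are now settled; see arc_write_ready() */
}

static void
example_done(zio_t *zio, arc_buf_t *buf, void *private)
{
	/* hdr is hashed, or left anonymous; see arc_write_done() */
}

static void
example_issue_write(zio_t *pio, spa_t *spa, uint64_t txg, blkptr_t *bp,
    arc_buf_t *buf, const zio_prop_t *zp, const zbookmark_phys_t *zb)
{
	zio_t *zio = arc_write(pio, spa, txg, bp, buf, B_TRUE /* l2arc */,
	    zp, example_ready, NULL /* children_ready */, NULL /* physdone */,
	    example_done, NULL /* private */, ZIO_PRIORITY_ASYNC_WRITE,
	    ZIO_FLAG_CANFAIL, zb);
	zio_nowait(zio);
}
#endif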
34dc7c2f BB |
7109 | void |
7110 | arc_tempreserve_clear(uint64_t reserve) | |
7111 | { | |
7112 | atomic_add_64(&arc_tempreserve, -reserve); | |
7113 | ASSERT((int64_t)arc_tempreserve >= 0); | |
7114 | } | |
7115 | ||
7116 | int | |
dae3e9ea | 7117 | arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg) |
34dc7c2f BB |
7118 | { |
7119 | int error; | |
9babb374 | 7120 | uint64_t anon_size; |
34dc7c2f | 7121 | |
1b8951b3 TC |
7122 | if (!arc_no_grow && |
7123 | reserve > arc_c/4 && | |
7124 | reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT)) | |
34dc7c2f | 7125 | arc_c = MIN(arc_c_max, reserve * 4); |
12f9a6a3 BB |
7126 | |
7127 | /* | |
7128 | * Throttle when the calculated memory footprint for the TXG | |
7129 | * exceeds the target ARC size. | |
7130 | */ | |
570827e1 BB |
7131 | if (reserve > arc_c) { |
7132 | DMU_TX_STAT_BUMP(dmu_tx_memory_reserve); | |
12f9a6a3 | 7133 | return (SET_ERROR(ERESTART)); |
570827e1 | 7134 | } |
34dc7c2f | 7135 | |
9babb374 BB |
7136 | /* |
7137 | * Don't count loaned bufs as in flight dirty data to prevent long | |
7138 | * network delays from blocking transactions that are ready to be | |
7139 | * assigned to a txg. | |
7140 | */ | |
a7004725 DK |
7141 | |
7142 | /* assert that it has not wrapped around */ | |
7143 | ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); | |
7144 | ||
424fd7c3 | 7145 | anon_size = MAX((int64_t)(zfs_refcount_count(&arc_anon->arcs_size) - |
36da08ef | 7146 | arc_loaned_bytes), 0); |
9babb374 | 7147 | |
34dc7c2f BB |
7148 | /* |
7149 | * Writes will, almost always, require additional memory allocations | |
d3cc8b15 | 7150 | * in order to compress/encrypt/etc. the data. We therefore need to |
34dc7c2f BB |
7151 | * make sure that there is sufficient available memory for this. |
7152 | */ | |
dae3e9ea | 7153 | error = arc_memory_throttle(spa, reserve, txg); |
e8b96c60 | 7154 | if (error != 0) |
34dc7c2f BB |
7155 | return (error); |
7156 | ||
7157 | /* | |
7158 | * Throttle writes when the amount of dirty data in the cache | |
7159 | * gets too large. We try to keep the cache less than half full | |
7160 | * of dirty blocks so that our sync times don't grow too large. | |
dae3e9ea DB |
7161 | * |
7162 | * In the case of one pool being built on another pool, we want | |
7163 | * to make sure we don't end up throttling the lower (backing) | |
7164 | * pool when the upper pool is the majority contributor to dirty | |
7165 | * data. To ensure we make forward progress during throttling, we |
7166 | * also check the current pool's net dirty data and only throttle | |
7167 | * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty | |
7168 | * data in the cache. | |
7169 | * | |
34dc7c2f BB |
7170 | * Note: if two requests come in concurrently, we might let them |
7171 | * both succeed, when one of them should fail. Not a huge deal. | |
7172 | */ | |
dae3e9ea DB |
7173 | uint64_t total_dirty = reserve + arc_tempreserve + anon_size; |
7174 | uint64_t spa_dirty_anon = spa_dirty_data(spa); | |
daabddaa AM |
7175 | uint64_t rarc_c = arc_warm ? arc_c : arc_c_max; |
7176 | if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 && | |
7177 | anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 && | |
dae3e9ea | 7178 | spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) { |
2fd92c3d | 7179 | #ifdef ZFS_DEBUG |
424fd7c3 TS |
7180 | uint64_t meta_esize = zfs_refcount_count( |
7181 | &arc_anon->arcs_esize[ARC_BUFC_METADATA]); | |
d3c2ae1c | 7182 | uint64_t data_esize = |
424fd7c3 | 7183 | zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]); |
34dc7c2f | 7184 | dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK " |
daabddaa | 7185 | "anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n", |
8e739b2c RE |
7186 | (u_longlong_t)arc_tempreserve >> 10, |
7187 | (u_longlong_t)meta_esize >> 10, | |
7188 | (u_longlong_t)data_esize >> 10, | |
7189 | (u_longlong_t)reserve >> 10, | |
7190 | (u_longlong_t)rarc_c >> 10); | |
2fd92c3d | 7191 | #endif |
570827e1 | 7192 | DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle); |
2e528b49 | 7193 | return (SET_ERROR(ERESTART)); |
34dc7c2f BB |
7194 | } |
7195 | atomic_add_64(&arc_tempreserve, reserve); | |
7196 | return (0); | |
7197 | } | |
7198 | ||
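/*
 * Editor's illustration (hypothetical caller; the real one lives in the
 * DMU tx-assign path): a reservation taken here must be balanced by
 * arc_tempreserve_clear() however the transaction proceeds.  As a worked
 * example of the throttle above, with illustrative values: if rarc_c is
 * 4 GiB (arc_warm set, arc_c = 4 GiB), zfs_arc_dirty_limit_percent = 50
 * and zfs_arc_anon_limit_percent = 25, a reserve pushing total_dirty past
 * 2 GiB throttles only if anonymous data alone exceeds 1 GiB and this
 * pool's net dirty data exceeds zfs_arc_pool_dirty_percent of it.
 */
#if 0
static int
example_reserve(spa_t *spa, uint64_t towrite, uint64_t txg)
{
	int err = arc_tempreserve_space(spa, towrite, txg);
	if (err != 0)
		return (err);	/* ERESTART: back off and retry the txg */
	/* ... dirty the data ... */
	arc_tempreserve_clear(towrite);
	return (0);
}
#endif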
13be560d BB |
7199 | static void |
7200 | arc_kstat_update_state(arc_state_t *state, kstat_named_t *size, | |
7201 | kstat_named_t *evict_data, kstat_named_t *evict_metadata) | |
7202 | { | |
424fd7c3 | 7203 | size->value.ui64 = zfs_refcount_count(&state->arcs_size); |
d3c2ae1c | 7204 | evict_data->value.ui64 = |
424fd7c3 | 7205 | zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]); |
d3c2ae1c | 7206 | evict_metadata->value.ui64 = |
424fd7c3 | 7207 | zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]); |
13be560d BB |
7208 | } |
7209 | ||
7210 | static int | |
7211 | arc_kstat_update(kstat_t *ksp, int rw) | |
7212 | { | |
7213 | arc_stats_t *as = ksp->ks_data; | |
7214 | ||
c4c162c1 | 7215 | if (rw == KSTAT_WRITE) |
ecb2b7dc | 7216 | return (SET_ERROR(EACCES)); |
c4c162c1 AM |
7217 | |
7218 | as->arcstat_hits.value.ui64 = | |
7219 | wmsum_value(&arc_sums.arcstat_hits); | |
7220 | as->arcstat_misses.value.ui64 = | |
7221 | wmsum_value(&arc_sums.arcstat_misses); | |
7222 | as->arcstat_demand_data_hits.value.ui64 = | |
7223 | wmsum_value(&arc_sums.arcstat_demand_data_hits); | |
7224 | as->arcstat_demand_data_misses.value.ui64 = | |
7225 | wmsum_value(&arc_sums.arcstat_demand_data_misses); | |
7226 | as->arcstat_demand_metadata_hits.value.ui64 = | |
7227 | wmsum_value(&arc_sums.arcstat_demand_metadata_hits); | |
7228 | as->arcstat_demand_metadata_misses.value.ui64 = | |
7229 | wmsum_value(&arc_sums.arcstat_demand_metadata_misses); | |
7230 | as->arcstat_prefetch_data_hits.value.ui64 = | |
7231 | wmsum_value(&arc_sums.arcstat_prefetch_data_hits); | |
7232 | as->arcstat_prefetch_data_misses.value.ui64 = | |
7233 | wmsum_value(&arc_sums.arcstat_prefetch_data_misses); | |
7234 | as->arcstat_prefetch_metadata_hits.value.ui64 = | |
7235 | wmsum_value(&arc_sums.arcstat_prefetch_metadata_hits); | |
7236 | as->arcstat_prefetch_metadata_misses.value.ui64 = | |
7237 | wmsum_value(&arc_sums.arcstat_prefetch_metadata_misses); | |
7238 | as->arcstat_mru_hits.value.ui64 = | |
7239 | wmsum_value(&arc_sums.arcstat_mru_hits); | |
7240 | as->arcstat_mru_ghost_hits.value.ui64 = | |
7241 | wmsum_value(&arc_sums.arcstat_mru_ghost_hits); | |
7242 | as->arcstat_mfu_hits.value.ui64 = | |
7243 | wmsum_value(&arc_sums.arcstat_mfu_hits); | |
7244 | as->arcstat_mfu_ghost_hits.value.ui64 = | |
7245 | wmsum_value(&arc_sums.arcstat_mfu_ghost_hits); | |
7246 | as->arcstat_deleted.value.ui64 = | |
7247 | wmsum_value(&arc_sums.arcstat_deleted); | |
7248 | as->arcstat_mutex_miss.value.ui64 = | |
7249 | wmsum_value(&arc_sums.arcstat_mutex_miss); | |
7250 | as->arcstat_access_skip.value.ui64 = | |
7251 | wmsum_value(&arc_sums.arcstat_access_skip); | |
7252 | as->arcstat_evict_skip.value.ui64 = | |
7253 | wmsum_value(&arc_sums.arcstat_evict_skip); | |
7254 | as->arcstat_evict_not_enough.value.ui64 = | |
7255 | wmsum_value(&arc_sums.arcstat_evict_not_enough); | |
7256 | as->arcstat_evict_l2_cached.value.ui64 = | |
7257 | wmsum_value(&arc_sums.arcstat_evict_l2_cached); | |
7258 | as->arcstat_evict_l2_eligible.value.ui64 = | |
7259 | wmsum_value(&arc_sums.arcstat_evict_l2_eligible); | |
7260 | as->arcstat_evict_l2_eligible_mfu.value.ui64 = | |
7261 | wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mfu); | |
7262 | as->arcstat_evict_l2_eligible_mru.value.ui64 = | |
7263 | wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mru); | |
7264 | as->arcstat_evict_l2_ineligible.value.ui64 = | |
7265 | wmsum_value(&arc_sums.arcstat_evict_l2_ineligible); | |
7266 | as->arcstat_evict_l2_skip.value.ui64 = | |
7267 | wmsum_value(&arc_sums.arcstat_evict_l2_skip); | |
7268 | as->arcstat_hash_collisions.value.ui64 = | |
7269 | wmsum_value(&arc_sums.arcstat_hash_collisions); | |
7270 | as->arcstat_hash_chains.value.ui64 = | |
7271 | wmsum_value(&arc_sums.arcstat_hash_chains); | |
7272 | as->arcstat_size.value.ui64 = | |
7273 | aggsum_value(&arc_sums.arcstat_size); | |
7274 | as->arcstat_compressed_size.value.ui64 = | |
7275 | wmsum_value(&arc_sums.arcstat_compressed_size); | |
7276 | as->arcstat_uncompressed_size.value.ui64 = | |
7277 | wmsum_value(&arc_sums.arcstat_uncompressed_size); | |
7278 | as->arcstat_overhead_size.value.ui64 = | |
7279 | wmsum_value(&arc_sums.arcstat_overhead_size); | |
7280 | as->arcstat_hdr_size.value.ui64 = | |
7281 | wmsum_value(&arc_sums.arcstat_hdr_size); | |
7282 | as->arcstat_data_size.value.ui64 = | |
7283 | wmsum_value(&arc_sums.arcstat_data_size); | |
7284 | as->arcstat_metadata_size.value.ui64 = | |
7285 | wmsum_value(&arc_sums.arcstat_metadata_size); | |
7286 | as->arcstat_dbuf_size.value.ui64 = | |
7287 | wmsum_value(&arc_sums.arcstat_dbuf_size); | |
1c2725a1 | 7288 | #if defined(COMPAT_FREEBSD11) |
c4c162c1 AM |
7289 | as->arcstat_other_size.value.ui64 = |
7290 | wmsum_value(&arc_sums.arcstat_bonus_size) + | |
7291 | aggsum_value(&arc_sums.arcstat_dnode_size) + | |
7292 | wmsum_value(&arc_sums.arcstat_dbuf_size); | |
1c2725a1 | 7293 | #endif |
37fb3e43 | 7294 | |
c4c162c1 AM |
7295 | arc_kstat_update_state(arc_anon, |
7296 | &as->arcstat_anon_size, | |
7297 | &as->arcstat_anon_evictable_data, | |
7298 | &as->arcstat_anon_evictable_metadata); | |
7299 | arc_kstat_update_state(arc_mru, | |
7300 | &as->arcstat_mru_size, | |
7301 | &as->arcstat_mru_evictable_data, | |
7302 | &as->arcstat_mru_evictable_metadata); | |
7303 | arc_kstat_update_state(arc_mru_ghost, | |
7304 | &as->arcstat_mru_ghost_size, | |
7305 | &as->arcstat_mru_ghost_evictable_data, | |
7306 | &as->arcstat_mru_ghost_evictable_metadata); | |
7307 | arc_kstat_update_state(arc_mfu, | |
7308 | &as->arcstat_mfu_size, | |
7309 | &as->arcstat_mfu_evictable_data, | |
7310 | &as->arcstat_mfu_evictable_metadata); | |
7311 | arc_kstat_update_state(arc_mfu_ghost, | |
7312 | &as->arcstat_mfu_ghost_size, | |
7313 | &as->arcstat_mfu_ghost_evictable_data, | |
7314 | &as->arcstat_mfu_ghost_evictable_metadata); | |
7315 | ||
7316 | as->arcstat_dnode_size.value.ui64 = | |
7317 | aggsum_value(&arc_sums.arcstat_dnode_size); | |
7318 | as->arcstat_bonus_size.value.ui64 = | |
7319 | wmsum_value(&arc_sums.arcstat_bonus_size); | |
7320 | as->arcstat_l2_hits.value.ui64 = | |
7321 | wmsum_value(&arc_sums.arcstat_l2_hits); | |
7322 | as->arcstat_l2_misses.value.ui64 = | |
7323 | wmsum_value(&arc_sums.arcstat_l2_misses); | |
7324 | as->arcstat_l2_prefetch_asize.value.ui64 = | |
7325 | wmsum_value(&arc_sums.arcstat_l2_prefetch_asize); | |
7326 | as->arcstat_l2_mru_asize.value.ui64 = | |
7327 | wmsum_value(&arc_sums.arcstat_l2_mru_asize); | |
7328 | as->arcstat_l2_mfu_asize.value.ui64 = | |
7329 | wmsum_value(&arc_sums.arcstat_l2_mfu_asize); | |
7330 | as->arcstat_l2_bufc_data_asize.value.ui64 = | |
7331 | wmsum_value(&arc_sums.arcstat_l2_bufc_data_asize); | |
7332 | as->arcstat_l2_bufc_metadata_asize.value.ui64 = | |
7333 | wmsum_value(&arc_sums.arcstat_l2_bufc_metadata_asize); | |
7334 | as->arcstat_l2_feeds.value.ui64 = | |
7335 | wmsum_value(&arc_sums.arcstat_l2_feeds); | |
7336 | as->arcstat_l2_rw_clash.value.ui64 = | |
7337 | wmsum_value(&arc_sums.arcstat_l2_rw_clash); | |
7338 | as->arcstat_l2_read_bytes.value.ui64 = | |
7339 | wmsum_value(&arc_sums.arcstat_l2_read_bytes); | |
7340 | as->arcstat_l2_write_bytes.value.ui64 = | |
7341 | wmsum_value(&arc_sums.arcstat_l2_write_bytes); | |
7342 | as->arcstat_l2_writes_sent.value.ui64 = | |
7343 | wmsum_value(&arc_sums.arcstat_l2_writes_sent); | |
7344 | as->arcstat_l2_writes_done.value.ui64 = | |
7345 | wmsum_value(&arc_sums.arcstat_l2_writes_done); | |
7346 | as->arcstat_l2_writes_error.value.ui64 = | |
7347 | wmsum_value(&arc_sums.arcstat_l2_writes_error); | |
7348 | as->arcstat_l2_writes_lock_retry.value.ui64 = | |
7349 | wmsum_value(&arc_sums.arcstat_l2_writes_lock_retry); | |
7350 | as->arcstat_l2_evict_lock_retry.value.ui64 = | |
7351 | wmsum_value(&arc_sums.arcstat_l2_evict_lock_retry); | |
7352 | as->arcstat_l2_evict_reading.value.ui64 = | |
7353 | wmsum_value(&arc_sums.arcstat_l2_evict_reading); | |
7354 | as->arcstat_l2_evict_l1cached.value.ui64 = | |
7355 | wmsum_value(&arc_sums.arcstat_l2_evict_l1cached); | |
7356 | as->arcstat_l2_free_on_write.value.ui64 = | |
7357 | wmsum_value(&arc_sums.arcstat_l2_free_on_write); | |
7358 | as->arcstat_l2_abort_lowmem.value.ui64 = | |
7359 | wmsum_value(&arc_sums.arcstat_l2_abort_lowmem); | |
7360 | as->arcstat_l2_cksum_bad.value.ui64 = | |
7361 | wmsum_value(&arc_sums.arcstat_l2_cksum_bad); | |
7362 | as->arcstat_l2_io_error.value.ui64 = | |
7363 | wmsum_value(&arc_sums.arcstat_l2_io_error); | |
7364 | as->arcstat_l2_lsize.value.ui64 = | |
7365 | wmsum_value(&arc_sums.arcstat_l2_lsize); | |
7366 | as->arcstat_l2_psize.value.ui64 = | |
7367 | wmsum_value(&arc_sums.arcstat_l2_psize); | |
7368 | as->arcstat_l2_hdr_size.value.ui64 = | |
7369 | aggsum_value(&arc_sums.arcstat_l2_hdr_size); | |
7370 | as->arcstat_l2_log_blk_writes.value.ui64 = | |
7371 | wmsum_value(&arc_sums.arcstat_l2_log_blk_writes); | |
7372 | as->arcstat_l2_log_blk_asize.value.ui64 = | |
7373 | wmsum_value(&arc_sums.arcstat_l2_log_blk_asize); | |
7374 | as->arcstat_l2_log_blk_count.value.ui64 = | |
7375 | wmsum_value(&arc_sums.arcstat_l2_log_blk_count); | |
7376 | as->arcstat_l2_rebuild_success.value.ui64 = | |
7377 | wmsum_value(&arc_sums.arcstat_l2_rebuild_success); | |
7378 | as->arcstat_l2_rebuild_abort_unsupported.value.ui64 = | |
7379 | wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_unsupported); | |
7380 | as->arcstat_l2_rebuild_abort_io_errors.value.ui64 = | |
7381 | wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_io_errors); | |
7382 | as->arcstat_l2_rebuild_abort_dh_errors.value.ui64 = | |
7383 | wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); | |
7384 | as->arcstat_l2_rebuild_abort_cksum_lb_errors.value.ui64 = | |
7385 | wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); | |
7386 | as->arcstat_l2_rebuild_abort_lowmem.value.ui64 = | |
7387 | wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_lowmem); | |
7388 | as->arcstat_l2_rebuild_size.value.ui64 = | |
7389 | wmsum_value(&arc_sums.arcstat_l2_rebuild_size); | |
7390 | as->arcstat_l2_rebuild_asize.value.ui64 = | |
7391 | wmsum_value(&arc_sums.arcstat_l2_rebuild_asize); | |
7392 | as->arcstat_l2_rebuild_bufs.value.ui64 = | |
7393 | wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs); | |
7394 | as->arcstat_l2_rebuild_bufs_precached.value.ui64 = | |
7395 | wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs_precached); | |
7396 | as->arcstat_l2_rebuild_log_blks.value.ui64 = | |
7397 | wmsum_value(&arc_sums.arcstat_l2_rebuild_log_blks); | |
7398 | as->arcstat_memory_throttle_count.value.ui64 = | |
7399 | wmsum_value(&arc_sums.arcstat_memory_throttle_count); | |
7400 | as->arcstat_memory_direct_count.value.ui64 = | |
7401 | wmsum_value(&arc_sums.arcstat_memory_direct_count); | |
7402 | as->arcstat_memory_indirect_count.value.ui64 = | |
7403 | wmsum_value(&arc_sums.arcstat_memory_indirect_count); | |
7404 | ||
7405 | as->arcstat_memory_all_bytes.value.ui64 = | |
7406 | arc_all_memory(); | |
7407 | as->arcstat_memory_free_bytes.value.ui64 = | |
7408 | arc_free_memory(); | |
7409 | as->arcstat_memory_available_bytes.value.i64 = | |
7410 | arc_available_memory(); | |
7411 | ||
7412 | as->arcstat_prune.value.ui64 = | |
7413 | wmsum_value(&arc_sums.arcstat_prune); | |
7414 | as->arcstat_meta_used.value.ui64 = | |
7415 | aggsum_value(&arc_sums.arcstat_meta_used); | |
7416 | as->arcstat_async_upgrade_sync.value.ui64 = | |
7417 | wmsum_value(&arc_sums.arcstat_async_upgrade_sync); | |
7418 | as->arcstat_demand_hit_predictive_prefetch.value.ui64 = | |
7419 | wmsum_value(&arc_sums.arcstat_demand_hit_predictive_prefetch); | |
7420 | as->arcstat_demand_hit_prescient_prefetch.value.ui64 = | |
7421 | wmsum_value(&arc_sums.arcstat_demand_hit_prescient_prefetch); | |
7422 | as->arcstat_raw_size.value.ui64 = | |
7423 | wmsum_value(&arc_sums.arcstat_raw_size); | |
7424 | as->arcstat_cached_only_in_progress.value.ui64 = | |
7425 | wmsum_value(&arc_sums.arcstat_cached_only_in_progress); | |
7426 | as->arcstat_abd_chunk_waste_size.value.ui64 = | |
7427 | wmsum_value(&arc_sums.arcstat_abd_chunk_waste_size); | |
13be560d BB |
7428 | |
7429 | return (0); | |
7430 | } | |
7431 | ||
ca0bf58d PS |
7432 | /* |
7433 | * This function *must* return indices evenly distributed between all | |
7434 | * sublists of the multilist. This is needed due to how the ARC eviction | |
7435 | * code is laid out; arc_evict_state() assumes ARC buffers are evenly | |
7436 | * distributed between all sublists and uses this assumption when | |
7437 | * deciding which sublist to evict from and how much to evict from it. | |
7438 | */ | |
65c7cc49 | 7439 | static unsigned int |
ca0bf58d PS |
7440 | arc_state_multilist_index_func(multilist_t *ml, void *obj) |
7441 | { | |
7442 | arc_buf_hdr_t *hdr = obj; | |
7443 | ||
7444 | /* | |
7445 | * We rely on b_dva to generate evenly distributed index | |
7446 | * numbers using buf_hash below. So, as an added precaution, | |
7447 | * let's make sure we never add empty buffers to the arc lists. | |
7448 | */ | |
d3c2ae1c | 7449 | ASSERT(!HDR_EMPTY(hdr)); |
ca0bf58d PS |
7450 | |
7451 | /* | |
7452 | * The assumption here, is the hash value for a given | |
7453 | * arc_buf_hdr_t will remain constant throughout its lifetime | |
7454 | * (i.e. its b_spa, b_dva, and b_birth fields don't change). | |
7455 | * Thus, we don't need to store the header's sublist index | |
7456 | * on insertion, as this index can be recalculated on removal. | |
7457 | * | |
7458 | * Also, the low order bits of the hash value are thought to be | |
7459 | * distributed evenly. Otherwise, in the case that the multilist | |
7460 | * has a power of two number of sublists, each sublist's usage |
5b7053a9 AM |
7461 | * would not be evenly distributed. In this context, full 64-bit |
7462 | * division would be a waste of time, so limit it to 32 bits. | |
ca0bf58d | 7463 | */ |
5b7053a9 | 7464 | return ((unsigned int)buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) % |
ca0bf58d PS |
7465 | multilist_get_num_sublists(ml)); |
7466 | } | |
7467 | ||
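/*
 * Editor's worked example: with 8 sublists, a header whose 32-bit
 * buf_hash() value is 0x2b7e1517 maps to sublist 0x2b7e1517 % 8 == 7.
 * Because b_spa, b_dva and b_birth are immutable for the header's
 * lifetime, the same index is recomputed on removal, so it never needs
 * to be stored at insertion time.
 */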
36a6e233 RM |
7468 | #define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \ |
7469 | if ((do_warn) && (tuning) && ((tuning) != (value))) { \ | |
7470 | cmn_err(CE_WARN, \ | |
7471 | "ignoring tunable %s (using %llu instead)", \ | |
5dbf6c5a | 7472 | (#tuning), (u_longlong_t)(value)); \ |
36a6e233 RM |
7473 | } \ |
7474 | } while (0) | |
7475 | ||
ca67b33a MA |
7476 | /* |
7477 | * Called during module initialization and periodically thereafter to | |
e3570464 | 7478 | * apply reasonable changes to the exposed performance tunings. Can also be |
7479 | * called explicitly by param_set_arc_*() functions when ARC tunables are | |
7480 | * updated manually. Non-zero zfs_* values which differ from the currently set | |
7481 | * values will be applied. | |
ca67b33a | 7482 | */ |
e3570464 | 7483 | void |
36a6e233 | 7484 | arc_tuning_update(boolean_t verbose) |
ca67b33a | 7485 | { |
b8a97fb1 | 7486 | uint64_t allmem = arc_all_memory(); |
7487 | unsigned long limit; | |
9edb3695 | 7488 | |
36a6e233 RM |
7489 | /* Valid range: 32M - <arc_c_max> */ |
7490 | if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) && | |
7491 | (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) && | |
7492 | (zfs_arc_min <= arc_c_max)) { | |
7493 | arc_c_min = zfs_arc_min; | |
7494 | arc_c = MAX(arc_c, arc_c_min); | |
7495 | } | |
7496 | WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose); | |
7497 | ||
ca67b33a MA |
7498 | /* Valid range: 64M - <all physical memory> */ |
7499 | if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) && | |
7403d074 | 7500 | (zfs_arc_max >= 64 << 20) && (zfs_arc_max < allmem) && |
ca67b33a MA |
7501 | (zfs_arc_max > arc_c_min)) { |
7502 | arc_c_max = zfs_arc_max; | |
17ca3018 | 7503 | arc_c = MIN(arc_c, arc_c_max); |
ca67b33a | 7504 | arc_p = (arc_c >> 1); |
b8a97fb1 | 7505 | if (arc_meta_limit > arc_c_max) |
7506 | arc_meta_limit = arc_c_max; | |
03fdcb9a MM |
7507 | if (arc_dnode_size_limit > arc_meta_limit) |
7508 | arc_dnode_size_limit = arc_meta_limit; | |
ca67b33a | 7509 | } |
36a6e233 | 7510 | WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose); |
ca67b33a MA |
7511 | |
7512 | /* Valid range: 16M - <arc_c_max> */ | |
7513 | if ((zfs_arc_meta_min) && (zfs_arc_meta_min != arc_meta_min) && | |
7514 | (zfs_arc_meta_min >= 1ULL << SPA_MAXBLOCKSHIFT) && | |
7515 | (zfs_arc_meta_min <= arc_c_max)) { | |
7516 | arc_meta_min = zfs_arc_meta_min; | |
b8a97fb1 | 7517 | if (arc_meta_limit < arc_meta_min) |
7518 | arc_meta_limit = arc_meta_min; | |
03fdcb9a MM |
7519 | if (arc_dnode_size_limit < arc_meta_min) |
7520 | arc_dnode_size_limit = arc_meta_min; | |
ca67b33a | 7521 | } |
36a6e233 | 7522 | WARN_IF_TUNING_IGNORED(zfs_arc_meta_min, arc_meta_min, verbose); |
ca67b33a MA |
7523 | |
7524 | /* Valid range: <arc_meta_min> - <arc_c_max> */ | |
b8a97fb1 | 7525 | limit = zfs_arc_meta_limit ? zfs_arc_meta_limit : |
7526 | MIN(zfs_arc_meta_limit_percent, 100) * arc_c_max / 100; | |
7527 | if ((limit != arc_meta_limit) && | |
7528 | (limit >= arc_meta_min) && | |
7529 | (limit <= arc_c_max)) | |
7530 | arc_meta_limit = limit; | |
36a6e233 | 7531 | WARN_IF_TUNING_IGNORED(zfs_arc_meta_limit, arc_meta_limit, verbose); |
b8a97fb1 | 7532 | |
7533 | /* Valid range: <arc_meta_min> - <arc_meta_limit> */ | |
7534 | limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit : | |
7535 | MIN(zfs_arc_dnode_limit_percent, 100) * arc_meta_limit / 100; | |
03fdcb9a | 7536 | if ((limit != arc_dnode_size_limit) && |
b8a97fb1 | 7537 | (limit >= arc_meta_min) && |
7538 | (limit <= arc_meta_limit)) | |
03fdcb9a | 7539 | arc_dnode_size_limit = limit; |
36a6e233 RM |
7540 | WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_size_limit, |
7541 | verbose); | |
25458cbe | 7542 | |
ca67b33a MA |
7543 | /* Valid range: 1 - N */ |
7544 | if (zfs_arc_grow_retry) | |
7545 | arc_grow_retry = zfs_arc_grow_retry; | |
7546 | ||
7547 | /* Valid range: 1 - N */ | |
7548 | if (zfs_arc_shrink_shift) { | |
7549 | arc_shrink_shift = zfs_arc_shrink_shift; | |
7550 | arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1); | |
7551 | } | |
7552 | ||
728d6ae9 BB |
7553 | /* Valid range: 1 - N */ |
7554 | if (zfs_arc_p_min_shift) | |
7555 | arc_p_min_shift = zfs_arc_p_min_shift; | |
7556 | ||
d4a72f23 TC |
7557 | /* Valid range: 1 - N ms */ |
7558 | if (zfs_arc_min_prefetch_ms) | |
7559 | arc_min_prefetch_ms = zfs_arc_min_prefetch_ms; | |
7560 | ||
7561 | /* Valid range: 1 - N ms */ | |
7562 | if (zfs_arc_min_prescient_prefetch_ms) { | |
7563 | arc_min_prescient_prefetch_ms = | |
7564 | zfs_arc_min_prescient_prefetch_ms; | |
7565 | } | |
11f552fa | 7566 | |
7e8bddd0 BB |
7567 | /* Valid range: 0 - 100 */ |
7568 | if ((zfs_arc_lotsfree_percent >= 0) && | |
7569 | (zfs_arc_lotsfree_percent <= 100)) | |
7570 | arc_lotsfree_percent = zfs_arc_lotsfree_percent; | |
36a6e233 RM |
7571 | WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent, |
7572 | verbose); | |
7e8bddd0 | 7573 | |
11f552fa BB |
7574 | /* Valid range: 0 - <all physical memory> */ |
7575 | if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free)) | |
9edb3695 | 7576 | arc_sys_free = MIN(MAX(zfs_arc_sys_free, 0), allmem); |
36a6e233 | 7577 | WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose); |
ca67b33a MA |
7578 | } |
7579 | ||
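/*
 * Editor's worked example of the clamping above (values hypothetical):
 * on a host where allmem is 16 GiB, setting zfs_arc_max=4294967296 passes
 * the 64M <= value < allmem and value > arc_c_min checks, so arc_c_max
 * drops to 4 GiB, arc_c is clamped to at most that, arc_p is reset to
 * arc_c/2, and arc_meta_limit/arc_dnode_size_limit are pulled down if
 * they now exceed the new maximum.  A request outside the valid range
 * (e.g. zfs_arc_max=16M) is left unapplied and, when verbose is set,
 * reported through WARN_IF_TUNING_IGNORED().
 */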
d3c2ae1c GW |
7580 | static void |
7581 | arc_state_init(void) | |
7582 | { | |
ffdf019c AM |
7583 | multilist_create(&arc_mru->arcs_list[ARC_BUFC_METADATA], |
7584 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7585 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7586 | arc_state_multilist_index_func); |
ffdf019c AM |
7587 | multilist_create(&arc_mru->arcs_list[ARC_BUFC_DATA], |
7588 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7589 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7590 | arc_state_multilist_index_func); |
ffdf019c AM |
7591 | multilist_create(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA], |
7592 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7593 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7594 | arc_state_multilist_index_func); |
ffdf019c AM |
7595 | multilist_create(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA], |
7596 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7597 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7598 | arc_state_multilist_index_func); |
ffdf019c AM |
7599 | multilist_create(&arc_mfu->arcs_list[ARC_BUFC_METADATA], |
7600 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7601 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7602 | arc_state_multilist_index_func); |
ffdf019c AM |
7603 | multilist_create(&arc_mfu->arcs_list[ARC_BUFC_DATA], |
7604 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7605 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7606 | arc_state_multilist_index_func); |
ffdf019c AM |
7607 | multilist_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA], |
7608 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7609 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7610 | arc_state_multilist_index_func); |
ffdf019c AM |
7611 | multilist_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA], |
7612 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7613 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7614 | arc_state_multilist_index_func); |
ffdf019c AM |
7615 | multilist_create(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA], |
7616 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7617 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7618 | arc_state_multilist_index_func); |
ffdf019c AM |
7619 | multilist_create(&arc_l2c_only->arcs_list[ARC_BUFC_DATA], |
7620 | sizeof (arc_buf_hdr_t), | |
d3c2ae1c | 7621 | offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), |
c30e58c4 | 7622 | arc_state_multilist_index_func); |
d3c2ae1c | 7623 | |
424fd7c3 TS |
7624 | zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); |
7625 | zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); | |
7626 | zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); | |
7627 | zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); | |
7628 | zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); | |
7629 | zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); | |
7630 | zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); | |
7631 | zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); | |
7632 | zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); | |
7633 | zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); | |
7634 | zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); | |
7635 | zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); | |
7636 | ||
7637 | zfs_refcount_create(&arc_anon->arcs_size); | |
7638 | zfs_refcount_create(&arc_mru->arcs_size); | |
7639 | zfs_refcount_create(&arc_mru_ghost->arcs_size); | |
7640 | zfs_refcount_create(&arc_mfu->arcs_size); | |
7641 | zfs_refcount_create(&arc_mfu_ghost->arcs_size); | |
7642 | zfs_refcount_create(&arc_l2c_only->arcs_size); | |
d3c2ae1c | 7643 | |
c4c162c1 AM |
7644 | wmsum_init(&arc_sums.arcstat_hits, 0); |
7645 | wmsum_init(&arc_sums.arcstat_misses, 0); | |
7646 | wmsum_init(&arc_sums.arcstat_demand_data_hits, 0); | |
7647 | wmsum_init(&arc_sums.arcstat_demand_data_misses, 0); | |
7648 | wmsum_init(&arc_sums.arcstat_demand_metadata_hits, 0); | |
7649 | wmsum_init(&arc_sums.arcstat_demand_metadata_misses, 0); | |
7650 | wmsum_init(&arc_sums.arcstat_prefetch_data_hits, 0); | |
7651 | wmsum_init(&arc_sums.arcstat_prefetch_data_misses, 0); | |
7652 | wmsum_init(&arc_sums.arcstat_prefetch_metadata_hits, 0); | |
7653 | wmsum_init(&arc_sums.arcstat_prefetch_metadata_misses, 0); | |
7654 | wmsum_init(&arc_sums.arcstat_mru_hits, 0); | |
7655 | wmsum_init(&arc_sums.arcstat_mru_ghost_hits, 0); | |
7656 | wmsum_init(&arc_sums.arcstat_mfu_hits, 0); | |
7657 | wmsum_init(&arc_sums.arcstat_mfu_ghost_hits, 0); | |
7658 | wmsum_init(&arc_sums.arcstat_deleted, 0); | |
7659 | wmsum_init(&arc_sums.arcstat_mutex_miss, 0); | |
7660 | wmsum_init(&arc_sums.arcstat_access_skip, 0); | |
7661 | wmsum_init(&arc_sums.arcstat_evict_skip, 0); | |
7662 | wmsum_init(&arc_sums.arcstat_evict_not_enough, 0); | |
7663 | wmsum_init(&arc_sums.arcstat_evict_l2_cached, 0); | |
7664 | wmsum_init(&arc_sums.arcstat_evict_l2_eligible, 0); | |
7665 | wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mfu, 0); | |
7666 | wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0); | |
7667 | wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0); | |
7668 | wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0); | |
7669 | wmsum_init(&arc_sums.arcstat_hash_collisions, 0); | |
7670 | wmsum_init(&arc_sums.arcstat_hash_chains, 0); | |
7671 | aggsum_init(&arc_sums.arcstat_size, 0); | |
7672 | wmsum_init(&arc_sums.arcstat_compressed_size, 0); | |
7673 | wmsum_init(&arc_sums.arcstat_uncompressed_size, 0); | |
7674 | wmsum_init(&arc_sums.arcstat_overhead_size, 0); | |
7675 | wmsum_init(&arc_sums.arcstat_hdr_size, 0); | |
7676 | wmsum_init(&arc_sums.arcstat_data_size, 0); | |
7677 | wmsum_init(&arc_sums.arcstat_metadata_size, 0); | |
7678 | wmsum_init(&arc_sums.arcstat_dbuf_size, 0); | |
7679 | aggsum_init(&arc_sums.arcstat_dnode_size, 0); | |
7680 | wmsum_init(&arc_sums.arcstat_bonus_size, 0); | |
7681 | wmsum_init(&arc_sums.arcstat_l2_hits, 0); | |
7682 | wmsum_init(&arc_sums.arcstat_l2_misses, 0); | |
7683 | wmsum_init(&arc_sums.arcstat_l2_prefetch_asize, 0); | |
7684 | wmsum_init(&arc_sums.arcstat_l2_mru_asize, 0); | |
7685 | wmsum_init(&arc_sums.arcstat_l2_mfu_asize, 0); | |
7686 | wmsum_init(&arc_sums.arcstat_l2_bufc_data_asize, 0); | |
7687 | wmsum_init(&arc_sums.arcstat_l2_bufc_metadata_asize, 0); | |
7688 | wmsum_init(&arc_sums.arcstat_l2_feeds, 0); | |
7689 | wmsum_init(&arc_sums.arcstat_l2_rw_clash, 0); | |
7690 | wmsum_init(&arc_sums.arcstat_l2_read_bytes, 0); | |
7691 | wmsum_init(&arc_sums.arcstat_l2_write_bytes, 0); | |
7692 | wmsum_init(&arc_sums.arcstat_l2_writes_sent, 0); | |
7693 | wmsum_init(&arc_sums.arcstat_l2_writes_done, 0); | |
7694 | wmsum_init(&arc_sums.arcstat_l2_writes_error, 0); | |
7695 | wmsum_init(&arc_sums.arcstat_l2_writes_lock_retry, 0); | |
7696 | wmsum_init(&arc_sums.arcstat_l2_evict_lock_retry, 0); | |
7697 | wmsum_init(&arc_sums.arcstat_l2_evict_reading, 0); | |
7698 | wmsum_init(&arc_sums.arcstat_l2_evict_l1cached, 0); | |
7699 | wmsum_init(&arc_sums.arcstat_l2_free_on_write, 0); | |
7700 | wmsum_init(&arc_sums.arcstat_l2_abort_lowmem, 0); | |
7701 | wmsum_init(&arc_sums.arcstat_l2_cksum_bad, 0); | |
7702 | wmsum_init(&arc_sums.arcstat_l2_io_error, 0); | |
7703 | wmsum_init(&arc_sums.arcstat_l2_lsize, 0); | |
7704 | wmsum_init(&arc_sums.arcstat_l2_psize, 0); | |
7705 | aggsum_init(&arc_sums.arcstat_l2_hdr_size, 0); | |
7706 | wmsum_init(&arc_sums.arcstat_l2_log_blk_writes, 0); | |
7707 | wmsum_init(&arc_sums.arcstat_l2_log_blk_asize, 0); | |
7708 | wmsum_init(&arc_sums.arcstat_l2_log_blk_count, 0); | |
7709 | wmsum_init(&arc_sums.arcstat_l2_rebuild_success, 0); | |
7710 | wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_unsupported, 0); | |
7711 | wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_io_errors, 0); | |
7712 | wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_dh_errors, 0); | |
7713 | wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors, 0); | |
7714 | wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_lowmem, 0); | |
7715 | wmsum_init(&arc_sums.arcstat_l2_rebuild_size, 0); | |
7716 | wmsum_init(&arc_sums.arcstat_l2_rebuild_asize, 0); | |
7717 | wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs, 0); | |
7718 | wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs_precached, 0); | |
7719 | wmsum_init(&arc_sums.arcstat_l2_rebuild_log_blks, 0); | |
7720 | wmsum_init(&arc_sums.arcstat_memory_throttle_count, 0); | |
7721 | wmsum_init(&arc_sums.arcstat_memory_direct_count, 0); | |
7722 | wmsum_init(&arc_sums.arcstat_memory_indirect_count, 0); | |
7723 | wmsum_init(&arc_sums.arcstat_prune, 0); | |
7724 | aggsum_init(&arc_sums.arcstat_meta_used, 0); | |
7725 | wmsum_init(&arc_sums.arcstat_async_upgrade_sync, 0); | |
7726 | wmsum_init(&arc_sums.arcstat_demand_hit_predictive_prefetch, 0); | |
7727 | wmsum_init(&arc_sums.arcstat_demand_hit_prescient_prefetch, 0); | |
7728 | wmsum_init(&arc_sums.arcstat_raw_size, 0); | |
7729 | wmsum_init(&arc_sums.arcstat_cached_only_in_progress, 0); | |
7730 | wmsum_init(&arc_sums.arcstat_abd_chunk_waste_size, 0); | |
37fb3e43 | 7731 | |
d3c2ae1c GW |
7732 | arc_anon->arcs_state = ARC_STATE_ANON; |
7733 | arc_mru->arcs_state = ARC_STATE_MRU; | |
7734 | arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST; | |
7735 | arc_mfu->arcs_state = ARC_STATE_MFU; | |
7736 | arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST; | |
7737 | arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY; | |
7738 | } | |
7739 | ||
7740 | static void | |
7741 | arc_state_fini(void) | |
7742 | { | |
424fd7c3 TS |
7743 | zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); |
7744 | zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); | |
7745 | zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); | |
7746 | zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); | |
7747 | zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); | |
7748 | zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); | |
7749 | zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); | |
7750 | zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); | |
7751 | zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); | |
7752 | zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); | |
7753 | zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); | |
7754 | zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); | |
7755 | ||
7756 | zfs_refcount_destroy(&arc_anon->arcs_size); | |
7757 | zfs_refcount_destroy(&arc_mru->arcs_size); | |
7758 | zfs_refcount_destroy(&arc_mru_ghost->arcs_size); | |
7759 | zfs_refcount_destroy(&arc_mfu->arcs_size); | |
7760 | zfs_refcount_destroy(&arc_mfu_ghost->arcs_size); | |
7761 | zfs_refcount_destroy(&arc_l2c_only->arcs_size); | |
d3c2ae1c | 7762 | |
ffdf019c AM |
7763 | multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]); |
7764 | multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); | |
7765 | multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]); | |
7766 | multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); | |
7767 | multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]); | |
7768 | multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); | |
7769 | multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]); | |
7770 | multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); | |
7771 | multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); | |
7772 | multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]); | |
37fb3e43 | 7773 | |
c4c162c1 AM |
7774 | wmsum_fini(&arc_sums.arcstat_hits); |
7775 | wmsum_fini(&arc_sums.arcstat_misses); | |
7776 | wmsum_fini(&arc_sums.arcstat_demand_data_hits); | |
7777 | wmsum_fini(&arc_sums.arcstat_demand_data_misses); | |
7778 | wmsum_fini(&arc_sums.arcstat_demand_metadata_hits); | |
7779 | wmsum_fini(&arc_sums.arcstat_demand_metadata_misses); | |
7780 | wmsum_fini(&arc_sums.arcstat_prefetch_data_hits); | |
7781 | wmsum_fini(&arc_sums.arcstat_prefetch_data_misses); | |
7782 | wmsum_fini(&arc_sums.arcstat_prefetch_metadata_hits); | |
7783 | wmsum_fini(&arc_sums.arcstat_prefetch_metadata_misses); | |
7784 | wmsum_fini(&arc_sums.arcstat_mru_hits); | |
7785 | wmsum_fini(&arc_sums.arcstat_mru_ghost_hits); | |
7786 | wmsum_fini(&arc_sums.arcstat_mfu_hits); | |
7787 | wmsum_fini(&arc_sums.arcstat_mfu_ghost_hits); | |
7788 | wmsum_fini(&arc_sums.arcstat_deleted); | |
7789 | wmsum_fini(&arc_sums.arcstat_mutex_miss); | |
7790 | wmsum_fini(&arc_sums.arcstat_access_skip); | |
7791 | wmsum_fini(&arc_sums.arcstat_evict_skip); | |
7792 | wmsum_fini(&arc_sums.arcstat_evict_not_enough); | |
7793 | wmsum_fini(&arc_sums.arcstat_evict_l2_cached); | |
7794 | wmsum_fini(&arc_sums.arcstat_evict_l2_eligible); | |
7795 | wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mfu); | |
7796 | wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru); | |
7797 | wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible); | |
7798 | wmsum_fini(&arc_sums.arcstat_evict_l2_skip); | |
7799 | wmsum_fini(&arc_sums.arcstat_hash_collisions); | |
7800 | wmsum_fini(&arc_sums.arcstat_hash_chains); | |
7801 | aggsum_fini(&arc_sums.arcstat_size); | |
7802 | wmsum_fini(&arc_sums.arcstat_compressed_size); | |
7803 | wmsum_fini(&arc_sums.arcstat_uncompressed_size); | |
7804 | wmsum_fini(&arc_sums.arcstat_overhead_size); | |
7805 | wmsum_fini(&arc_sums.arcstat_hdr_size); | |
7806 | wmsum_fini(&arc_sums.arcstat_data_size); | |
7807 | wmsum_fini(&arc_sums.arcstat_metadata_size); | |
7808 | wmsum_fini(&arc_sums.arcstat_dbuf_size); | |
7809 | aggsum_fini(&arc_sums.arcstat_dnode_size); | |
7810 | wmsum_fini(&arc_sums.arcstat_bonus_size); | |
7811 | wmsum_fini(&arc_sums.arcstat_l2_hits); | |
7812 | wmsum_fini(&arc_sums.arcstat_l2_misses); | |
7813 | wmsum_fini(&arc_sums.arcstat_l2_prefetch_asize); | |
7814 | wmsum_fini(&arc_sums.arcstat_l2_mru_asize); | |
7815 | wmsum_fini(&arc_sums.arcstat_l2_mfu_asize); | |
7816 | wmsum_fini(&arc_sums.arcstat_l2_bufc_data_asize); | |
7817 | wmsum_fini(&arc_sums.arcstat_l2_bufc_metadata_asize); | |
7818 | wmsum_fini(&arc_sums.arcstat_l2_feeds); | |
7819 | wmsum_fini(&arc_sums.arcstat_l2_rw_clash); | |
7820 | wmsum_fini(&arc_sums.arcstat_l2_read_bytes); | |
7821 | wmsum_fini(&arc_sums.arcstat_l2_write_bytes); | |
7822 | wmsum_fini(&arc_sums.arcstat_l2_writes_sent); | |
7823 | wmsum_fini(&arc_sums.arcstat_l2_writes_done); | |
7824 | wmsum_fini(&arc_sums.arcstat_l2_writes_error); | |
7825 | wmsum_fini(&arc_sums.arcstat_l2_writes_lock_retry); | |
7826 | wmsum_fini(&arc_sums.arcstat_l2_evict_lock_retry); | |
7827 | wmsum_fini(&arc_sums.arcstat_l2_evict_reading); | |
7828 | wmsum_fini(&arc_sums.arcstat_l2_evict_l1cached); | |
7829 | wmsum_fini(&arc_sums.arcstat_l2_free_on_write); | |
7830 | wmsum_fini(&arc_sums.arcstat_l2_abort_lowmem); | |
7831 | wmsum_fini(&arc_sums.arcstat_l2_cksum_bad); | |
7832 | wmsum_fini(&arc_sums.arcstat_l2_io_error); | |
7833 | wmsum_fini(&arc_sums.arcstat_l2_lsize); | |
7834 | wmsum_fini(&arc_sums.arcstat_l2_psize); | |
7835 | aggsum_fini(&arc_sums.arcstat_l2_hdr_size); | |
7836 | wmsum_fini(&arc_sums.arcstat_l2_log_blk_writes); | |
7837 | wmsum_fini(&arc_sums.arcstat_l2_log_blk_asize); | |
7838 | wmsum_fini(&arc_sums.arcstat_l2_log_blk_count); | |
7839 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_success); | |
7840 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_unsupported); | |
7841 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_io_errors); | |
7842 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); | |
7843 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); | |
7844 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_lowmem); | |
7845 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_size); | |
7846 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_asize); | |
7847 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs); | |
7848 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs_precached); | |
7849 | wmsum_fini(&arc_sums.arcstat_l2_rebuild_log_blks); | |
7850 | wmsum_fini(&arc_sums.arcstat_memory_throttle_count); | |
7851 | wmsum_fini(&arc_sums.arcstat_memory_direct_count); | |
7852 | wmsum_fini(&arc_sums.arcstat_memory_indirect_count); | |
7853 | wmsum_fini(&arc_sums.arcstat_prune); | |
7854 | aggsum_fini(&arc_sums.arcstat_meta_used); | |
7855 | wmsum_fini(&arc_sums.arcstat_async_upgrade_sync); | |
7856 | wmsum_fini(&arc_sums.arcstat_demand_hit_predictive_prefetch); | |
7857 | wmsum_fini(&arc_sums.arcstat_demand_hit_prescient_prefetch); | |
7858 | wmsum_fini(&arc_sums.arcstat_raw_size); | |
7859 | wmsum_fini(&arc_sums.arcstat_cached_only_in_progress); | |
7860 | wmsum_fini(&arc_sums.arcstat_abd_chunk_waste_size); | |
d3c2ae1c GW |
7861 | } |
7862 | ||
7863 | uint64_t | |
e71cade6 | 7864 | arc_target_bytes(void) |
d3c2ae1c | 7865 | { |
e71cade6 | 7866 | return (arc_c); |
d3c2ae1c GW |
7867 | } |
7868 | ||
60a4c7d2 PD |
7869 | void |
7870 | arc_set_limits(uint64_t allmem) | |
7871 | { | |
7872 | /* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */ | |
7873 | arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT); | |
7874 | ||
7875 | /* How to set default max varies by platform. */ | |
7876 | arc_c_max = arc_default_max(arc_c_min, allmem); | |
7877 | } | |
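/*
 * Editor's note (illustrative, not upstream code): with allmem = 8 GiB
 * and SPA_MAXBLOCKSHIFT = 24 (16 MiB maximum block size), the code above
 * yields arc_c_min = MAX(8 GiB / 32, 2 * 16 MiB) = MAX(256 MiB, 32 MiB)
 * = 256 MiB. On a 512 MiB system the 32 MiB floor wins instead.
 */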
34dc7c2f BB |
7878 | void |
7879 | arc_init(void) | |
7880 | { | |
9edb3695 | 7881 | uint64_t percent, allmem = arc_all_memory(); |
5dd92909 | 7882 | mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL); |
3442c2a0 MA |
7883 | list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t), |
7884 | offsetof(arc_evict_waiter_t, aew_node)); | |
ca0bf58d | 7885 | |
2b84817f TC |
7886 | arc_min_prefetch_ms = 1000; |
7887 | arc_min_prescient_prefetch_ms = 6000; | |
34dc7c2f | 7888 | |
c9c9c1e2 MM |
7889 | #if defined(_KERNEL) |
7890 | arc_lowmem_init(); | |
34dc7c2f BB |
7891 | #endif |
7892 | ||
60a4c7d2 | 7893 | arc_set_limits(allmem); |
9a51738b RM |
7894 | |
7895 | #ifndef _KERNEL | |
ab5cbbd1 BB |
7896 | /* |
7897 | * In userland, there's only the memory pressure that we artificially | |
7898 | * create (see arc_available_memory()). Don't let arc_c get too | |
7899 | * small, because it can cause transactions to be larger than | |
7900 | * arc_c, causing arc_tempreserve_space() to fail. | |
7901 | */ | |
0a1f8cd9 | 7902 | arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT); |
ab5cbbd1 BB |
7903 | #endif |
7904 | ||
17ca3018 | 7905 | arc_c = arc_c_min; |
34dc7c2f BB |
7906 | arc_p = (arc_c >> 1); |
7907 | ||
ca67b33a MA |
7908 | /* Set arc_meta_min to 1/2 of the smallest allowed arc_c_min (16M) */ |
7909 | arc_meta_min = 1ULL << SPA_MAXBLOCKSHIFT; | |
9907cc1c G |
7910 | /* |
7911 | * Set arc_meta_limit to a percent of arc_c_max with a floor of | |
7912 | * arc_meta_min, and a ceiling of arc_c_max. | |
7913 | */ | |
7914 | percent = MIN(zfs_arc_meta_limit_percent, 100); | |
7915 | arc_meta_limit = MAX(arc_meta_min, (percent * arc_c_max) / 100); | |
7916 | percent = MIN(zfs_arc_dnode_limit_percent, 100); | |
03fdcb9a | 7917 | arc_dnode_size_limit = (percent * arc_meta_limit) / 100; |
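/*
 * Editor's note (illustrative, assuming the module defaults
 * zfs_arc_meta_limit_percent = 75 and zfs_arc_dnode_limit_percent = 10):
 * with arc_c_max = 8 GiB, arc_meta_limit = MAX(arc_meta_min,
 * 75% of 8 GiB) = 6 GiB and arc_dnode_size_limit = 10% of 6 GiB
 * = ~614 MiB.
 */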
34dc7c2f | 7918 | |
ca67b33a | 7919 | /* Apply user specified tunings */ |
36a6e233 | 7920 | arc_tuning_update(B_TRUE); |
c52fca13 | 7921 | |
34dc7c2f BB |
7922 | /* if kmem_flags are set, let's try to use less memory */ |
7923 | if (kmem_debugging()) | |
7924 | arc_c = arc_c / 2; | |
7925 | if (arc_c < arc_c_min) | |
7926 | arc_c = arc_c_min; | |
7927 | ||
60a4c7d2 PD |
7928 | arc_register_hotplug(); |
7929 | ||
d3c2ae1c | 7930 | arc_state_init(); |
3ec34e55 | 7931 | |
34dc7c2f BB |
7932 | buf_init(); |
7933 | ||
ab26409d BB |
7934 | list_create(&arc_prune_list, sizeof (arc_prune_t), |
7935 | offsetof(arc_prune_t, p_node)); | |
ab26409d | 7936 | mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL); |
34dc7c2f | 7937 | |
60a4c7d2 PD |
7938 | arc_prune_taskq = taskq_create("arc_prune", 100, defclsyspri, |
7939 | boot_ncpus, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC | | |
7940 | TASKQ_THREADS_CPU_PCT); | |
f6046738 | 7941 | |
34dc7c2f BB |
7942 | arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED, |
7943 | sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL); | |
7944 | ||
7945 | if (arc_ksp != NULL) { | |
7946 | arc_ksp->ks_data = &arc_stats; | |
13be560d | 7947 | arc_ksp->ks_update = arc_kstat_update; |
34dc7c2f BB |
7948 | kstat_install(arc_ksp); |
7949 | } | |
7950 | ||
1531506d RM |
7951 | arc_evict_zthr = zthr_create("arc_evict", |
7952 | arc_evict_cb_check, arc_evict_cb, NULL); | |
843e9ca2 SD |
7953 | arc_reap_zthr = zthr_create_timer("arc_reap", |
7954 | arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1)); | |
34dc7c2f | 7955 | |
b128c09f | 7956 | arc_warm = B_FALSE; |
34dc7c2f | 7957 | |
e8b96c60 MA |
7958 | /* |
7959 | * Calculate maximum amount of dirty data per pool. | |
7960 | * | |
7961 | * If it has been set by a module parameter, take that. | |
7962 | * Otherwise, use a percentage of physical memory defined by | |
7963 | * zfs_dirty_data_max_percent (default 10%) with a cap at | |
e99932f7 | 7964 | * zfs_dirty_data_max_max (default 4G or 25% of physical memory). |
e8b96c60 | 7965 | */ |
47ed79ff | 7966 | #ifdef __LP64__ |
e8b96c60 | 7967 | if (zfs_dirty_data_max_max == 0) |
e99932f7 BB |
7968 | zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024, |
7969 | allmem * zfs_dirty_data_max_max_percent / 100); | |
47ed79ff MM |
7970 | #else |
7971 | if (zfs_dirty_data_max_max == 0) | |
7972 | zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024, | |
7973 | allmem * zfs_dirty_data_max_max_percent / 100); | |
7974 | #endif | |
e8b96c60 MA |
7975 | |
7976 | if (zfs_dirty_data_max == 0) { | |
9edb3695 | 7977 | zfs_dirty_data_max = allmem * |
e8b96c60 MA |
7978 | zfs_dirty_data_max_percent / 100; |
7979 | zfs_dirty_data_max = MIN(zfs_dirty_data_max, | |
7980 | zfs_dirty_data_max_max); | |
7981 | } | |
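/*
 * Editor's note (illustrative, not upstream code): on a 64-bit host with
 * allmem = 16 GiB and the defaults noted above (10% dirty, 25%/4 GiB
 * cap), zfs_dirty_data_max_max = MIN(4 GiB, 25% of 16 GiB) = 4 GiB and
 * zfs_dirty_data_max = MIN(10% of 16 GiB, 4 GiB) = 1.6 GiB.
 */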
a7bd20e3 KJ |
7982 | |
7983 | if (zfs_wrlog_data_max == 0) { | |
7984 | ||
7985 | /* | |
7986 | * dp_wrlog_total is reduced for each txg at the end of | |
7987 | * spa_sync(). However, dp_dirty_total is reduced every time | |
7988 | * a block is written out. Thus under normal operation, | |
7989 | * dp_wrlog_total could grow 2 times as big as | |
7990 | * zfs_dirty_data_max. | |
7991 | */ | |
7992 | zfs_wrlog_data_max = zfs_dirty_data_max * 2; | |
7993 | } | |
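/*
 * Editor's note (illustrative): continuing the example above, a host
 * with zfs_dirty_data_max = 1.6 GiB ends up with zfs_wrlog_data_max
 * = 3.2 GiB.
 */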
34dc7c2f BB |
7994 | } |
7995 | ||
7996 | void | |
7997 | arc_fini(void) | |
7998 | { | |
ab26409d BB |
7999 | arc_prune_t *p; |
8000 | ||
7cb67b45 | 8001 | #ifdef _KERNEL |
c9c9c1e2 | 8002 | arc_lowmem_fini(); |
7cb67b45 BB |
8003 | #endif /* _KERNEL */ |
8004 | ||
d3c2ae1c GW |
8005 | /* Use B_TRUE to ensure *all* buffers are evicted */ |
8006 | arc_flush(NULL, B_TRUE); | |
34dc7c2f | 8007 | |
34dc7c2f BB |
8008 | if (arc_ksp != NULL) { |
8009 | kstat_delete(arc_ksp); | |
8010 | arc_ksp = NULL; | |
8011 | } | |
8012 | ||
f6046738 BB |
8013 | taskq_wait(arc_prune_taskq); |
8014 | taskq_destroy(arc_prune_taskq); | |
8015 | ||
ab26409d BB |
8016 | mutex_enter(&arc_prune_mtx); |
8017 | while ((p = list_head(&arc_prune_list)) != NULL) { | |
8018 | list_remove(&arc_prune_list, p); | |
424fd7c3 TS |
8019 | zfs_refcount_remove(&p->p_refcnt, &arc_prune_list); |
8020 | zfs_refcount_destroy(&p->p_refcnt); | |
ab26409d BB |
8021 | kmem_free(p, sizeof (*p)); |
8022 | } | |
8023 | mutex_exit(&arc_prune_mtx); | |
8024 | ||
8025 | list_destroy(&arc_prune_list); | |
8026 | mutex_destroy(&arc_prune_mtx); | |
3ec34e55 | 8027 | |
5dd92909 | 8028 | (void) zthr_cancel(arc_evict_zthr); |
3ec34e55 | 8029 | (void) zthr_cancel(arc_reap_zthr); |
3ec34e55 | 8030 | |
5dd92909 | 8031 | mutex_destroy(&arc_evict_lock); |
3442c2a0 | 8032 | list_destroy(&arc_evict_waiters); |
ca0bf58d | 8033 | |
cfd59f90 BB |
8034 | /* |
8035 | * Free any buffers that were tagged for destruction. This needs | |
8036 | * to occur before arc_state_fini() runs and destroys the aggsum | |
8037 | * values which are updated when freeing scatter ABDs. | |
8038 | */ | |
8039 | l2arc_do_free_on_write(); | |
8040 | ||
ae3d8491 PD |
8041 | /* |
8042 | * buf_fini() must precede arc_state_fini() because buf_fini() may |
8043 | * trigger the release of kmem magazines, which can callback to | |
8044 | * arc_space_return() which accesses aggsums freed in arc_state_fini(). |
8045 | */ | |
34dc7c2f | 8046 | buf_fini(); |
ae3d8491 | 8047 | arc_state_fini(); |
9babb374 | 8048 | |
60a4c7d2 PD |
8049 | arc_unregister_hotplug(); |
8050 | ||
1c44a5c9 SD |
8051 | /* |
8052 | * We destroy the zthrs after all the ARC state has been | |
8053 | * torn down to avoid the case of them receiving any | |
8054 | * wakeup() signals after they are destroyed. | |
8055 | */ | |
5dd92909 | 8056 | zthr_destroy(arc_evict_zthr); |
1c44a5c9 SD |
8057 | zthr_destroy(arc_reap_zthr); |
8058 | ||
b9541d6b | 8059 | ASSERT0(arc_loaned_bytes); |
34dc7c2f BB |
8060 | } |
8061 | ||
8062 | /* | |
8063 | * Level 2 ARC | |
8064 | * | |
8065 | * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk. | |
8066 | * It uses dedicated storage devices to hold cached data, which are populated | |
8067 | * using large infrequent writes. The main role of this cache is to boost | |
8068 | * the performance of random read workloads. The intended L2ARC devices | |
8069 | * include short-stroked disks, solid state disks, and other media with | |
8070 | * substantially faster read latency than disk. | |
8071 | * | |
8072 | * +-----------------------+ | |
8073 | * | ARC | | |
8074 | * +-----------------------+ | |
8075 | * | ^ ^ | |
8076 | * | | | | |
8077 | * l2arc_feed_thread() arc_read() | |
8078 | * | | | | |
8079 | * | l2arc read | | |
8080 | * V | | | |
8081 | * +---------------+ | | |
8082 | * | L2ARC | | | |
8083 | * +---------------+ | | |
8084 | * | ^ | | |
8085 | * l2arc_write() | | | |
8086 | * | | | | |
8087 | * V | | | |
8088 | * +-------+ +-------+ | |
8089 | * | vdev | | vdev | | |
8090 | * | cache | | cache | | |
8091 | * +-------+ +-------+ | |
8092 | * +=========+ .-----. | |
8093 | * : L2ARC : |-_____-| | |
8094 | * : devices : | Disks | | |
8095 | * +=========+ `-_____-' | |
8096 | * | |
8097 | * Read requests are satisfied from the following sources, in order: | |
8098 | * | |
8099 | * 1) ARC | |
8100 | * 2) vdev cache of L2ARC devices | |
8101 | * 3) L2ARC devices | |
8102 | * 4) vdev cache of disks | |
8103 | * 5) disks | |
8104 | * | |
8105 | * Some L2ARC device types exhibit extremely slow write performance. | |
8106 | * To accommodate this there are some significant differences between |
8107 | * the L2ARC and traditional cache design: | |
8108 | * | |
8109 | * 1. There is no eviction path from the ARC to the L2ARC. Evictions from | |
8110 | * the ARC behave as usual, freeing buffers and placing headers on ghost | |
8111 | * lists. The ARC does not send buffers to the L2ARC during eviction as | |
8112 | * this would inflate write latencies whenever the ARC is under memory pressure. |
8113 | * | |
8114 | * 2. The L2ARC attempts to cache data from the ARC before it is evicted. | |
8115 | * It does this by periodically scanning buffers from the eviction-end of | |
8116 | * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are | |
3a17a7a9 SK |
8117 | * not already there. It scans until a headroom of buffers is satisfied; |
8118 | * this headroom acts as a cushion ahead of ARC eviction. If a compressible buffer is |
8119 | * found during scanning and selected for writing to an L2ARC device, we | |
8120 | * temporarily boost scanning headroom during the next scan cycle to make | |
8121 | * sure we adapt to compression effects (which might significantly reduce | |
8122 | * the data volume we write to L2ARC). The thread that does this is | |
34dc7c2f BB |
8123 | * l2arc_feed_thread(), illustrated below; example sizes are included to |
8124 | * provide a better sense of ratio than this diagram: | |
8125 | * | |
8126 | * head --> tail | |
8127 | * +---------------------+----------+ | |
8128 | * ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->. # already on L2ARC | |
8129 | * +---------------------+----------+ | o L2ARC eligible | |
8130 | * ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer | |
8131 | * +---------------------+----------+ | | |
8132 | * 15.9 Gbytes ^ 32 Mbytes | | |
8133 | * headroom | | |
8134 | * l2arc_feed_thread() | |
8135 | * | | |
8136 | * l2arc write hand <--[oooo]--' | |
8137 | * | 8 Mbyte | |
8138 | * | write max | |
8139 | * V | |
8140 | * +==============================+ | |
8141 | * L2ARC dev |####|#|###|###| |####| ... | | |
8142 | * +==============================+ | |
8143 | * 32 Gbytes | |
8144 | * | |
8145 | * 3. If an ARC buffer is copied to the L2ARC but then hit instead of | |
8146 | * evicted, then the L2ARC has cached a buffer much sooner than it probably | |
8147 | * needed to, potentially wasting L2ARC device bandwidth and storage. It is | |
8148 | * safe to say that this is an uncommon case, since buffers at the end of | |
8149 | * the ARC lists have moved there due to inactivity. | |
8150 | * | |
8151 | * 4. If the ARC evicts faster than the L2ARC can maintain a headroom, | |
8152 | * then the L2ARC simply misses copying some buffers. This serves as a | |
8153 | * pressure valve to prevent heavy read workloads from both stalling the ARC | |
8154 | * with waits and clogging the L2ARC with writes. This also helps prevent | |
8155 | * the potential for the L2ARC to churn if it attempts to cache content too | |
8156 | * quickly, such as during backups of the entire pool. | |
8157 | * | |
b128c09f BB |
8158 | * 5. After system boot and before the ARC has filled main memory, there are |
8159 | * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru | |
8160 | * lists can remain mostly static. Instead of searching from tail of these | |
8161 | * lists as pictured, the l2arc_feed_thread() will search from the list heads | |
8162 | * for eligible buffers, greatly increasing its chance of finding them. | |
8163 | * | |
8164 | * The L2ARC device write speed is also boosted during this time so that | |
8165 | * the L2ARC warms up faster. Since there have been no ARC evictions yet, | |
8166 | * there are no L2ARC reads, and no fear of degrading read performance | |
8167 | * through increased writes. | |
8168 | * | |
8169 | * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that | |
34dc7c2f BB |
8170 | * the vdev queue can aggregate them into larger and fewer writes. Each |
8171 | * device is written to in a rotor fashion, sweeping writes through | |
8172 | * available space then repeating. | |
8173 | * | |
b128c09f | 8174 | * 7. The L2ARC does not store dirty content. It never needs to flush |
34dc7c2f BB |
8175 | * write buffers back to disk based storage. |
8176 | * | |
b128c09f | 8177 | * 8. If an ARC buffer is written (and dirtied) which also exists in the |
34dc7c2f BB |
8178 | * L2ARC, the now stale L2ARC buffer is immediately dropped. |
8179 | * | |
8180 | * The performance of the L2ARC can be tweaked by a number of tunables, which | |
8181 | * may be necessary for different workloads: | |
8182 | * | |
8183 | * l2arc_write_max max write bytes per interval | |
b128c09f | 8184 | * l2arc_write_boost extra write bytes during device warmup |
34dc7c2f BB |
8185 | * l2arc_noprefetch skip caching prefetched buffers |
8186 | * l2arc_headroom number of max device writes to precache | |
3a17a7a9 SK |
8187 | * l2arc_headroom_boost when we find compressed buffers during ARC |
8188 | * scanning, we multiply headroom by this | |
8189 | * percentage factor for the next scan cycle, | |
8190 | * since more compressed buffers are likely to | |
8191 | * be present | |
34dc7c2f BB |
8192 | * l2arc_feed_secs seconds between L2ARC writing |
8193 | * | |
8194 | * Tunables may be removed or added as future performance improvements are | |
8195 | * integrated, and also may become zpool properties. | |
d164b209 BB |
8196 | * |
8197 | * There are three key functions that control how the L2ARC warms up: | |
8198 | * | |
8199 | * l2arc_write_eligible() check if a buffer is eligible to cache | |
8200 | * l2arc_write_size() calculate how much to write | |
8201 | * l2arc_write_interval() calculate sleep delay between writes | |
8202 | * | |
8203 | * These three functions determine what to write, how much, and how quickly | |
8204 | * to send writes. | |
77f6826b GA |
8205 | * |
8206 | * L2ARC persistence: | |
8207 | * | |
8208 | * When writing buffers to L2ARC, we periodically add some metadata to | |
8209 | * make sure we can pick them up after reboot, thus dramatically reducing | |
8210 | * the impact that any downtime has on the performance of storage systems | |
8211 | * with large caches. | |
8212 | * | |
8213 | * The implementation is fairly simple and consists of the following two |
8214 | * modifications: | |
8215 | * | |
8216 | * *) When writing to the L2ARC, we occasionally write an "l2arc log block", |
8217 | * which is an additional piece of metadata which describes what's been | |
8218 | * written. This allows us to rebuild the arc_buf_hdr_t structures of the | |
8219 | * main ARC buffers. There are 2 linked-lists of log blocks headed by | |
8220 | * dh_start_lbps[2]. We alternate which chain we append to, so they are | |
8221 | * time-wise and offset-wise interleaved, but that is an optimization rather | |
8222 | * than a correctness requirement. The log block also includes a pointer to the |
8223 | * previous block in its chain. | |
8224 | * | |
8225 | * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device | |
8226 | * for our header bookkeeping purposes. This contains a device header, | |
8227 | * which contains our top-level reference structures. We update it each | |
8228 | * time we write a new log block, so that we're able to locate it in the | |
8229 | * L2ARC device. If this write results in an inconsistent device header | |
8230 | * (e.g. due to power failure), we detect this by verifying the header's | |
8231 | * checksum and simply fail to reconstruct the L2ARC after reboot. | |
8232 | * | |
8233 | * Implementation diagram: | |
8234 | * | |
8235 | * +=== L2ARC device (not to scale) ======================================+ | |
8236 | * | ___two newest log block pointers__.__________ | | |
8237 | * | / \dh_start_lbps[1] | | |
8238 | * | / \ \dh_start_lbps[0]| | |
8239 | * |.___/__. V V | | |
8240 | * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---| | |
8241 | * || hdr| ^ /^ /^ / / | | |
8242 | * |+------+ ...--\-------/ \-----/--\------/ / | | |
8243 | * | \--------------/ \--------------/ | | |
8244 | * +======================================================================+ | |
8245 | * | |
8246 | * As can be seen on the diagram, rather than using a simple linked list, | |
8247 | * we use a pair of linked lists with alternating elements. This is a | |
8248 | * performance enhancement: we only learn the address of the next log |
8249 | * block once the current block has been completely read in, so a single |
8250 | * list would keep the device's I/O queue at only one operation deep, |
8251 | * incurring a large amount of I/O round-trip latency. Having two lists |
8252 | * allows us to fetch two log blocks ahead of where we are currently |
8253 | * rebuilding L2ARC buffers (a minimal model of this walk is sketched |
8254 | * after this comment block). |
8255 | * | |
8256 | * On-device data structures: | |
8257 | * | |
8258 | * L2ARC device header: l2arc_dev_hdr_phys_t | |
8259 | * L2ARC log block: l2arc_log_blk_phys_t | |
8260 | * | |
8261 | * L2ARC reconstruction: | |
8262 | * | |
8263 | * When writing data, we simply write in the standard rotary fashion, | |
8264 | * evicting buffers as we go and writing new data over them (writing |
8265 | * a new log block every now and then). This obviously means that once we | |
8266 | * loop around the end of the device, we will start cutting into an already | |
8267 | * committed log block (and its referenced data buffers), like so: | |
8268 | * | |
8269 | * current write head__ __old tail | |
8270 | * \ / | |
8271 | * V V | |
8272 | * <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |--> | |
8273 | * ^ ^^^^^^^^^___________________________________ | |
8274 | * | \ | |
8275 | * <<nextwrite>> may overwrite this blk and/or its bufs --' | |
8276 | * | |
8277 | * When importing the pool, we detect this situation and use it to stop | |
8278 | * our scanning process (see l2arc_rebuild). | |
8279 | * | |
8280 | * There is one significant caveat to consider when rebuilding ARC contents | |
8281 | * from an L2ARC device: what about invalidated buffers? Given the above | |
8282 | * construction, we cannot update blocks which we've already written to amend | |
8283 | * them to remove buffers which were invalidated. Thus, during reconstruction, | |
8284 | * we might be populating the cache with buffers for data that's not on the | |
8285 | * main pool anymore, or may have been overwritten! | |
8286 | * | |
8287 | * As it turns out, this isn't a problem. Every arc_read request includes | |
8288 | * both the DVA and, crucially, the birth TXG of the BP the caller is | |
8289 | * looking for. So even if the cache were populated by completely rotten | |
8290 | * blocks for data that had been long deleted and/or overwritten, we'll | |
8291 | * never actually return bad data from the cache, since the DVA with the | |
8292 | * birth TXG uniquely identifies a block in space and time - once created, |
8293 | * a block is immutable on disk. The worst we have done is waste |
8294 | * some time and memory at l2arc rebuild reconstructing outdated ARC |
8295 | * entries that will get dropped from the l2arc as it is being updated | |
8296 | * with new blocks. | |
8297 | * | |
8298 | * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write | |
8299 | * hand are not restored. This is done by saving the offset (in bytes) | |
8300 | * l2arc_evict() has evicted to in the L2ARC device header and taking it | |
8301 | * into account when restoring buffers. | |
34dc7c2f BB |
8302 | */ |
8303 | ||
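/*
 * Editor's sketch (not part of arc.c): a minimal userland model of the
 * two-chain log block walk described above. The lb_t type, the in-memory
 * "device" array, and read_lb() are invented stand-ins for
 * l2arc_log_blk_phys_t, the cache device, and the real I/O path; only the
 * chain-walking logic mirrors the design. In the kernel, the read of
 * lbps[1] can be issued asynchronously while lbps[0] is processed, which
 * is the point of keeping two chains.
 */
#include <stdio.h>

#define	NO_BLK	(-1)

typedef struct lb {
	int lb_id;	/* payload: which log block this is */
	int lb_prev;	/* "pointer" to the previous block in this chain */
} lb_t;

/* Six committed log blocks; even/odd ids alternate between the chains. */
static const lb_t device[] = {
	{ 0, NO_BLK }, { 1, NO_BLK }, { 2, 0 }, { 3, 1 }, { 4, 2 }, { 5, 3 },
};

static int
read_lb(int addr, lb_t *out)	/* stands in for one device read */
{
	if (addr == NO_BLK)
		return (-1);
	*out = device[addr];
	return (0);
}

int
main(void)
{
	/* Models dh_start_lbps[2]: the two newest blocks, one per chain. */
	int lbps[2] = { 5, 4 };
	lb_t this_lb, next_lb;

	if (read_lb(lbps[0], &this_lb) != 0)
		return (0);
	for (;;) {
		/* Block N+1 can be in flight while block N is processed. */
		int have_next = (read_lb(lbps[1], &next_lb) == 0);

		printf("restoring buffers of log block %d\n", this_lb.lb_id);
		if (!have_next)
			break;
		lbps[0] = lbps[1];
		lbps[1] = this_lb.lb_prev;	/* predecessor in our chain */
		this_lb = next_lb;
	}
	return (0);
}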
d164b209 | 8304 | static boolean_t |
2a432414 | 8305 | l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr) |
d164b209 BB |
8306 | { |
8307 | /* | |
8308 | * A buffer is *not* eligible for the L2ARC if it: | |
8309 | * 1. belongs to a different spa. | |
428870ff BB |
8310 | * 2. is already cached on the L2ARC. |
8311 | * 3. has an I/O in progress (it may be an incomplete read). | |
8312 | * 4. is flagged not eligible (zfs property). | |
d164b209 | 8313 | */ |
b9541d6b | 8314 | if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) || |
c6f5e9d9 | 8315 | HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr)) |
d164b209 BB |
8316 | return (B_FALSE); |
8317 | ||
8318 | return (B_TRUE); | |
8319 | } | |
8320 | ||
8321 | static uint64_t | |
37c22948 | 8322 | l2arc_write_size(l2arc_dev_t *dev) |
d164b209 | 8323 | { |
b7654bd7 | 8324 | uint64_t size, dev_size, tsize; |
d164b209 | 8325 | |
3a17a7a9 SK |
8326 | /* |
8327 | * Make sure our globals have meaningful values in case the user | |
8328 | * altered them. | |
8329 | */ | |
8330 | size = l2arc_write_max; | |
8331 | if (size == 0) { | |
8332 | cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must " | |
8333 | "be greater than zero, resetting it to the default (%d)", | |
8334 | L2ARC_WRITE_SIZE); | |
8335 | size = l2arc_write_max = L2ARC_WRITE_SIZE; | |
8336 | } | |
d164b209 BB |
8337 | |
8338 | if (arc_warm == B_FALSE) | |
3a17a7a9 | 8339 | size += l2arc_write_boost; |
d164b209 | 8340 | |
37c22948 GA |
8341 | /* |
8342 | * Make sure the write size does not exceed the size of the cache | |
8343 | * device. This is important in l2arc_evict(); otherwise infinite |
8344 | * iteration can occur. | |
8345 | */ | |
8346 | dev_size = dev->l2ad_end - dev->l2ad_start; | |
b7654bd7 GA |
8347 | tsize = size + l2arc_log_blk_overhead(size, dev); |
8348 | if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) | |
8349 | tsize += MAX(64 * 1024 * 1024, | |
8350 | (tsize * l2arc_trim_ahead) / 100); | |
8351 | ||
8352 | if (tsize >= dev_size) { | |
37c22948 | 8353 | cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost " |
77f6826b GA |
8354 | "plus the overhead of log blocks (persistent L2ARC, " |
8355 | "%llu bytes) exceeds the size of the cache device " | |
8356 | "(guid %llu), resetting them to the default (%d)", | |
8357 | l2arc_log_blk_overhead(size, dev), | |
37c22948 GA |
8358 | dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE); |
8359 | size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE; | |
8360 | ||
8361 | if (arc_warm == B_FALSE) | |
8362 | size += l2arc_write_boost; | |
8363 | } | |
8364 | ||
d164b209 BB |
8365 | return (size); |
8366 | ||
8367 | } | |
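/*
 * Editor's note (illustrative, assuming the defaults l2arc_write_max =
 * 8 MiB and l2arc_write_boost = 8 MiB): before the ARC has warmed up
 * (arc_warm == B_FALSE), each feed may write up to 8 + 8 = 16 MiB plus
 * log block overhead; the sanity check above resets both tunables to
 * L2ARC_WRITE_SIZE if that total (plus any TRIM-ahead reservation)
 * would not fit on the cache device.
 */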
8368 | ||
8369 | static clock_t | |
8370 | l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote) | |
8371 | { | |
428870ff | 8372 | clock_t interval, next, now; |
d164b209 BB |
8373 | |
8374 | /* | |
8375 | * If the ARC lists are busy, increase our write rate; if the | |
8376 | * lists are stale, idle back. This is achieved by checking | |
8377 | * how much we previously wrote - if it was more than half of | |
8378 | * what we wanted, schedule the next write much sooner. | |
8379 | */ | |
8380 | if (l2arc_feed_again && wrote > (wanted / 2)) | |
8381 | interval = (hz * l2arc_feed_min_ms) / 1000; | |
8382 | else | |
8383 | interval = hz * l2arc_feed_secs; | |
8384 | ||
428870ff BB |
8385 | now = ddi_get_lbolt(); |
8386 | next = MAX(now, MIN(now + interval, began + interval)); | |
d164b209 BB |
8387 | |
8388 | return (next); | |
8389 | } | |
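/*
 * Editor's note (illustrative, assuming the defaults l2arc_feed_secs = 1
 * and l2arc_feed_min_ms = 200): if the previous pass wanted 8 MiB and
 * wrote 6 MiB (more than half), the next feed is scheduled ~200 ms out;
 * if it wrote only 2 MiB, the thread idles back to the full 1 s interval.
 */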
8390 | ||
34dc7c2f BB |
8391 | /* |
8392 | * Cycle through L2ARC devices. This is how L2ARC load balances. | |
b128c09f | 8393 | * If a device is returned, this also returns holding the spa config lock. |
34dc7c2f BB |
8394 | */ |
8395 | static l2arc_dev_t * | |
8396 | l2arc_dev_get_next(void) | |
8397 | { | |
b128c09f | 8398 | l2arc_dev_t *first, *next = NULL; |
34dc7c2f | 8399 | |
b128c09f BB |
8400 | /* |
8401 | * Lock out the removal of spas (spa_namespace_lock), then removal | |
8402 | * of cache devices (l2arc_dev_mtx). Once a device has been selected, | |
8403 | * both locks will be dropped and a spa config lock held instead. | |
8404 | */ | |
8405 | mutex_enter(&spa_namespace_lock); | |
8406 | mutex_enter(&l2arc_dev_mtx); | |
8407 | ||
8408 | /* if there are no vdevs, there is nothing to do */ | |
8409 | if (l2arc_ndev == 0) | |
8410 | goto out; | |
8411 | ||
8412 | first = NULL; | |
8413 | next = l2arc_dev_last; | |
8414 | do { | |
8415 | /* loop around the list looking for a non-faulted vdev */ | |
8416 | if (next == NULL) { | |
34dc7c2f | 8417 | next = list_head(l2arc_dev_list); |
b128c09f BB |
8418 | } else { |
8419 | next = list_next(l2arc_dev_list, next); | |
8420 | if (next == NULL) | |
8421 | next = list_head(l2arc_dev_list); | |
8422 | } | |
8423 | ||
8424 | /* if we have come back to the start, bail out */ | |
8425 | if (first == NULL) | |
8426 | first = next; | |
8427 | else if (next == first) | |
8428 | break; | |
8429 | ||
b7654bd7 GA |
8430 | } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || |
8431 | next->l2ad_trim_all); | |
b128c09f BB |
8432 | |
8433 | /* if we were unable to find any usable vdevs, return NULL */ | |
b7654bd7 GA |
8434 | if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || |
8435 | next->l2ad_trim_all) | |
b128c09f | 8436 | next = NULL; |
34dc7c2f BB |
8437 | |
8438 | l2arc_dev_last = next; | |
8439 | ||
b128c09f BB |
8440 | out: |
8441 | mutex_exit(&l2arc_dev_mtx); | |
8442 | ||
8443 | /* | |
8444 | * Grab the config lock to prevent the 'next' device from being | |
8445 | * removed while we are writing to it. | |
8446 | */ | |
8447 | if (next != NULL) | |
8448 | spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); | |
8449 | mutex_exit(&spa_namespace_lock); | |
8450 | ||
34dc7c2f BB |
8451 | return (next); |
8452 | } | |
8453 | ||
b128c09f BB |
8454 | /* |
8455 | * Free buffers that were tagged for destruction. | |
8456 | */ | |
8457 | static void | |
0bc8fd78 | 8458 | l2arc_do_free_on_write(void) |
b128c09f BB |
8459 | { |
8460 | list_t *buflist; | |
8461 | l2arc_data_free_t *df, *df_prev; | |
8462 | ||
8463 | mutex_enter(&l2arc_free_on_write_mtx); | |
8464 | buflist = l2arc_free_on_write; | |
8465 | ||
8466 | for (df = list_tail(buflist); df; df = df_prev) { | |
8467 | df_prev = list_prev(buflist, df); | |
a6255b7f DQ |
8468 | ASSERT3P(df->l2df_abd, !=, NULL); |
8469 | abd_free(df->l2df_abd); | |
b128c09f BB |
8470 | list_remove(buflist, df); |
8471 | kmem_free(df, sizeof (l2arc_data_free_t)); | |
8472 | } | |
8473 | ||
8474 | mutex_exit(&l2arc_free_on_write_mtx); | |
8475 | } | |
8476 | ||
34dc7c2f BB |
8477 | /* |
8478 | * A write to a cache device has completed. Update all headers to allow | |
8479 | * reads from these buffers to begin. | |
8480 | */ | |
8481 | static void | |
8482 | l2arc_write_done(zio_t *zio) | |
8483 | { | |
77f6826b GA |
8484 | l2arc_write_callback_t *cb; |
8485 | l2arc_lb_abd_buf_t *abd_buf; | |
8486 | l2arc_lb_ptr_buf_t *lb_ptr_buf; | |
8487 | l2arc_dev_t *dev; | |
657fd33b | 8488 | l2arc_dev_hdr_phys_t *l2dhdr; |
77f6826b GA |
8489 | list_t *buflist; |
8490 | arc_buf_hdr_t *head, *hdr, *hdr_prev; | |
8491 | kmutex_t *hash_lock; | |
8492 | int64_t bytes_dropped = 0; | |
34dc7c2f BB |
8493 | |
8494 | cb = zio->io_private; | |
d3c2ae1c | 8495 | ASSERT3P(cb, !=, NULL); |
34dc7c2f | 8496 | dev = cb->l2wcb_dev; |
657fd33b | 8497 | l2dhdr = dev->l2ad_dev_hdr; |
d3c2ae1c | 8498 | ASSERT3P(dev, !=, NULL); |
34dc7c2f | 8499 | head = cb->l2wcb_head; |
d3c2ae1c | 8500 | ASSERT3P(head, !=, NULL); |
b9541d6b | 8501 | buflist = &dev->l2ad_buflist; |
d3c2ae1c | 8502 | ASSERT3P(buflist, !=, NULL); |
34dc7c2f BB |
8503 | DTRACE_PROBE2(l2arc__iodone, zio_t *, zio, |
8504 | l2arc_write_callback_t *, cb); | |
8505 | ||
34dc7c2f BB |
8506 | /* |
8507 | * All writes completed, or an error was hit. | |
8508 | */ | |
ca0bf58d PS |
8509 | top: |
8510 | mutex_enter(&dev->l2ad_mtx); | |
2a432414 GW |
8511 | for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) { |
8512 | hdr_prev = list_prev(buflist, hdr); | |
34dc7c2f | 8513 | |
2a432414 | 8514 | hash_lock = HDR_LOCK(hdr); |
ca0bf58d PS |
8515 | |
8516 | /* | |
8517 | * We cannot use mutex_enter or else we can deadlock | |
8518 | * with l2arc_write_buffers (due to swapping the order |
8519 | * in which the hash lock and l2ad_mtx are taken). |
8520 | */ | |
34dc7c2f BB |
8521 | if (!mutex_tryenter(hash_lock)) { |
8522 | /* | |
ca0bf58d PS |
8523 | * Missed the hash lock. We must retry so we |
8524 | * don't leave the ARC_FLAG_L2_WRITING bit set. | |
34dc7c2f | 8525 | */ |
ca0bf58d PS |
8526 | ARCSTAT_BUMP(arcstat_l2_writes_lock_retry); |
8527 | ||
8528 | /* | |
8529 | * We don't want to rescan the headers we've | |
8530 | * already marked as having been written out, so | |
8531 | * we reinsert the head node so we can pick up | |
8532 | * where we left off. | |
8533 | */ | |
8534 | list_remove(buflist, head); | |
8535 | list_insert_after(buflist, hdr, head); | |
8536 | ||
8537 | mutex_exit(&dev->l2ad_mtx); | |
8538 | ||
8539 | /* | |
8540 | * We wait for the hash lock to become available | |
8541 | * to try and prevent busy waiting, and increase | |
8542 | * the chance we'll be able to acquire the lock | |
8543 | * the next time around. | |
8544 | */ | |
8545 | mutex_enter(hash_lock); | |
8546 | mutex_exit(hash_lock); | |
8547 | goto top; | |
34dc7c2f BB |
8548 | } |
8549 | ||
b9541d6b | 8550 | /* |
ca0bf58d PS |
8551 | * We could not have been moved into the arc_l2c_only |
8552 | * state while in-flight due to our ARC_FLAG_L2_WRITING | |
8553 | * bit being set. Let's just ensure that's being enforced. | |
8554 | */ | |
8555 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
8556 | ||
8a09d5fd BB |
8557 | /* |
8558 | * Skipped - drop L2ARC entry and mark the header as no | |
8559 | * longer L2 eligible. |
8560 | */ | |
d3c2ae1c | 8561 | if (zio->io_error != 0) { |
34dc7c2f | 8562 | /* |
b128c09f | 8563 | * Error - drop L2ARC entry. |
34dc7c2f | 8564 | */ |
2a432414 | 8565 | list_remove(buflist, hdr); |
d3c2ae1c | 8566 | arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); |
b9541d6b | 8567 | |
7558997d | 8568 | uint64_t psize = HDR_GET_PSIZE(hdr); |
08532162 | 8569 | l2arc_hdr_arcstats_decrement(hdr); |
d962d5da | 8570 | |
7558997d SD |
8571 | bytes_dropped += |
8572 | vdev_psize_to_asize(dev->l2ad_vdev, psize); | |
424fd7c3 | 8573 | (void) zfs_refcount_remove_many(&dev->l2ad_alloc, |
d3c2ae1c | 8574 | arc_hdr_size(hdr), hdr); |
34dc7c2f BB |
8575 | } |
8576 | ||
8577 | /* | |
ca0bf58d PS |
8578 | * Allow ARC to begin reads and ghost list evictions to |
8579 | * this L2ARC entry. | |
34dc7c2f | 8580 | */ |
d3c2ae1c | 8581 | arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING); |
34dc7c2f BB |
8582 | |
8583 | mutex_exit(hash_lock); | |
8584 | } | |
8585 | ||
77f6826b GA |
8586 | /* |
8587 | * Free the allocated abd buffers for writing the log blocks. | |
8588 | * If the zio failed reclaim the allocated space and remove the | |
8589 | * pointers to these log blocks from the log block pointer list | |
8590 | * of the L2ARC device. | |
8591 | */ | |
8592 | while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) { | |
8593 | abd_free(abd_buf->abd); | |
8594 | zio_buf_free(abd_buf, sizeof (*abd_buf)); | |
8595 | if (zio->io_error != 0) { | |
8596 | lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list); | |
657fd33b GA |
8597 | /* |
8598 | * L2BLK_GET_PSIZE returns aligned size for log | |
8599 | * blocks. | |
8600 | */ | |
8601 | uint64_t asize = | |
77f6826b | 8602 | L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop); |
657fd33b GA |
8603 | bytes_dropped += asize; |
8604 | ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); | |
8605 | ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); | |
8606 | zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, | |
8607 | lb_ptr_buf); | |
8608 | zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); | |
77f6826b GA |
8609 | kmem_free(lb_ptr_buf->lb_ptr, |
8610 | sizeof (l2arc_log_blkptr_t)); | |
8611 | kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); | |
8612 | } | |
8613 | } | |
8614 | list_destroy(&cb->l2wcb_abd_list); | |
8615 | ||
657fd33b | 8616 | if (zio->io_error != 0) { |
08532162 GA |
8617 | ARCSTAT_BUMP(arcstat_l2_writes_error); |
8618 | ||
2054f35e GA |
8619 | /* |
8620 | * Restore the lbps array in the header to its previous state. | |
8621 | * If the list of log block pointers is empty, zero out the | |
8622 | * log block pointers in the device header. | |
8623 | */ | |
657fd33b GA |
8624 | lb_ptr_buf = list_head(&dev->l2ad_lbptr_list); |
8625 | for (int i = 0; i < 2; i++) { | |
2054f35e GA |
8626 | if (lb_ptr_buf == NULL) { |
8627 | /* | |
8628 | * If the list is empty zero out the device | |
8629 | * header. Otherwise zero out the second log | |
8630 | * block pointer in the header. | |
8631 | */ | |
8632 | if (i == 0) { | |
8633 | bzero(l2dhdr, dev->l2ad_dev_hdr_asize); | |
8634 | } else { | |
8635 | bzero(&l2dhdr->dh_start_lbps[i], | |
8636 | sizeof (l2arc_log_blkptr_t)); | |
8637 | } | |
8638 | break; | |
8639 | } | |
657fd33b GA |
8640 | bcopy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[i], |
8641 | sizeof (l2arc_log_blkptr_t)); | |
8642 | lb_ptr_buf = list_next(&dev->l2ad_lbptr_list, | |
8643 | lb_ptr_buf); | |
8644 | } | |
8645 | } | |
8646 | ||
c4c162c1 | 8647 | ARCSTAT_BUMP(arcstat_l2_writes_done); |
34dc7c2f | 8648 | list_remove(buflist, head); |
b9541d6b CW |
8649 | ASSERT(!HDR_HAS_L1HDR(head)); |
8650 | kmem_cache_free(hdr_l2only_cache, head); | |
8651 | mutex_exit(&dev->l2ad_mtx); | |
34dc7c2f | 8652 | |
77f6826b | 8653 | ASSERT(dev->l2ad_vdev != NULL); |
3bec585e SK |
8654 | vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0); |
8655 | ||
b128c09f | 8656 | l2arc_do_free_on_write(); |
34dc7c2f BB |
8657 | |
8658 | kmem_free(cb, sizeof (l2arc_write_callback_t)); | |
8659 | } | |
8660 | ||
b5256303 TC |
8661 | static int |
8662 | l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb) | |
8663 | { | |
8664 | int ret; | |
8665 | spa_t *spa = zio->io_spa; | |
8666 | arc_buf_hdr_t *hdr = cb->l2rcb_hdr; | |
8667 | blkptr_t *bp = zio->io_bp; | |
b5256303 TC |
8668 | uint8_t salt[ZIO_DATA_SALT_LEN]; |
8669 | uint8_t iv[ZIO_DATA_IV_LEN]; | |
8670 | uint8_t mac[ZIO_DATA_MAC_LEN]; | |
8671 | boolean_t no_crypt = B_FALSE; | |
8672 | ||
8673 | /* | |
8674 | * ZIL data is never be written to the L2ARC, so we don't need | |
8675 | * special handling for its unique MAC storage. | |
8676 | */ | |
8677 | ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); | |
8678 | ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); | |
440a3eb9 | 8679 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); |
b5256303 | 8680 | |
440a3eb9 TC |
8681 | /* |
8682 | * If the data was encrypted, decrypt it now. Note that | |
8683 | * we must check the bp here and not the hdr, since the | |
8684 | * hdr does not have its encryption parameters updated | |
8685 | * until arc_read_done(). | |
8686 | */ | |
8687 | if (BP_IS_ENCRYPTED(bp)) { | |
e111c802 MM |
8688 | abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, |
8689 | B_TRUE); | |
b5256303 TC |
8690 | |
8691 | zio_crypt_decode_params_bp(bp, salt, iv); | |
8692 | zio_crypt_decode_mac_bp(bp, mac); | |
8693 | ||
be9a5c35 TC |
8694 | ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb, |
8695 | BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp), | |
8696 | salt, iv, mac, HDR_GET_PSIZE(hdr), eabd, | |
8697 | hdr->b_l1hdr.b_pabd, &no_crypt); | |
b5256303 TC |
8698 | if (ret != 0) { |
8699 | arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); | |
b5256303 TC |
8700 | goto error; |
8701 | } | |
8702 | ||
b5256303 TC |
8703 | /* |
8704 | * If we actually performed decryption, replace b_pabd | |
8705 | * with the decrypted data. Otherwise we can just throw | |
8706 | * our decryption buffer away. | |
8707 | */ | |
8708 | if (!no_crypt) { | |
8709 | arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, | |
8710 | arc_hdr_size(hdr), hdr); | |
8711 | hdr->b_l1hdr.b_pabd = eabd; | |
8712 | zio->io_abd = eabd; | |
8713 | } else { | |
8714 | arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); | |
8715 | } | |
8716 | } | |
8717 | ||
8718 | /* | |
8719 | * If the L2ARC block was compressed, but ARC compression | |
8720 | * is disabled we decompress the data into a new buffer and | |
8721 | * replace the existing data. | |
8722 | */ | |
8723 | if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && | |
8724 | !HDR_COMPRESSION_ENABLED(hdr)) { | |
e111c802 MM |
8725 | abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, |
8726 | B_TRUE); | |
b5256303 TC |
8727 | void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); |
8728 | ||
8729 | ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), | |
8730 | hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), | |
10b3c7f5 | 8731 | HDR_GET_LSIZE(hdr), &hdr->b_complevel); |
b5256303 TC |
8732 | if (ret != 0) { |
8733 | abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); | |
8734 | arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); | |
8735 | goto error; | |
8736 | } | |
8737 | ||
8738 | abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); | |
8739 | arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, | |
8740 | arc_hdr_size(hdr), hdr); | |
8741 | hdr->b_l1hdr.b_pabd = cabd; | |
8742 | zio->io_abd = cabd; | |
8743 | zio->io_size = HDR_GET_LSIZE(hdr); | |
8744 | } | |
8745 | ||
8746 | return (0); | |
8747 | ||
8748 | error: | |
8749 | return (ret); | |
8750 | } | |
8751 | ||
8752 | ||
34dc7c2f BB |
8753 | /* |
8754 | * A read to a cache device completed. Validate buffer contents before | |
8755 | * handing over to the regular ARC routines. | |
8756 | */ | |
8757 | static void | |
8758 | l2arc_read_done(zio_t *zio) | |
8759 | { | |
b5256303 | 8760 | int tfm_error = 0; |
b405837a | 8761 | l2arc_read_callback_t *cb = zio->io_private; |
34dc7c2f | 8762 | arc_buf_hdr_t *hdr; |
34dc7c2f | 8763 | kmutex_t *hash_lock; |
b405837a TC |
8764 | boolean_t valid_cksum; |
8765 | boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && | |
8766 | (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); | |
b128c09f | 8767 | |
d3c2ae1c | 8768 | ASSERT3P(zio->io_vd, !=, NULL); |
b128c09f BB |
8769 | ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); |
8770 | ||
8771 | spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); | |
34dc7c2f | 8772 | |
d3c2ae1c GW |
8773 | ASSERT3P(cb, !=, NULL); |
8774 | hdr = cb->l2rcb_hdr; | |
8775 | ASSERT3P(hdr, !=, NULL); | |
34dc7c2f | 8776 | |
d3c2ae1c | 8777 | hash_lock = HDR_LOCK(hdr); |
34dc7c2f | 8778 | mutex_enter(hash_lock); |
428870ff | 8779 | ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); |
34dc7c2f | 8780 | |
82710e99 GDN |
8781 | /* |
8782 | * If the data was read into a temporary buffer, | |
8783 | * move it and free the buffer. | |
8784 | */ | |
8785 | if (cb->l2rcb_abd != NULL) { | |
8786 | ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); | |
8787 | if (zio->io_error == 0) { | |
b405837a TC |
8788 | if (using_rdata) { |
8789 | abd_copy(hdr->b_crypt_hdr.b_rabd, | |
8790 | cb->l2rcb_abd, arc_hdr_size(hdr)); | |
8791 | } else { | |
8792 | abd_copy(hdr->b_l1hdr.b_pabd, | |
8793 | cb->l2rcb_abd, arc_hdr_size(hdr)); | |
8794 | } | |
82710e99 GDN |
8795 | } |
8796 | ||
8797 | /* | |
8798 | * The following must be done regardless of whether | |
8799 | * there was an error: | |
8800 | * - free the temporary buffer | |
8801 | * - point zio to the real ARC buffer | |
8802 | * - set zio size accordingly | |
8803 | * These are required because the zio is either re-used |
8804 | * for an I/O of the block in the case of an error, |
8805 | * or passed to arc_read_done(), which |
8806 | * needs real data. |
8807 | */ | |
8808 | abd_free(cb->l2rcb_abd); | |
8809 | zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); | |
440a3eb9 | 8810 | |
b405837a | 8811 | if (using_rdata) { |
440a3eb9 TC |
8812 | ASSERT(HDR_HAS_RABD(hdr)); |
8813 | zio->io_abd = zio->io_orig_abd = | |
8814 | hdr->b_crypt_hdr.b_rabd; | |
8815 | } else { | |
8816 | ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); | |
8817 | zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; | |
8818 | } | |
82710e99 GDN |
8819 | } |
8820 | ||
a6255b7f | 8821 | ASSERT3P(zio->io_abd, !=, NULL); |
3a17a7a9 | 8822 | |
34dc7c2f BB |
8823 | /* |
8824 | * Check this survived the L2ARC journey. | |
8825 | */ | |
b5256303 TC |
8826 | ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd || |
8827 | (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd)); | |
d3c2ae1c GW |
8828 | zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */ |
8829 | zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */ | |
10b3c7f5 | 8830 | zio->io_prop.zp_complevel = hdr->b_complevel; |
d3c2ae1c GW |
8831 | |
8832 | valid_cksum = arc_cksum_is_equal(hdr, zio); | |
b5256303 TC |
8833 | |
8834 | /* | |
8835 | * b_rabd will always match the data as it exists on disk if it is | |
8836 | * being used. Therefore if we are reading into b_rabd we do not | |
8837 | * attempt to untransform the data. | |
8838 | */ | |
8839 | if (valid_cksum && !using_rdata) | |
8840 | tfm_error = l2arc_untransform(zio, cb); | |
8841 | ||
8842 | if (valid_cksum && tfm_error == 0 && zio->io_error == 0 && | |
8843 | !HDR_L2_EVICTED(hdr)) { | |
34dc7c2f | 8844 | mutex_exit(hash_lock); |
d3c2ae1c | 8845 | zio->io_private = hdr; |
34dc7c2f BB |
8846 | arc_read_done(zio); |
8847 | } else { | |
34dc7c2f BB |
8848 | /* |
8849 | * Buffer didn't survive caching. Increment stats and | |
8850 | * reissue to the original storage device. | |
8851 | */ | |
b128c09f | 8852 | if (zio->io_error != 0) { |
34dc7c2f | 8853 | ARCSTAT_BUMP(arcstat_l2_io_error); |
b128c09f | 8854 | } else { |
2e528b49 | 8855 | zio->io_error = SET_ERROR(EIO); |
b128c09f | 8856 | } |
b5256303 | 8857 | if (!valid_cksum || tfm_error != 0) |
34dc7c2f BB |
8858 | ARCSTAT_BUMP(arcstat_l2_cksum_bad); |
8859 | ||
34dc7c2f | 8860 | /* |
b128c09f BB |
8861 | * If there's no waiter, issue an async i/o to the primary |
8862 | * storage now. If there *is* a waiter, the caller must | |
8863 | * issue the i/o in a context where it's OK to block. | |
34dc7c2f | 8864 | */ |
d164b209 BB |
8865 | if (zio->io_waiter == NULL) { |
8866 | zio_t *pio = zio_unique_parent(zio); | |
b5256303 TC |
8867 | void *abd = (using_rdata) ? |
8868 | hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd; | |
d164b209 BB |
8869 | |
8870 | ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL); | |
8871 | ||
5ff2249f | 8872 | zio = zio_read(pio, zio->io_spa, zio->io_bp, |
b5256303 | 8873 | abd, zio->io_size, arc_read_done, |
d3c2ae1c | 8874 | hdr, zio->io_priority, cb->l2rcb_flags, |
5ff2249f AM |
8875 | &cb->l2rcb_zb); |
8876 | ||
8877 | /* | |
8878 | * Original ZIO will be freed, so we need to update | |
8879 | * ARC header with the new ZIO pointer to be used | |
8880 | * by zio_change_priority() in arc_read(). | |
8881 | */ | |
8882 | for (struct arc_callback *acb = hdr->b_l1hdr.b_acb; | |
8883 | acb != NULL; acb = acb->acb_next) | |
8884 | acb->acb_zio_head = zio; | |
8885 | ||
8886 | mutex_exit(hash_lock); | |
8887 | zio_nowait(zio); | |
8888 | } else { | |
8889 | mutex_exit(hash_lock); | |
d164b209 | 8890 | } |
34dc7c2f BB |
8891 | } |
8892 | ||
8893 | kmem_free(cb, sizeof (l2arc_read_callback_t)); | |
8894 | } | |
8895 | ||
8896 | /* | |
8897 | * This is the list priority from which the L2ARC will search for pages to | |
8898 | * cache. This is used within loops (0..3) to cycle through lists in the | |
8899 | * desired order. This order can have a significant effect on cache | |
8900 | * performance. | |
8901 | * | |
8902 | * Currently the metadata lists are hit first, MFU then MRU, followed by | |
8903 | * the data lists. This function returns a locked list, and also returns | |
8904 | * the lock pointer. | |
8905 | */ | |
ca0bf58d PS |
8906 | static multilist_sublist_t * |
8907 | l2arc_sublist_lock(int list_num) | |
34dc7c2f | 8908 | { |
ca0bf58d PS |
8909 | multilist_t *ml = NULL; |
8910 | unsigned int idx; | |
34dc7c2f | 8911 | |
4aafab91 | 8912 | ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES); |
34dc7c2f BB |
8913 | |
8914 | switch (list_num) { | |
8915 | case 0: | |
ffdf019c | 8916 | ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA]; |
34dc7c2f BB |
8917 | break; |
8918 | case 1: | |
ffdf019c | 8919 | ml = &arc_mru->arcs_list[ARC_BUFC_METADATA]; |
34dc7c2f BB |
8920 | break; |
8921 | case 2: | |
ffdf019c | 8922 | ml = &arc_mfu->arcs_list[ARC_BUFC_DATA]; |
34dc7c2f BB |
8923 | break; |
8924 | case 3: | |
ffdf019c | 8925 | ml = &arc_mru->arcs_list[ARC_BUFC_DATA]; |
34dc7c2f | 8926 | break; |
4aafab91 G |
8927 | default: |
8928 | return (NULL); | |
34dc7c2f BB |
8929 | } |
8930 | ||
ca0bf58d PS |
8931 | /* |
8932 | * Return a randomly-selected sublist. This is acceptable | |
8933 | * because the caller feeds only a little bit of data for each | |
8934 | * call (8MB). Subsequent calls will result in different | |
8935 | * sublists being selected. | |
8936 | */ | |
8937 | idx = multilist_get_random_index(ml); | |
8938 | return (multilist_sublist_lock(ml, idx)); | |
34dc7c2f BB |
8939 | } |
8940 | ||
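/*
 * Typical caller pattern (a sketch of how l2arc_write_buffers() below
 * consumes the locked sublist; eligibility checks elided):
 *
 *	for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) {
 *		multilist_sublist_t *mls = l2arc_sublist_lock(pass);
 *		...walk headers from the head (cold cache) or the tail...
 *		multilist_sublist_unlock(mls);
 *	}
 */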
77f6826b GA |
8941 | /* |
8942 | * Calculates the maximum overhead of L2ARC metadata log blocks for a given | |
657fd33b | 8943 | * L2ARC write size. l2arc_evict and l2arc_write_size need to include this |
77f6826b GA |
8944 | * overhead in processing to make sure there is enough headroom available |
8945 | * when writing buffers. | |
8946 | */ | |
8947 | static inline uint64_t | |
8948 | l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev) | |
8949 | { | |
657fd33b | 8950 | if (dev->l2ad_log_entries == 0) { |
77f6826b GA |
8951 | return (0); |
8952 | } else { | |
8953 | uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT; | |
8954 | ||
8955 | uint64_t log_blocks = (log_entries + | |
657fd33b GA |
8956 | dev->l2ad_log_entries - 1) / |
8957 | dev->l2ad_log_entries; | |
77f6826b GA |
8958 | |
8959 | return (vdev_psize_to_asize(dev->l2ad_vdev, | |
8960 | sizeof (l2arc_log_blk_phys_t)) * log_blocks); | |
8961 | } | |
8962 | } | |
8963 | ||
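/*
 * Worked example (a sketch only; the 8MB feed size and the 1022-entry
 * log blocks are typical values quoted elsewhere in this file, not
 * parameters of this function):
 *
 *	write_sz     = 8MB
 *	log_entries  = 8MB >> SPA_MINBLOCKSHIFT = 16384
 *	log_blocks   = (16384 + 1022 - 1) / 1022 = 17
 *	overhead     = 17 * vdev_psize_to_asize(vd,
 *	    sizeof (l2arc_log_blk_phys_t))
 *
 * That is, each 8MB of buffer writes reserves room for up to 17
 * aligned log blocks ahead of the write hand.
 */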
34dc7c2f BB |
8964 | /* |
8965 | * Evict buffers from the device write hand to the distance specified in | |
77f6826b | 8966 | * bytes. This distance may span populated buffers, it may span nothing. |
34dc7c2f BB |
8967 | * This is clearing a region on the L2ARC device ready for writing. |
8968 | * If the 'all' boolean is set, every buffer is evicted. | |
8969 | */ | |
8970 | static void | |
8971 | l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all) | |
8972 | { | |
8973 | list_t *buflist; | |
2a432414 | 8974 | arc_buf_hdr_t *hdr, *hdr_prev; |
34dc7c2f BB |
8975 | kmutex_t *hash_lock; |
8976 | uint64_t taddr; | |
77f6826b | 8977 | l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev; |
b7654bd7 GA |
8978 | vdev_t *vd = dev->l2ad_vdev; |
8979 | boolean_t rerun; | |
34dc7c2f | 8980 | |
b9541d6b | 8981 | buflist = &dev->l2ad_buflist; |
34dc7c2f | 8982 | |
77f6826b GA |
8983 | /* |
8984 | * We need to add in the worst case scenario of log block overhead. | |
8985 | */ | |
8986 | distance += l2arc_log_blk_overhead(distance, dev); | |
b7654bd7 GA |
8987 | if (vd->vdev_has_trim && l2arc_trim_ahead > 0) { |
8988 | /* | |
8989 | * Trim ahead of the write size by 64MB or by (l2arc_trim_ahead/100) | 
8990 | * times the write size, whichever is greater. | |
8991 | */ | |
8992 | distance += MAX(64 * 1024 * 1024, | |
8993 | (distance * l2arc_trim_ahead) / 100); | |
8994 | } | |
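	/*
	 * Illustrative arithmetic (assumed values, not defaults): with
	 * distance = 32MB after adding the log block overhead and
	 * l2arc_trim_ahead = 200, we add MAX(64MB, 32MB * 200 / 100) =
	 * 64MB, so in total 96MB is evicted and trimmed ahead of the
	 * write hand.
	 */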
77f6826b | 8995 | |
37c22948 GA |
8996 | top: |
8997 | rerun = B_FALSE; | |
8998 | if (dev->l2ad_hand >= (dev->l2ad_end - distance)) { | |
34dc7c2f | 8999 | /* |
dd4bc569 | 9000 | * When there is no space to accommodate upcoming writes, |
77f6826b GA |
9001 | * evict to the end. Then bump the write and evict hands |
9002 | * to the start and iterate. This iteration does not | |
9003 | * happen indefinitely as we make sure in | |
9004 | * l2arc_write_size() that when the write hand is reset, | |
9005 | * the write size does not exceed the end of the device. | |
34dc7c2f | 9006 | */ |
37c22948 | 9007 | rerun = B_TRUE; |
34dc7c2f BB |
9008 | taddr = dev->l2ad_end; |
9009 | } else { | |
9010 | taddr = dev->l2ad_hand + distance; | |
9011 | } | |
9012 | DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist, | |
9013 | uint64_t, taddr, boolean_t, all); | |
9014 | ||
b7654bd7 | 9015 | if (!all) { |
37c22948 | 9016 | /* |
b7654bd7 GA |
9017 | * This check has to be placed after deciding whether to |
9018 | * iterate (rerun). | |
37c22948 | 9019 | */ |
b7654bd7 GA |
9020 | if (dev->l2ad_first) { |
9021 | /* | |
9022 | * This is the first sweep through the device. There is | |
9023 | * nothing to evict. We have already trimmed the | 
9024 | * whole device. | |
9025 | */ | |
9026 | goto out; | |
9027 | } else { | |
9028 | /* | |
9029 | * Trim the space to be evicted. | |
9030 | */ | |
9031 | if (vd->vdev_has_trim && dev->l2ad_evict < taddr && | |
9032 | l2arc_trim_ahead > 0) { | |
9033 | /* | |
9034 | * We have to drop the spa_config lock because | |
9035 | * vdev_trim_range() will acquire it. | |
9036 | * l2ad_evict already accounts for the label | |
9037 | * size. To prevent vdev_trim_ranges() from | |
9038 | * adding it again, we subtract it from | |
9039 | * l2ad_evict. | |
9040 | */ | |
9041 | spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev); | |
9042 | vdev_trim_simple(vd, | |
9043 | dev->l2ad_evict - VDEV_LABEL_START_SIZE, | |
9044 | taddr - dev->l2ad_evict); | |
9045 | spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev, | |
9046 | RW_READER); | |
9047 | } | |
37c22948 | 9048 | |
b7654bd7 GA |
9049 | /* |
9050 | * When rebuilding L2ARC we retrieve the evict hand | |
9051 | * from the header of the device. Of note, l2arc_evict() | |
9052 | * does not actually delete buffers from the cache | |
9053 | * device, but trimming may do so depending on the | |
9054 | * hardware implementation. Thus keeping track of the | |
9055 | * evict hand is useful. | |
9056 | */ | |
9057 | dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); | |
9058 | } | |
9059 | } | |
77f6826b | 9060 | |
37c22948 | 9061 | retry: |
b9541d6b | 9062 | mutex_enter(&dev->l2ad_mtx); |
77f6826b GA |
9063 | /* |
9064 | * We have to account for evicted log blocks. Run vdev_space_update() | |
9065 | * on log blocks whose offset (in bytes) is before the evicted offset | |
9066 | * (in bytes) by searching in the list of pointers to log blocks | |
9067 | * present in the L2ARC device. | |
9068 | */ | |
9069 | for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; | |
9070 | lb_ptr_buf = lb_ptr_buf_prev) { | |
9071 | ||
9072 | lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); | |
9073 | ||
657fd33b GA |
9074 | /* L2BLK_GET_PSIZE returns aligned size for log blocks */ |
9075 | uint64_t asize = L2BLK_GET_PSIZE( | |
9076 | (lb_ptr_buf->lb_ptr)->lbp_prop); | |
9077 | ||
77f6826b GA |
9078 | /* |
9079 | * We don't worry about log blocks left behind (i.e., | 
657fd33b | 9080 | * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() |
77f6826b GA |
9081 | * will never write more than l2arc_evict() evicts. |
9082 | */ | |
9083 | if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { | |
9084 | break; | |
9085 | } else { | |
b7654bd7 | 9086 | vdev_space_update(vd, -asize, 0, 0); |
657fd33b GA |
9087 | ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); |
9088 | ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); | |
9089 | zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, | |
9090 | lb_ptr_buf); | |
9091 | zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); | |
77f6826b GA |
9092 | list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); |
9093 | kmem_free(lb_ptr_buf->lb_ptr, | |
9094 | sizeof (l2arc_log_blkptr_t)); | |
9095 | kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); | |
9096 | } | |
9097 | } | |
9098 | ||
2a432414 GW |
9099 | for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { |
9100 | hdr_prev = list_prev(buflist, hdr); | |
34dc7c2f | 9101 | |
ca6c7a94 | 9102 | ASSERT(!HDR_EMPTY(hdr)); |
2a432414 | 9103 | hash_lock = HDR_LOCK(hdr); |
ca0bf58d PS |
9104 | |
9105 | /* | |
9106 | * We cannot use mutex_enter or else we can deadlock | |
9107 | * with l2arc_write_buffers (due to swapping the order | |
9108 | * the hash lock and l2ad_mtx are taken). | |
9109 | */ | |
34dc7c2f BB |
9110 | if (!mutex_tryenter(hash_lock)) { |
9111 | /* | |
9112 | * Missed the hash lock. Retry. | |
9113 | */ | |
9114 | ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); | |
b9541d6b | 9115 | mutex_exit(&dev->l2ad_mtx); |
34dc7c2f BB |
9116 | mutex_enter(hash_lock); |
9117 | mutex_exit(hash_lock); | |
37c22948 | 9118 | goto retry; |
34dc7c2f BB |
9119 | } |
9120 | ||
f06f53fa AG |
9121 | /* |
9122 | * A header can't be on this list if it doesn't have L2 header. | |
9123 | */ | |
9124 | ASSERT(HDR_HAS_L2HDR(hdr)); | |
34dc7c2f | 9125 | |
f06f53fa AG |
9126 | /* Ensure this header has finished being written. */ |
9127 | ASSERT(!HDR_L2_WRITING(hdr)); | |
9128 | ASSERT(!HDR_L2_WRITE_HEAD(hdr)); | |
9129 | ||
77f6826b | 9130 | if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || |
b9541d6b | 9131 | hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { |
34dc7c2f BB |
9132 | /* |
9133 | * We've evicted to the target address, | |
9134 | * or the end of the device. | |
9135 | */ | |
9136 | mutex_exit(hash_lock); | |
9137 | break; | |
9138 | } | |
9139 | ||
b9541d6b | 9140 | if (!HDR_HAS_L1HDR(hdr)) { |
2a432414 | 9141 | ASSERT(!HDR_L2_READING(hdr)); |
34dc7c2f BB |
9142 | /* |
9143 | * This doesn't exist in the ARC. Destroy. | |
9144 | * arc_hdr_destroy() will call list_remove() | |
01850391 | 9145 | * and decrement arcstat_l2_lsize. |
34dc7c2f | 9146 | */ |
2a432414 GW |
9147 | arc_change_state(arc_anon, hdr, hash_lock); |
9148 | arc_hdr_destroy(hdr); | |
34dc7c2f | 9149 | } else { |
b9541d6b CW |
9150 | ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only); |
9151 | ARCSTAT_BUMP(arcstat_l2_evict_l1cached); | |
b128c09f BB |
9152 | /* |
9153 | * Invalidate issued or about to be issued | |
9154 | * reads, since we may be about to write | |
9155 | * over this location. | |
9156 | */ | |
2a432414 | 9157 | if (HDR_L2_READING(hdr)) { |
b128c09f | 9158 | ARCSTAT_BUMP(arcstat_l2_evict_reading); |
d3c2ae1c | 9159 | arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED); |
b128c09f BB |
9160 | } |
9161 | ||
d962d5da | 9162 | arc_hdr_l2hdr_destroy(hdr); |
34dc7c2f BB |
9163 | } |
9164 | mutex_exit(hash_lock); | |
9165 | } | |
b9541d6b | 9166 | mutex_exit(&dev->l2ad_mtx); |
37c22948 GA |
9167 | |
9168 | out: | |
77f6826b GA |
9169 | /* |
9170 | * We need to check whether we are evicting all buffers; otherwise we | 
9171 | * may iterate unnecessarily. | 
9172 | */ | |
9173 | if (!all && rerun) { | |
37c22948 GA |
9174 | /* |
9175 | * Bump device hand to the device start if it is approaching the | |
9176 | * end. l2arc_evict() has already evicted ahead for this case. | |
9177 | */ | |
9178 | dev->l2ad_hand = dev->l2ad_start; | |
77f6826b | 9179 | dev->l2ad_evict = dev->l2ad_start; |
37c22948 GA |
9180 | dev->l2ad_first = B_FALSE; |
9181 | goto top; | |
9182 | } | |
657fd33b | 9183 | |
2c210f68 GA |
9184 | if (!all) { |
9185 | /* | |
9186 | * In case of cache device removal (all) the following | |
9187 | * assertions may be violated without functional consequences | |
9188 | * as the device is about to be removed. | |
9189 | */ | |
9190 | ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end); | |
9191 | if (!dev->l2ad_first) | |
9192 | ASSERT3U(dev->l2ad_hand, <, dev->l2ad_evict); | |
9193 | } | |
34dc7c2f BB |
9194 | } |
9195 | ||
b5256303 TC |
9196 | /* |
9197 | * Handle any abd transforms that might be required for writing to the L2ARC. | |
9198 | * If successful, this function will always return an abd with the data | |
9199 | * transformed as it is on disk in a new abd of asize bytes. | |
9200 | */ | |
9201 | static int | |
9202 | l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize, | |
9203 | abd_t **abd_out) | |
9204 | { | |
9205 | int ret; | |
9206 | void *tmp = NULL; | |
9207 | abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd; | |
9208 | enum zio_compress compress = HDR_GET_COMPRESS(hdr); | |
9209 | uint64_t psize = HDR_GET_PSIZE(hdr); | |
9210 | uint64_t size = arc_hdr_size(hdr); | |
9211 | boolean_t ismd = HDR_ISTYPE_METADATA(hdr); | |
9212 | boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); | |
9213 | dsl_crypto_key_t *dck = NULL; | |
9214 | uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 }; | |
4807c0ba | 9215 | boolean_t no_crypt = B_FALSE; |
b5256303 TC |
9216 | |
9217 | ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && | |
9218 | !HDR_COMPRESSION_ENABLED(hdr)) || | |
9219 | HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize); | |
9220 | ASSERT3U(psize, <=, asize); | |
9221 | ||
9222 | /* | |
9223 | * If this data simply needs its own buffer, we allocate it | 
b7109a41 | 9224 | * and copy the data. This may be done to eliminate a dependency on a |
b5256303 TC |
9225 | * shared buffer or to reallocate the buffer to match asize. |
9226 | */ | |
4807c0ba | 9227 | if (HDR_HAS_RABD(hdr) && asize != psize) { |
10adee27 | 9228 | ASSERT3U(asize, >=, psize); |
4807c0ba | 9229 | to_write = abd_alloc_for_io(asize, ismd); |
10adee27 TC |
9230 | abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize); |
9231 | if (psize != asize) | |
9232 | abd_zero_off(to_write, psize, asize - psize); | |
4807c0ba TC |
9233 | goto out; |
9234 | } | |
9235 | ||
b5256303 TC |
9236 | if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) && |
9237 | !HDR_ENCRYPTED(hdr)) { | |
9238 | ASSERT3U(size, ==, psize); | |
9239 | to_write = abd_alloc_for_io(asize, ismd); | |
9240 | abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); | |
9241 | if (size != asize) | |
9242 | abd_zero_off(to_write, size, asize - size); | |
9243 | goto out; | |
9244 | } | |
9245 | ||
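	/*
	 * Compress into a borrowed linear buffer; if the result is not
	 * smaller than the source, the branch below falls back to storing
	 * the data uncompressed and clears the compression function in
	 * the header.
	 */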
9246 | if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) { | |
9247 | cabd = abd_alloc_for_io(asize, ismd); | |
9248 | tmp = abd_borrow_buf(cabd, asize); | |
9249 | ||
10b3c7f5 MN |
9250 | psize = zio_compress_data(compress, to_write, tmp, size, |
9251 | hdr->b_complevel); | |
9252 | ||
9253 | if (psize >= size) { | |
9254 | abd_return_buf(cabd, tmp, asize); | |
9255 | HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF); | |
9256 | to_write = cabd; | |
9257 | abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); | |
9258 | if (size != asize) | |
9259 | abd_zero_off(to_write, size, asize - size); | |
9260 | goto encrypt; | |
9261 | } | |
b5256303 TC |
9262 | ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr)); |
9263 | if (psize < asize) | |
9264 | bzero((char *)tmp + psize, asize - psize); | |
9265 | psize = HDR_GET_PSIZE(hdr); | |
9266 | abd_return_buf_copy(cabd, tmp, asize); | |
9267 | to_write = cabd; | |
9268 | } | |
9269 | ||
10b3c7f5 | 9270 | encrypt: |
b5256303 TC |
9271 | if (HDR_ENCRYPTED(hdr)) { |
9272 | eabd = abd_alloc_for_io(asize, ismd); | |
9273 | ||
9274 | /* | |
9275 | * If the dataset was disowned before the buffer | |
9276 | * made it to this point, the key to re-encrypt | |
9277 | * it won't be available. In this case we simply | |
9278 | * won't write the buffer to the L2ARC. | |
9279 | */ | |
9280 | ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj, | |
9281 | FTAG, &dck); | |
9282 | if (ret != 0) | |
9283 | goto error; | |
9284 | ||
10fa2545 | 9285 | ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key, |
be9a5c35 TC |
9286 | hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt, |
9287 | hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd, | |
9288 | &no_crypt); | |
b5256303 TC |
9289 | if (ret != 0) |
9290 | goto error; | |
9291 | ||
4807c0ba TC |
9292 | if (no_crypt) |
9293 | abd_copy(eabd, to_write, psize); | |
b5256303 TC |
9294 | |
9295 | if (psize != asize) | |
9296 | abd_zero_off(eabd, psize, asize - psize); | |
9297 | ||
9298 | /* assert that the MAC we got here matches the one we saved */ | |
9299 | ASSERT0(bcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN)); | |
9300 | spa_keystore_dsl_key_rele(spa, dck, FTAG); | |
9301 | ||
9302 | if (to_write == cabd) | |
9303 | abd_free(cabd); | |
9304 | ||
9305 | to_write = eabd; | |
9306 | } | |
9307 | ||
9308 | out: | |
9309 | ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd); | |
9310 | *abd_out = to_write; | |
9311 | return (0); | |
9312 | ||
9313 | error: | |
9314 | if (dck != NULL) | |
9315 | spa_keystore_dsl_key_rele(spa, dck, FTAG); | |
9316 | if (cabd != NULL) | |
9317 | abd_free(cabd); | |
9318 | if (eabd != NULL) | |
9319 | abd_free(eabd); | |
9320 | ||
9321 | *abd_out = NULL; | |
9322 | return (ret); | |
9323 | } | |
9324 | ||
77f6826b GA |
9325 | static void |
9326 | l2arc_blk_fetch_done(zio_t *zio) | |
9327 | { | |
9328 | l2arc_read_callback_t *cb; | |
9329 | ||
9330 | cb = zio->io_private; | |
9331 | if (cb->l2rcb_abd != NULL) | |
e2af2acc | 9332 | abd_free(cb->l2rcb_abd); |
77f6826b GA |
9333 | kmem_free(cb, sizeof (l2arc_read_callback_t)); |
9334 | } | |
9335 | ||
34dc7c2f BB |
9336 | /* |
9337 | * Find and write ARC buffers to the L2ARC device. | |
9338 | * | |
2a432414 | 9339 | * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid |
34dc7c2f | 9340 | * for reading until they have completed writing. |
3a17a7a9 SK |
9341 | * The per-pass search depth (headroom) is derived from the l2arc_headroom | 
9342 | * tunable, boosted by l2arc_headroom_boost when compressed ARC is enabled. | 
9343 | * | |
9344 | * Returns the number of bytes actually written (which may be smaller than | |
77f6826b GA |
9345 | * the delta by which the device hand has changed due to alignment and the |
9346 | * writing of log blocks). | |
34dc7c2f | 9347 | */ |
d164b209 | 9348 | static uint64_t |
d3c2ae1c | 9349 | l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) |
34dc7c2f | 9350 | { |
77f6826b GA |
9351 | arc_buf_hdr_t *hdr, *hdr_prev, *head; |
9352 | uint64_t write_asize, write_psize, write_lsize, headroom; | |
9353 | boolean_t full; | |
9354 | l2arc_write_callback_t *cb = NULL; | |
9355 | zio_t *pio, *wzio; | |
9356 | uint64_t guid = spa_load_guid(spa); | |
0ae184a6 | 9357 | l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; |
34dc7c2f | 9358 | |
d3c2ae1c | 9359 | ASSERT3P(dev->l2ad_vdev, !=, NULL); |
3a17a7a9 | 9360 | |
34dc7c2f | 9361 | pio = NULL; |
01850391 | 9362 | write_lsize = write_asize = write_psize = 0; |
34dc7c2f | 9363 | full = B_FALSE; |
b9541d6b | 9364 | head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); |
d3c2ae1c | 9365 | arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); |
3a17a7a9 | 9366 | |
34dc7c2f BB |
9367 | /* |
9368 | * Copy buffers for L2ARC writing. | |
9369 | */ | |
4a90d4d6 | 9370 | for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) { |
feb3a7ee | 9371 | /* |
4a90d4d6 | 9372 | * If pass == 1 or 3, we cache MRU metadata and data |
feb3a7ee GA |
9373 | * respectively. |
9374 | */ | |
9375 | if (l2arc_mfuonly) { | |
4a90d4d6 | 9376 | if (pass == 1 || pass == 3) |
feb3a7ee GA |
9377 | continue; |
9378 | } | |
9379 | ||
4a90d4d6 | 9380 | multilist_sublist_t *mls = l2arc_sublist_lock(pass); |
3a17a7a9 SK |
9381 | uint64_t passed_sz = 0; |
9382 | ||
4aafab91 G |
9383 | VERIFY3P(mls, !=, NULL); |
9384 | ||
b128c09f BB |
9385 | /* |
9386 | * L2ARC fast warmup. | |
9387 | * | |
9388 | * Until the ARC is warm and starts to evict, read from the | |
9389 | * head of the ARC lists rather than the tail. | |
9390 | */ | |
b128c09f | 9391 | if (arc_warm == B_FALSE) |
ca0bf58d | 9392 | hdr = multilist_sublist_head(mls); |
b128c09f | 9393 | else |
ca0bf58d | 9394 | hdr = multilist_sublist_tail(mls); |
b128c09f | 9395 | |
3a17a7a9 | 9396 | headroom = target_sz * l2arc_headroom; |
d3c2ae1c | 9397 | if (zfs_compressed_arc_enabled) |
3a17a7a9 SK |
9398 | headroom = (headroom * l2arc_headroom_boost) / 100; |
9399 | ||
2a432414 | 9400 | for (; hdr; hdr = hdr_prev) { |
3a17a7a9 | 9401 | kmutex_t *hash_lock; |
b5256303 | 9402 | abd_t *to_write = NULL; |
3a17a7a9 | 9403 | |
b128c09f | 9404 | if (arc_warm == B_FALSE) |
ca0bf58d | 9405 | hdr_prev = multilist_sublist_next(mls, hdr); |
b128c09f | 9406 | else |
ca0bf58d | 9407 | hdr_prev = multilist_sublist_prev(mls, hdr); |
34dc7c2f | 9408 | |
2a432414 | 9409 | hash_lock = HDR_LOCK(hdr); |
3a17a7a9 | 9410 | if (!mutex_tryenter(hash_lock)) { |
34dc7c2f BB |
9411 | /* |
9412 | * Skip this buffer rather than waiting. | |
9413 | */ | |
9414 | continue; | |
9415 | } | |
9416 | ||
d3c2ae1c | 9417 | passed_sz += HDR_GET_LSIZE(hdr); |
77f6826b | 9418 | if (l2arc_headroom != 0 && passed_sz > headroom) { |
34dc7c2f BB |
9419 | /* |
9420 | * Searched too far. | |
9421 | */ | |
9422 | mutex_exit(hash_lock); | |
9423 | break; | |
9424 | } | |
9425 | ||
2a432414 | 9426 | if (!l2arc_write_eligible(guid, hdr)) { |
34dc7c2f BB |
9427 | mutex_exit(hash_lock); |
9428 | continue; | |
9429 | } | |
9430 | ||
01850391 AG |
9431 | /* |
9432 | * We rely on the L1 portion of the header below, so | |
9433 | * it's invalid for this header to have been evicted out | |
9434 | * of the ghost cache, prior to being written out. The | |
9435 | * ARC_FLAG_L2_WRITING bit ensures this won't happen. | |
9436 | */ | |
9437 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
9438 | ||
9439 | ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); | |
01850391 | 9440 | ASSERT3U(arc_hdr_size(hdr), >, 0); |
b5256303 TC |
9441 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || |
9442 | HDR_HAS_RABD(hdr)); | |
9443 | uint64_t psize = HDR_GET_PSIZE(hdr); | |
01850391 AG |
9444 | uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, |
9445 | psize); | |
9446 | ||
9447 | if ((write_asize + asize) > target_sz) { | |
34dc7c2f BB |
9448 | full = B_TRUE; |
9449 | mutex_exit(hash_lock); | |
9450 | break; | |
9451 | } | |
9452 | ||
b5256303 TC |
9453 | /* |
9454 | * We rely on the L1 portion of the header below, so | |
9455 | * it's invalid for this header to have been evicted out | |
9456 | * of the ghost cache, prior to being written out. The | |
9457 | * ARC_FLAG_L2_WRITING bit ensures this won't happen. | |
9458 | */ | |
9459 | arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); | |
9460 | ASSERT(HDR_HAS_L1HDR(hdr)); | |
9461 | ||
9462 | ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); | |
9463 | ASSERT(hdr->b_l1hdr.b_pabd != NULL || | |
9464 | HDR_HAS_RABD(hdr)); | |
9465 | ASSERT3U(arc_hdr_size(hdr), >, 0); | |
9466 | ||
9467 | /* | |
9468 | * If this header has b_rabd, we can use this since it | |
9469 | * must always match the data exactly as it exists on | |
9777044f | 9470 | * disk. Otherwise, the L2ARC can normally use the |
b5256303 TC |
9471 | * hdr's data, but if we're sharing data between the |
9472 | * hdr and one of its bufs, L2ARC needs its own copy of | |
9473 | * the data so that the ZIO below can't race with the | |
9474 | * buf consumer. To ensure that this copy will be | |
9475 | * available for the lifetime of the ZIO and be cleaned | |
9476 | * up afterwards, we add it to the l2arc_free_on_write | |
9477 | * queue. If we need to apply any transforms to the | |
9478 | * data (compression, encryption) we will also need the | |
9479 | * extra buffer. | |
9480 | */ | |
9481 | if (HDR_HAS_RABD(hdr) && psize == asize) { | |
9482 | to_write = hdr->b_crypt_hdr.b_rabd; | |
9483 | } else if ((HDR_COMPRESSION_ENABLED(hdr) || | |
9484 | HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && | |
9485 | !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && | |
9486 | psize == asize) { | |
9487 | to_write = hdr->b_l1hdr.b_pabd; | |
9488 | } else { | |
9489 | int ret; | |
9490 | arc_buf_contents_t type = arc_buf_type(hdr); | |
9491 | ||
9492 | ret = l2arc_apply_transforms(spa, hdr, asize, | |
9493 | &to_write); | |
9494 | if (ret != 0) { | |
9495 | arc_hdr_clear_flags(hdr, | |
9496 | ARC_FLAG_L2_WRITING); | |
9497 | mutex_exit(hash_lock); | |
9498 | continue; | |
9499 | } | |
9500 | ||
9501 | l2arc_free_abd_on_write(to_write, asize, type); | |
9502 | } | |
9503 | ||
34dc7c2f BB |
9504 | if (pio == NULL) { |
9505 | /* | |
9506 | * Insert a dummy header on the buflist so | |
9507 | * l2arc_write_done() can find where the | |
9508 | * write buffers begin without searching. | |
9509 | */ | |
ca0bf58d | 9510 | mutex_enter(&dev->l2ad_mtx); |
b9541d6b | 9511 | list_insert_head(&dev->l2ad_buflist, head); |
ca0bf58d | 9512 | mutex_exit(&dev->l2ad_mtx); |
34dc7c2f | 9513 | |
96c080cb BB |
9514 | cb = kmem_alloc( |
9515 | sizeof (l2arc_write_callback_t), KM_SLEEP); | |
34dc7c2f BB |
9516 | cb->l2wcb_dev = dev; |
9517 | cb->l2wcb_head = head; | |
657fd33b GA |
9518 | /* |
9519 | * Create a list to save allocated abd buffers | |
9520 | * for l2arc_log_blk_commit(). | |
9521 | */ | |
77f6826b GA |
9522 | list_create(&cb->l2wcb_abd_list, |
9523 | sizeof (l2arc_lb_abd_buf_t), | |
9524 | offsetof(l2arc_lb_abd_buf_t, node)); | |
34dc7c2f BB |
9525 | pio = zio_root(spa, l2arc_write_done, cb, |
9526 | ZIO_FLAG_CANFAIL); | |
9527 | } | |
9528 | ||
b9541d6b | 9529 | hdr->b_l2hdr.b_dev = dev; |
b9541d6b | 9530 | hdr->b_l2hdr.b_hits = 0; |
3a17a7a9 | 9531 | |
d3c2ae1c | 9532 | hdr->b_l2hdr.b_daddr = dev->l2ad_hand; |
08532162 GA |
9533 | hdr->b_l2hdr.b_arcs_state = |
9534 | hdr->b_l1hdr.b_state->arcs_state; | |
b5256303 | 9535 | arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR); |
3a17a7a9 | 9536 | |
ca0bf58d | 9537 | mutex_enter(&dev->l2ad_mtx); |
b9541d6b | 9538 | list_insert_head(&dev->l2ad_buflist, hdr); |
ca0bf58d | 9539 | mutex_exit(&dev->l2ad_mtx); |
34dc7c2f | 9540 | |
424fd7c3 | 9541 | (void) zfs_refcount_add_many(&dev->l2ad_alloc, |
b5256303 | 9542 | arc_hdr_size(hdr), hdr); |
3a17a7a9 | 9543 | |
34dc7c2f | 9544 | wzio = zio_write_phys(pio, dev->l2ad_vdev, |
82710e99 | 9545 | hdr->b_l2hdr.b_daddr, asize, to_write, |
d3c2ae1c GW |
9546 | ZIO_CHECKSUM_OFF, NULL, hdr, |
9547 | ZIO_PRIORITY_ASYNC_WRITE, | |
34dc7c2f BB |
9548 | ZIO_FLAG_CANFAIL, B_FALSE); |
9549 | ||
01850391 | 9550 | write_lsize += HDR_GET_LSIZE(hdr); |
34dc7c2f BB |
9551 | DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, |
9552 | zio_t *, wzio); | |
d962d5da | 9553 | |
01850391 AG |
9554 | write_psize += psize; |
9555 | write_asize += asize; | |
d3c2ae1c | 9556 | dev->l2ad_hand += asize; |
08532162 | 9557 | l2arc_hdr_arcstats_increment(hdr); |
7558997d | 9558 | vdev_space_update(dev->l2ad_vdev, asize, 0, 0); |
d3c2ae1c GW |
9559 | |
9560 | mutex_exit(hash_lock); | |
9561 | ||
77f6826b GA |
9562 | /* |
9563 | * Append buf info to current log and commit if full. | |
9564 | * arcstat_l2_{size,asize} kstats are updated | |
9565 | * internally. | |
9566 | */ | |
9567 | if (l2arc_log_blk_insert(dev, hdr)) | |
9568 | l2arc_log_blk_commit(dev, pio, cb); | |
9569 | ||
9cdf7b1f | 9570 | zio_nowait(wzio); |
34dc7c2f | 9571 | } |
d3c2ae1c GW |
9572 | |
9573 | multilist_sublist_unlock(mls); | |
9574 | ||
9575 | if (full == B_TRUE) | |
9576 | break; | |
34dc7c2f | 9577 | } |
34dc7c2f | 9578 | |
d3c2ae1c GW |
9579 | /* No buffers selected for writing? */ |
9580 | if (pio == NULL) { | |
01850391 | 9581 | ASSERT0(write_lsize); |
d3c2ae1c GW |
9582 | ASSERT(!HDR_HAS_L1HDR(head)); |
9583 | kmem_cache_free(hdr_l2only_cache, head); | |
77f6826b GA |
9584 | |
9585 | /* | |
9586 | * Although we did not write any buffers l2ad_evict may | |
9587 | * have advanced. | |
9588 | */ | |
0ae184a6 GA |
9589 | if (dev->l2ad_evict != l2dhdr->dh_evict) |
9590 | l2arc_dev_hdr_update(dev); | |
77f6826b | 9591 | |
d3c2ae1c GW |
9592 | return (0); |
9593 | } | |
34dc7c2f | 9594 | |
657fd33b GA |
9595 | if (!dev->l2ad_first) |
9596 | ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); | |
9597 | ||
3a17a7a9 | 9598 | ASSERT3U(write_asize, <=, target_sz); |
34dc7c2f | 9599 | ARCSTAT_BUMP(arcstat_l2_writes_sent); |
01850391 | 9600 | ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); |
34dc7c2f | 9601 | |
d164b209 | 9602 | dev->l2ad_writing = B_TRUE; |
34dc7c2f | 9603 | (void) zio_wait(pio); |
d164b209 BB |
9604 | dev->l2ad_writing = B_FALSE; |
9605 | ||
2054f35e GA |
9606 | /* |
9607 | * Update the device header after the zio completes as | |
9608 | * l2arc_write_done() may have updated the memory holding the log block | |
9609 | * pointers in the device header. | |
9610 | */ | |
9611 | l2arc_dev_hdr_update(dev); | |
9612 | ||
3a17a7a9 SK |
9613 | return (write_asize); |
9614 | } | |
9615 | ||
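/*
 * Returns B_TRUE when the memory devoted to L2ARC headers is high enough
 * that feeding or rebuilding should back off: general reclaim is needed,
 * the header space exceeds 3/4 of arc_meta_limit, or it exceeds
 * l2arc_meta_percent of the ARC target size (arc_c once warm, arc_c_max
 * before that).
 */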
523e1295 AM |
9616 | static boolean_t |
9617 | l2arc_hdr_limit_reached(void) | |
9618 | { | |
c4c162c1 | 9619 | int64_t s = aggsum_upper_bound(&arc_sums.arcstat_l2_hdr_size); |
523e1295 AM |
9620 | |
9621 | return (arc_reclaim_needed() || (s > arc_meta_limit * 3 / 4) || | |
9622 | (s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100)); | |
9623 | } | |
9624 | ||
34dc7c2f BB |
9625 | /* |
9626 | * This thread feeds the L2ARC at regular intervals. This is the beating | |
9627 | * heart of the L2ARC. | |
9628 | */ | |
867959b5 | 9629 | /* ARGSUSED */ |
34dc7c2f | 9630 | static void |
c25b8f99 | 9631 | l2arc_feed_thread(void *unused) |
34dc7c2f BB |
9632 | { |
9633 | callb_cpr_t cpr; | |
9634 | l2arc_dev_t *dev; | |
9635 | spa_t *spa; | |
d164b209 | 9636 | uint64_t size, wrote; |
428870ff | 9637 | clock_t begin, next = ddi_get_lbolt(); |
40d06e3c | 9638 | fstrans_cookie_t cookie; |
34dc7c2f BB |
9639 | |
9640 | CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); | |
9641 | ||
9642 | mutex_enter(&l2arc_feed_thr_lock); | |
9643 | ||
40d06e3c | 9644 | cookie = spl_fstrans_mark(); |
34dc7c2f | 9645 | while (l2arc_thread_exit == 0) { |
34dc7c2f | 9646 | CALLB_CPR_SAFE_BEGIN(&cpr); |
ac6e5fb2 | 9647 | (void) cv_timedwait_idle(&l2arc_feed_thr_cv, |
5b63b3eb | 9648 | &l2arc_feed_thr_lock, next); |
34dc7c2f | 9649 | CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); |
428870ff | 9650 | next = ddi_get_lbolt() + hz; |
34dc7c2f BB |
9651 | |
9652 | /* | |
b128c09f | 9653 | * Quick check for L2ARC devices. |
34dc7c2f BB |
9654 | */ |
9655 | mutex_enter(&l2arc_dev_mtx); | |
9656 | if (l2arc_ndev == 0) { | |
9657 | mutex_exit(&l2arc_dev_mtx); | |
9658 | continue; | |
9659 | } | |
b128c09f | 9660 | mutex_exit(&l2arc_dev_mtx); |
428870ff | 9661 | begin = ddi_get_lbolt(); |
34dc7c2f BB |
9662 | |
9663 | /* | |
b128c09f BB |
9664 | * This selects the next l2arc device to write to, and in |
9665 | * doing so the next spa to feed from: dev->l2ad_spa. This | |
9666 | * will return NULL if there are now no l2arc devices or if | |
9667 | * they are all faulted. | |
9668 | * | |
9669 | * If a device is returned, its spa's config lock is also | |
9670 | * held to prevent device removal. l2arc_dev_get_next() | |
9671 | * will grab and release l2arc_dev_mtx. | |
34dc7c2f | 9672 | */ |
b128c09f | 9673 | if ((dev = l2arc_dev_get_next()) == NULL) |
34dc7c2f | 9674 | continue; |
b128c09f BB |
9675 | |
9676 | spa = dev->l2ad_spa; | |
d3c2ae1c | 9677 | ASSERT3P(spa, !=, NULL); |
34dc7c2f | 9678 | |
572e2857 BB |
9679 | /* |
9680 | * If the pool is read-only then force the feed thread to | |
9681 | * sleep a little longer. | |
9682 | */ | |
9683 | if (!spa_writeable(spa)) { | |
9684 | next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; | |
9685 | spa_config_exit(spa, SCL_L2ARC, dev); | |
9686 | continue; | |
9687 | } | |
9688 | ||
34dc7c2f | 9689 | /* |
b128c09f | 9690 | * Avoid contributing to memory pressure. |
34dc7c2f | 9691 | */ |
523e1295 | 9692 | if (l2arc_hdr_limit_reached()) { |
b128c09f BB |
9693 | ARCSTAT_BUMP(arcstat_l2_abort_lowmem); |
9694 | spa_config_exit(spa, SCL_L2ARC, dev); | |
34dc7c2f BB |
9695 | continue; |
9696 | } | |
b128c09f | 9697 | |
34dc7c2f BB |
9698 | ARCSTAT_BUMP(arcstat_l2_feeds); |
9699 | ||
37c22948 | 9700 | size = l2arc_write_size(dev); |
b128c09f | 9701 | |
34dc7c2f BB |
9702 | /* |
9703 | * Evict L2ARC buffers that will be overwritten. | |
9704 | */ | |
b128c09f | 9705 | l2arc_evict(dev, size, B_FALSE); |
34dc7c2f BB |
9706 | |
9707 | /* | |
9708 | * Write ARC buffers. | |
9709 | */ | |
d3c2ae1c | 9710 | wrote = l2arc_write_buffers(spa, dev, size); |
d164b209 BB |
9711 | |
9712 | /* | |
9713 | * Calculate interval between writes. | |
9714 | */ | |
9715 | next = l2arc_write_interval(begin, size, wrote); | |
b128c09f | 9716 | spa_config_exit(spa, SCL_L2ARC, dev); |
34dc7c2f | 9717 | } |
40d06e3c | 9718 | spl_fstrans_unmark(cookie); |
34dc7c2f BB |
9719 | |
9720 | l2arc_thread_exit = 0; | |
9721 | cv_broadcast(&l2arc_feed_thr_cv); | |
9722 | CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */ | |
9723 | thread_exit(); | |
9724 | } | |
9725 | ||
b128c09f BB |
9726 | boolean_t |
9727 | l2arc_vdev_present(vdev_t *vd) | |
9728 | { | |
77f6826b GA |
9729 | return (l2arc_vdev_get(vd) != NULL); |
9730 | } | |
9731 | ||
9732 | /* | |
9733 | * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if | |
9734 | * the vdev_t isn't an L2ARC device. | |
9735 | */ | |
b7654bd7 | 9736 | l2arc_dev_t * |
77f6826b GA |
9737 | l2arc_vdev_get(vdev_t *vd) |
9738 | { | |
9739 | l2arc_dev_t *dev; | |
b128c09f BB |
9740 | |
9741 | mutex_enter(&l2arc_dev_mtx); | |
9742 | for (dev = list_head(l2arc_dev_list); dev != NULL; | |
9743 | dev = list_next(l2arc_dev_list, dev)) { | |
9744 | if (dev->l2ad_vdev == vd) | |
9745 | break; | |
9746 | } | |
9747 | mutex_exit(&l2arc_dev_mtx); | |
9748 | ||
77f6826b | 9749 | return (dev); |
b128c09f BB |
9750 | } |
9751 | ||
34dc7c2f BB |
9752 | /* |
9753 | * Add a vdev for use by the L2ARC. By this point the spa has already | |
9754 | * validated the vdev and opened it. | |
9755 | */ | |
9756 | void | |
9babb374 | 9757 | l2arc_add_vdev(spa_t *spa, vdev_t *vd) |
34dc7c2f | 9758 | { |
77f6826b GA |
9759 | l2arc_dev_t *adddev; |
9760 | uint64_t l2dhdr_asize; | |
34dc7c2f | 9761 | |
b128c09f BB |
9762 | ASSERT(!l2arc_vdev_present(vd)); |
9763 | ||
34dc7c2f BB |
9764 | /* |
9765 | * Create a new l2arc device entry. | |
9766 | */ | |
77f6826b | 9767 | adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); |
34dc7c2f BB |
9768 | adddev->l2ad_spa = spa; |
9769 | adddev->l2ad_vdev = vd; | |
77f6826b GA |
9770 | /* leave extra size for an l2arc device header */ |
9771 | l2dhdr_asize = adddev->l2ad_dev_hdr_asize = | |
9772 | MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); | |
9773 | adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; | |
9babb374 | 9774 | adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); |
77f6826b | 9775 | ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); |
34dc7c2f | 9776 | adddev->l2ad_hand = adddev->l2ad_start; |
77f6826b | 9777 | adddev->l2ad_evict = adddev->l2ad_start; |
34dc7c2f | 9778 | adddev->l2ad_first = B_TRUE; |
d164b209 | 9779 | adddev->l2ad_writing = B_FALSE; |
b7654bd7 | 9780 | adddev->l2ad_trim_all = B_FALSE; |
98f72a53 | 9781 | list_link_init(&adddev->l2ad_node); |
77f6826b | 9782 | adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); |
34dc7c2f | 9783 | |
b9541d6b | 9784 | mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); |
34dc7c2f BB |
9785 | /* |
9786 | * This is a list of all ARC buffers that are still valid on the | |
9787 | * device. | |
9788 | */ | |
b9541d6b CW |
9789 | list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), |
9790 | offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); | |
34dc7c2f | 9791 | |
77f6826b GA |
9792 | /* |
9793 | * This is a list of pointers to log blocks that are still present | |
9794 | * on the device. | |
9795 | */ | |
9796 | list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), | |
9797 | offsetof(l2arc_lb_ptr_buf_t, node)); | |
9798 | ||
428870ff | 9799 | vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); |
424fd7c3 | 9800 | zfs_refcount_create(&adddev->l2ad_alloc); |
657fd33b GA |
9801 | zfs_refcount_create(&adddev->l2ad_lb_asize); |
9802 | zfs_refcount_create(&adddev->l2ad_lb_count); | |
34dc7c2f BB |
9803 | |
9804 | /* | |
9805 | * Add device to global list | |
9806 | */ | |
9807 | mutex_enter(&l2arc_dev_mtx); | |
9808 | list_insert_head(l2arc_dev_list, adddev); | |
9809 | atomic_inc_64(&l2arc_ndev); | |
9810 | mutex_exit(&l2arc_dev_mtx); | |
77f6826b GA |
9811 | |
9812 | /* | |
9813 | * Decide if vdev is eligible for L2ARC rebuild | |
9814 | */ | |
9815 | l2arc_rebuild_vdev(adddev->l2ad_vdev, B_FALSE); | |
9816 | } | |
9817 | ||
9818 | void | |
9819 | l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) | |
9820 | { | |
9821 | l2arc_dev_t *dev = NULL; | |
9822 | l2arc_dev_hdr_phys_t *l2dhdr; | |
9823 | uint64_t l2dhdr_asize; | |
9824 | spa_t *spa; | |
77f6826b GA |
9825 | |
9826 | dev = l2arc_vdev_get(vd); | |
9827 | ASSERT3P(dev, !=, NULL); | |
9828 | spa = dev->l2ad_spa; | |
9829 | l2dhdr = dev->l2ad_dev_hdr; | |
9830 | l2dhdr_asize = dev->l2ad_dev_hdr_asize; | |
9831 | ||
9832 | /* | |
9833 | * The L2ARC has to hold at least the payload of one log block for | |
9834 | * them to be restored (persistent L2ARC). The payload of a log block | |
9835 | * depends on the amount of its log entries. We always write log blocks | |
9836 | * with 1022 entries. How many of them are committed or restored depends | |
9837 | * on the size of the L2ARC device. Thus the maximum payload of | |
9838 | * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device | |
9839 | * is less than that, we reduce the amount of committed and restored | |
9840 | * log entries per block so as to enable persistence. | |
9841 | */ | |
9842 | if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) { | |
9843 | dev->l2ad_log_entries = 0; | |
9844 | } else { | |
9845 | dev->l2ad_log_entries = MIN((dev->l2ad_end - | |
9846 | dev->l2ad_start) >> SPA_MAXBLOCKSHIFT, | |
9847 | L2ARC_LOG_BLK_MAX_ENTRIES); | |
9848 | } | |
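	/*
	 * Example (a sketch assuming the 16MB SPA_MAXBLOCKSIZE implied by
	 * the 1022 * SPA_MAXBLOCKSIZE = 16GB figure above, so
	 * SPA_MAXBLOCKSHIFT = 24):
	 *
	 *	8GB region:  8GB >> 24 = 512 -> 512 entries per log block
	 *	64GB region: 64GB >> 24 = 4096 -> capped at 1022 entries
	 */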
9849 | ||
9850 | /* | |
9851 | * Read the device header. If an error is returned, do not rebuild L2ARC. | 
9852 | */ | |
a76e4e67 | 9853 | if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) { |
77f6826b GA |
9854 | /* |
9855 | * If we are onlining a cache device (vdev_reopen) that was | |
9856 | * still present (l2arc_vdev_present()) and rebuild is enabled, | |
9857 | * we should evict all ARC buffers and pointers to log blocks | |
9858 | * and reclaim their space before restoring its contents to | |
9859 | * L2ARC. | |
9860 | */ | |
9861 | if (reopen) { | |
9862 | if (!l2arc_rebuild_enabled) { | |
9863 | return; | |
9864 | } else { | |
9865 | l2arc_evict(dev, 0, B_TRUE); | |
9866 | /* start a new log block */ | |
9867 | dev->l2ad_log_ent_idx = 0; | |
9868 | dev->l2ad_log_blk_payload_asize = 0; | |
9869 | dev->l2ad_log_blk_payload_start = 0; | |
9870 | } | |
9871 | } | |
9872 | /* | |
9873 | * Just mark the device as pending for a rebuild. We won't | |
9874 | * be starting a rebuild in line here as it would block pool | |
9875 | * import. Instead spa_load_impl will hand that off to an | |
9876 | * async task which will call l2arc_spa_rebuild_start. | |
9877 | */ | |
9878 | dev->l2ad_rebuild = B_TRUE; | |
657fd33b | 9879 | } else if (spa_writeable(spa)) { |
77f6826b | 9880 | /* |
b7654bd7 GA |
9881 | * In this case TRIM the whole device if l2arc_trim_ahead > 0, |
9882 | * otherwise create a new header. We zero out the memory holding | |
9883 | * the header to reset dh_start_lbps. If we TRIM the whole | |
9884 | * device the new header will be written by | |
9885 | * vdev_trim_l2arc_thread() at the end of the TRIM to update the | |
9886 | * trim_state in the header too. When reading the header, if | |
9887 | * trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0 | |
9888 | * we opt to TRIM the whole device again. | |
77f6826b | 9889 | */ |
b7654bd7 GA |
9890 | if (l2arc_trim_ahead > 0) { |
9891 | dev->l2ad_trim_all = B_TRUE; | |
9892 | } else { | |
9893 | bzero(l2dhdr, l2dhdr_asize); | |
9894 | l2arc_dev_hdr_update(dev); | |
9895 | } | |
77f6826b | 9896 | } |
34dc7c2f BB |
9897 | } |
9898 | ||
9899 | /* | |
9900 | * Remove a vdev from the L2ARC. | |
9901 | */ | |
9902 | void | |
9903 | l2arc_remove_vdev(vdev_t *vd) | |
9904 | { | |
77f6826b | 9905 | l2arc_dev_t *remdev = NULL; |
34dc7c2f | 9906 | |
34dc7c2f BB |
9907 | /* |
9908 | * Find the device by vdev | |
9909 | */ | |
77f6826b | 9910 | remdev = l2arc_vdev_get(vd); |
d3c2ae1c | 9911 | ASSERT3P(remdev, !=, NULL); |
34dc7c2f | 9912 | |
77f6826b GA |
9913 | /* |
9914 | * Cancel any ongoing or scheduled rebuild. | |
9915 | */ | |
9916 | mutex_enter(&l2arc_rebuild_thr_lock); | |
9917 | if (remdev->l2ad_rebuild_began == B_TRUE) { | |
9918 | remdev->l2ad_rebuild_cancel = B_TRUE; | |
9919 | while (remdev->l2ad_rebuild == B_TRUE) | |
9920 | cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); | |
9921 | } | |
9922 | mutex_exit(&l2arc_rebuild_thr_lock); | |
9923 | ||
34dc7c2f BB |
9924 | /* |
9925 | * Remove device from global list | |
9926 | */ | |
77f6826b | 9927 | mutex_enter(&l2arc_dev_mtx); |
34dc7c2f BB |
9928 | list_remove(l2arc_dev_list, remdev); |
9929 | l2arc_dev_last = NULL; /* may have been invalidated */ | |
b128c09f BB |
9930 | atomic_dec_64(&l2arc_ndev); |
9931 | mutex_exit(&l2arc_dev_mtx); | |
34dc7c2f BB |
9932 | |
9933 | /* | |
9934 | * Clear all buflists and ARC references. L2ARC device flush. | |
9935 | */ | |
9936 | l2arc_evict(remdev, 0, B_TRUE); | |
b9541d6b | 9937 | list_destroy(&remdev->l2ad_buflist); |
77f6826b GA |
9938 | ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); |
9939 | list_destroy(&remdev->l2ad_lbptr_list); | |
b9541d6b | 9940 | mutex_destroy(&remdev->l2ad_mtx); |
424fd7c3 | 9941 | zfs_refcount_destroy(&remdev->l2ad_alloc); |
657fd33b GA |
9942 | zfs_refcount_destroy(&remdev->l2ad_lb_asize); |
9943 | zfs_refcount_destroy(&remdev->l2ad_lb_count); | |
77f6826b GA |
9944 | kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); |
9945 | vmem_free(remdev, sizeof (l2arc_dev_t)); | |
34dc7c2f BB |
9946 | } |
9947 | ||
9948 | void | |
b128c09f | 9949 | l2arc_init(void) |
34dc7c2f BB |
9950 | { |
9951 | l2arc_thread_exit = 0; | |
9952 | l2arc_ndev = 0; | |
34dc7c2f BB |
9953 | |
9954 | mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); | |
9955 | cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); | |
77f6826b GA |
9956 | mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); |
9957 | cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); | |
34dc7c2f | 9958 | mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); |
34dc7c2f BB |
9959 | mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); |
9960 | ||
9961 | l2arc_dev_list = &L2ARC_dev_list; | |
9962 | l2arc_free_on_write = &L2ARC_free_on_write; | |
9963 | list_create(l2arc_dev_list, sizeof (l2arc_dev_t), | |
9964 | offsetof(l2arc_dev_t, l2ad_node)); | |
9965 | list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), | |
9966 | offsetof(l2arc_data_free_t, l2df_list_node)); | |
34dc7c2f BB |
9967 | } |
9968 | ||
9969 | void | |
b128c09f | 9970 | l2arc_fini(void) |
34dc7c2f | 9971 | { |
34dc7c2f BB |
9972 | mutex_destroy(&l2arc_feed_thr_lock); |
9973 | cv_destroy(&l2arc_feed_thr_cv); | |
77f6826b GA |
9974 | mutex_destroy(&l2arc_rebuild_thr_lock); |
9975 | cv_destroy(&l2arc_rebuild_thr_cv); | |
34dc7c2f | 9976 | mutex_destroy(&l2arc_dev_mtx); |
34dc7c2f BB |
9977 | mutex_destroy(&l2arc_free_on_write_mtx); |
9978 | ||
9979 | list_destroy(l2arc_dev_list); | |
9980 | list_destroy(l2arc_free_on_write); | |
9981 | } | |
b128c09f BB |
9982 | |
9983 | void | |
9984 | l2arc_start(void) | |
9985 | { | |
da92d5cb | 9986 | if (!(spa_mode_global & SPA_MODE_WRITE)) |
b128c09f BB |
9987 | return; |
9988 | ||
9989 | (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, | |
1229323d | 9990 | TS_RUN, defclsyspri); |
b128c09f BB |
9991 | } |
9992 | ||
9993 | void | |
9994 | l2arc_stop(void) | |
9995 | { | |
da92d5cb | 9996 | if (!(spa_mode_global & SPA_MODE_WRITE)) |
b128c09f BB |
9997 | return; |
9998 | ||
9999 | mutex_enter(&l2arc_feed_thr_lock); | |
10000 | cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ | |
10001 | l2arc_thread_exit = 1; | |
10002 | while (l2arc_thread_exit != 0) | |
10003 | cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock); | |
10004 | mutex_exit(&l2arc_feed_thr_lock); | |
10005 | } | |
c28b2279 | 10006 | |
77f6826b GA |
10007 | /* |
10008 | * Punches out rebuild threads for the L2ARC devices in a spa. This should | |
10009 | * be called after pool import from the spa async thread, since starting | |
10010 | * these threads directly from spa_import() will make them part of the | |
10011 | * "zpool import" context and delay process exit (and thus pool import). | |
10012 | */ | |
10013 | void | |
10014 | l2arc_spa_rebuild_start(spa_t *spa) | |
10015 | { | |
10016 | ASSERT(MUTEX_HELD(&spa_namespace_lock)); | |
10017 | ||
10018 | /* | |
10019 | * Locate the spa's l2arc devices and kick off rebuild threads. | |
10020 | */ | |
10021 | for (int i = 0; i < spa->spa_l2cache.sav_count; i++) { | |
10022 | l2arc_dev_t *dev = | |
10023 | l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]); | |
10024 | if (dev == NULL) { | |
10025 | /* Don't attempt a rebuild if the vdev is UNAVAIL */ | |
10026 | continue; | |
10027 | } | |
10028 | mutex_enter(&l2arc_rebuild_thr_lock); | |
10029 | if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) { | |
10030 | dev->l2ad_rebuild_began = B_TRUE; | |
3eaf76a8 | 10031 | (void) thread_create(NULL, 0, l2arc_dev_rebuild_thread, |
77f6826b GA |
10032 | dev, 0, &p0, TS_RUN, minclsyspri); |
10033 | } | |
10034 | mutex_exit(&l2arc_rebuild_thr_lock); | |
10035 | } | |
10036 | } | |
10037 | ||
10038 | /* | |
10039 | * Main entry point for L2ARC rebuilding. | |
10040 | */ | |
10041 | static void | |
3eaf76a8 | 10042 | l2arc_dev_rebuild_thread(void *arg) |
77f6826b | 10043 | { |
3eaf76a8 RM |
10044 | l2arc_dev_t *dev = arg; |
10045 | ||
77f6826b GA |
10046 | VERIFY(!dev->l2ad_rebuild_cancel); |
10047 | VERIFY(dev->l2ad_rebuild); | |
10048 | (void) l2arc_rebuild(dev); | |
10049 | mutex_enter(&l2arc_rebuild_thr_lock); | |
10050 | dev->l2ad_rebuild_began = B_FALSE; | |
10051 | dev->l2ad_rebuild = B_FALSE; | |
10052 | mutex_exit(&l2arc_rebuild_thr_lock); | |
10053 | ||
10054 | thread_exit(); | |
10055 | } | |
10056 | ||
10057 | /* | |
10058 | * This function implements the actual L2ARC metadata rebuild. It: | |
10059 | * starts reading the log block chain and restores each block's contents | |
10060 | * to memory (reconstructing arc_buf_hdr_t's). | |
10061 | * | |
10062 | * Operation stops under any of the following conditions: | |
10063 | * | |
10064 | * 1) We reach the end of the log block chain. | |
10065 | * 2) We encounter *any* error condition (cksum errors, io errors) | |
10066 | */ | |
10067 | static int | |
10068 | l2arc_rebuild(l2arc_dev_t *dev) | |
10069 | { | |
10070 | vdev_t *vd = dev->l2ad_vdev; | |
10071 | spa_t *spa = vd->vdev_spa; | |
657fd33b | 10072 | int err = 0; |
77f6826b GA |
10073 | l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; |
10074 | l2arc_log_blk_phys_t *this_lb, *next_lb; | |
10075 | zio_t *this_io = NULL, *next_io = NULL; | |
10076 | l2arc_log_blkptr_t lbps[2]; | |
10077 | l2arc_lb_ptr_buf_t *lb_ptr_buf; | |
10078 | boolean_t lock_held; | |
10079 | ||
10080 | this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP); | |
10081 | next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP); | |
10082 | ||
10083 | /* | |
10084 | * We prevent device removal while issuing reads to the device, | |
10085 | * then during the rebuilding phases we drop this lock again so | |
10086 | * that a spa_unload or device remove can be initiated - this is | |
10087 | * safe, because the spa will signal us to stop before removing | |
10088 | * our device and wait for us to stop. | |
10089 | */ | |
10090 | spa_config_enter(spa, SCL_L2ARC, vd, RW_READER); | |
10091 | lock_held = B_TRUE; | |
10092 | ||
10093 | /* | |
10094 | * Retrieve the persistent L2ARC device state. | |
657fd33b | 10095 | * L2BLK_GET_PSIZE returns aligned size for log blocks. |
77f6826b GA |
10096 | */ |
10097 | dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); | |
10098 | dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + | |
10099 | L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), | |
10100 | dev->l2ad_start); | |
10101 | dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); | |
10102 | ||
b7654bd7 GA |
10103 | vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time; |
10104 | vd->vdev_trim_state = l2dhdr->dh_trim_state; | |
10105 | ||
77f6826b GA |
10106 | /* |
10107 | * In case the zfs module parameter l2arc_rebuild_enabled is false | |
10108 | * we do not start the rebuild process. | |
10109 | */ | |
10110 | if (!l2arc_rebuild_enabled) | |
10111 | goto out; | |
10112 | ||
10113 | /* Prepare the rebuild process */ | |
10114 | bcopy(l2dhdr->dh_start_lbps, lbps, sizeof (lbps)); | |
10115 | ||
10116 | /* Start the rebuild process */ | |
10117 | for (;;) { | |
10118 | if (!l2arc_log_blkptr_valid(dev, &lbps[0])) | |
10119 | break; | |
10120 | ||
10121 | if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], | |
10122 | this_lb, next_lb, this_io, &next_io)) != 0) | |
10123 | goto out; | |
10124 | ||
10125 | /* | |
10126 | * Our memory pressure valve. If the system is running low | |
10127 | * on memory, rather than swamping memory with new ARC buf | |
10128 | * hdrs, we opt not to rebuild the L2ARC. At this point, | |
10129 | * however, we have already set up our L2ARC dev to chain in | |
10130 | * new metadata log blocks, so the user may choose to offline/ | |
10131 | * online the L2ARC dev at a later time (or re-import the pool) | |
10132 | * to reconstruct it (when there's less memory pressure). | |
10133 | */ | |
523e1295 | 10134 | if (l2arc_hdr_limit_reached()) { |
77f6826b GA |
10135 | ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); |
10136 | cmn_err(CE_NOTE, "System running low on memory, " | |
10137 | "aborting L2ARC rebuild."); | |
10138 | err = SET_ERROR(ENOMEM); | |
10139 | goto out; | |
10140 | } | |
10141 | ||
10142 | spa_config_exit(spa, SCL_L2ARC, vd); | |
10143 | lock_held = B_FALSE; | |
10144 | ||
10145 | /* | |
10146 | * Now that we know that the next_lb checks out alright, we | |
10147 | * can start reconstruction from this log block. | |
657fd33b | 10148 | * L2BLK_GET_PSIZE returns aligned size for log blocks. |
77f6826b | 10149 | */ |
657fd33b | 10150 | uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); |
a76e4e67 | 10151 | l2arc_log_blk_restore(dev, this_lb, asize); |
77f6826b GA |
10152 | |
10153 | /* | |
10154 | * Log block restored; include its pointer in the list of | 
10155 | * pointers to log blocks present in the L2ARC device. | |
10156 | */ | |
10157 | lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); | |
10158 | lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), | |
10159 | KM_SLEEP); | |
10160 | bcopy(&lbps[0], lb_ptr_buf->lb_ptr, | |
10161 | sizeof (l2arc_log_blkptr_t)); | |
10162 | mutex_enter(&dev->l2ad_mtx); | |
10163 | list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); | |
657fd33b GA |
10164 | ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); |
10165 | ARCSTAT_BUMP(arcstat_l2_log_blk_count); | |
10166 | zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); | |
10167 | zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); | |
77f6826b | 10168 | mutex_exit(&dev->l2ad_mtx); |
657fd33b | 10169 | vdev_space_update(vd, asize, 0, 0); |
77f6826b GA |
10170 | |
10171 | /* | |
10172 | * Protection against loops of log blocks: | |
10173 | * | |
10174 | * l2ad_hand l2ad_evict | |
10175 | * V V | |
10176 | * l2ad_start |=======================================| l2ad_end | |
10177 | * -----|||----|||---|||----||| | |
10178 | * (3) (2) (1) (0) | |
10179 | * ---|||---|||----|||---||| | |
10180 | * (7) (6) (5) (4) | |
10181 | * | |
10182 | * In this situation the pointer of log block (4) passes | |
10183 | * l2arc_log_blkptr_valid() but the log block should not be | |
10184 | * restored as it is overwritten by the payload of log block | |
10185 | * (0). Only log blocks (0)-(3) should be restored. We check | |
657fd33b GA |
10186 | * whether l2ad_evict lies in between the payload starting |
10187 | * offset of the next log block (lbps[1].lbp_payload_start) | |
10188 | * and the payload starting offset of the present log block | |
10189 | * (lbps[0].lbp_payload_start). If true and this isn't the | |
10190 | * first pass, we are looping from the beginning and we should | |
10191 | * stop. | |
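 *
 * A sketch of the membership test assumed here (see
 * l2arc_range_check_overlap() elsewhere in this file for the
 * authoritative, wraparound-aware version): treating the device as a
 * ring, the check passes when l2ad_evict falls in the span from
 * lbps[1].lbp_payload_start up to lbps[0].lbp_payload_start, even when
 * that span wraps from l2ad_end back around to l2ad_start.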
77f6826b | 10192 | */ |
657fd33b GA |
10193 | if (l2arc_range_check_overlap(lbps[1].lbp_payload_start, |
10194 | lbps[0].lbp_payload_start, dev->l2ad_evict) && | |
10195 | !dev->l2ad_first) | |
77f6826b GA |
10196 | goto out; |
10197 | ||
1199c3e8 | 10198 | cond_resched(); |
77f6826b GA |
10199 | for (;;) { |
10200 | mutex_enter(&l2arc_rebuild_thr_lock); | |
10201 | if (dev->l2ad_rebuild_cancel) { | |
10202 | dev->l2ad_rebuild = B_FALSE; | |
10203 | cv_signal(&l2arc_rebuild_thr_cv); | |
10204 | mutex_exit(&l2arc_rebuild_thr_lock); | |
10205 | err = SET_ERROR(ECANCELED); | |
10206 | goto out; | |
10207 | } | |
10208 | mutex_exit(&l2arc_rebuild_thr_lock); | |
10209 | if (spa_config_tryenter(spa, SCL_L2ARC, vd, | |
10210 | RW_READER)) { | |
10211 | lock_held = B_TRUE; | |
10212 | break; | |
10213 | } | |
10214 | /* | |
10215 | * L2ARC config lock held by somebody as writer, | 
10216 | * possibly because they are trying to remove us. | 
10217 | * They'll likely want us to shut down, so after a little | 
10218 | * delay, we check l2ad_rebuild_cancel and retry | |
10219 | * the lock again. | |
10220 | */ | |
10221 | delay(1); | |
10222 | } | |
10223 | ||
10224 | /* | |
10225 | * Continue with the next log block. | |
10226 | */ | |
10227 | lbps[0] = lbps[1]; | |
10228 | lbps[1] = this_lb->lb_prev_lbp; | |
10229 | PTR_SWAP(this_lb, next_lb); | |
10230 | this_io = next_io; | |
10231 | next_io = NULL; | |
a76e4e67 | 10232 | } |
77f6826b GA |
10233 | |
10234 | if (this_io != NULL) | |
10235 | l2arc_log_blk_fetch_abort(this_io); | |
10236 | out: | |
10237 | if (next_io != NULL) | |
10238 | l2arc_log_blk_fetch_abort(next_io); | |
10239 | vmem_free(this_lb, sizeof (*this_lb)); | |
10240 | vmem_free(next_lb, sizeof (*next_lb)); | |
10241 | ||
10242 | if (!l2arc_rebuild_enabled) { | |
657fd33b GA |
10243 | spa_history_log_internal(spa, "L2ARC rebuild", NULL, |
10244 | "disabled"); | |
10245 | } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) { | |
77f6826b | 10246 | ARCSTAT_BUMP(arcstat_l2_rebuild_success); |
657fd33b GA |
10247 | spa_history_log_internal(spa, "L2ARC rebuild", NULL, |
10248 | "successful, restored %llu blocks", | |
10249 | (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); | |
10250 | } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) { | |
10251 | /* | |
10252 | * No error but also nothing restored, meaning the lbps array | |
10253 | * in the device header points to invalid/non-present log | |
10254 | * blocks. Reset the header. | |
10255 | */ | |
10256 | spa_history_log_internal(spa, "L2ARC rebuild", NULL, | |
10257 | "no valid log blocks"); | |
10258 | bzero(l2dhdr, dev->l2ad_dev_hdr_asize); | |
10259 | l2arc_dev_hdr_update(dev); | |
da60484d GA |
10260 | } else if (err == ECANCELED) { |
10261 | /* | |
10262 | * In case the rebuild was canceled do not log to spa history | |
10263 | * log as the pool may be in the process of being removed. | |
10264 | */ | |
10265 | zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks", | |
8e739b2c | 10266 | (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); |
77f6826b | 10267 | } else if (err != 0) { |
657fd33b GA |
10268 | spa_history_log_internal(spa, "L2ARC rebuild", NULL, |
10269 | "aborted, restored %llu blocks", | |
10270 | (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); | |
77f6826b GA |
10271 | } |
10272 | ||
10273 | if (lock_held) | |
10274 | spa_config_exit(spa, SCL_L2ARC, vd); | |
10275 | ||
10276 | return (err); | |
10277 | } | |
10278 | ||
10279 | /* | |
10280 | * Attempts to read the device header of the provided L2ARC device and writes | 
10281 | * it to dev->l2ad_dev_hdr. On success, this function returns 0, otherwise the | 
10282 | * appropriate error code is returned. | 
10283 | */ | |
static int
l2arc_dev_hdr_read(l2arc_dev_t *dev)
{
	int err;
	uint64_t guid;
	l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
	const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
	abd_t *abd;

	guid = spa_guid(dev->l2ad_vdev->vdev_spa);

	abd = abd_get_from_buf(l2dhdr, l2dhdr_asize);

	err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev,
	    VDEV_LABEL_START_SIZE, l2dhdr_asize, abd,
	    ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ,
	    ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
	    ZIO_FLAG_SPECULATIVE, B_FALSE));

	abd_free(abd);

	if (err != 0) {
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors);
		zfs_dbgmsg("L2ARC IO error (%d) while reading device header, "
		    "vdev guid: %llu", err,
		    (u_longlong_t)dev->l2ad_vdev->vdev_guid);
		return (err);
	}

	if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC))
		byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr));

	if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC ||
	    l2dhdr->dh_spa_guid != guid ||
	    l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid ||
	    l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION ||
	    l2dhdr->dh_log_entries != dev->l2ad_log_entries ||
	    l2dhdr->dh_end != dev->l2ad_end ||
	    !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end,
	    l2dhdr->dh_evict) ||
	    (l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE &&
	    l2arc_trim_ahead > 0)) {
		/*
		 * Attempt to rebuild a device containing no actual dev hdr
		 * or containing a header from some other pool or from another
		 * version of persistent L2ARC.
		 */
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported);
		return (SET_ERROR(ENOTSUP));
	}

	return (0);
}

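/*
 * Aside: the magic/byteswap dance above is the usual ZFS on-disk endianness
 * convention. A minimal sketch of the same technique, using a hypothetical
 * structure (`foo_phys_t', `FOO_MAGIC' and `f_magic' are illustrative names,
 * not identifiers from this file):
 *
 *	foo_phys_t foo;
 *	read_from_disk(&foo, sizeof (foo));
 *	if (foo.f_magic == BSWAP_64(FOO_MAGIC))
 *		byteswap_uint64_array(&foo, sizeof (foo));
 *	if (foo.f_magic != FOO_MAGIC)
 *		return (SET_ERROR(ENOTSUP));
 *
 * A header written on an opposite-endian system shows up with a byte-swapped
 * magic; swapping the whole array of uint64_t fields restores it to native
 * endianness before the field-by-field validation.
 */
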
/*
 * Reads L2ARC log blocks from storage and validates their contents.
 *
 * This function implements a simple fetcher to make sure that while
 * we're processing one buffer the L2ARC is already fetching the next
 * one in the chain.
 *
 * The arguments this_lbp and next_lbp point to the current and next log
 * block address in the block chain. Similarly, this_lb and next_lb hold
 * the l2arc_log_blk_phys_t's of the current and next L2ARC blk.
 *
 * The `this_io' and `next_io' arguments are used for block fetching.
 * When issuing the first blk IO during rebuild, you should pass NULL for
 * `this_io'. This function will then issue a sync IO to read the block and
 * also issue an async IO to fetch the next block in the block chain. The
 * fetched IO is returned in `next_io'. On subsequent calls to this
 * function, pass the value returned in `next_io' from the previous call
 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO.
 * Prior to the call, you should initialize your `next_io' pointer to be
 * NULL. If no fetch IO was issued, the pointer is left set at NULL.
 *
 * On success, this function returns 0, otherwise it returns an appropriate
 * error code. On error the fetching IO is aborted and cleared before
 * returning from this function. Therefore, if we return `success', the
 * caller can assume that we have taken care of cleanup of fetch IOs.
 */
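/*
 * A condensed sketch of the intended calling pattern (illustrative only;
 * l2arc_rebuild() is the real caller and carries additional state and error
 * handling). On the first iteration this_io is NULL, so the sync read of the
 * current block is issued inside this function together with the async fetch
 * of the next block; afterwards the prefetched zio is handed back in as
 * this_io:
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *
 *	for (;;) {
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
 *		    this_lb, next_lb, this_io, &next_io) != 0)
 *			break;
 *		... restore this_lb and advance lbps ...
 *		this_io = next_io;
 *		next_io = NULL;
 *	}
 */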
static int
l2arc_log_blk_read(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp,
    l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
    zio_t *this_io, zio_t **next_io)
{
	int err = 0;
	zio_cksum_t cksum;
	abd_t *abd = NULL;
	uint64_t asize;

	ASSERT(this_lbp != NULL && next_lbp != NULL);
	ASSERT(this_lb != NULL && next_lb != NULL);
	ASSERT(next_io != NULL && *next_io == NULL);
	ASSERT(l2arc_log_blkptr_valid(dev, this_lbp));

	/*
	 * Check to see if we have issued the IO for this log block in a
	 * previous run. If not, this is the first call, so issue it now.
	 */
	if (this_io == NULL) {
		this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp,
		    this_lb);
	}

	/*
	 * Peek to see if we can start issuing the next IO immediately.
	 */
	if (l2arc_log_blkptr_valid(dev, next_lbp)) {
		/*
		 * Start issuing IO for the next log block early - this
		 * should help keep the L2ARC device busy while we
		 * decompress and restore this log block.
		 */
		*next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp,
		    next_lb);
	}

	/* Wait for the IO to read this log block to complete */
	if ((err = zio_wait(this_io)) != 0) {
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors);
		zfs_dbgmsg("L2ARC IO error (%d) while reading log block, "
		    "offset: %llu, vdev guid: %llu", err,
		    (u_longlong_t)this_lbp->lbp_daddr,
		    (u_longlong_t)dev->l2ad_vdev->vdev_guid);
		goto cleanup;
	}

	/*
	 * Make sure the buffer checks out.
	 * L2BLK_GET_PSIZE returns aligned size for log blocks.
	 */
	asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop);
	fletcher_4_native(this_lb, asize, NULL, &cksum);
	if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) {
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors);
		zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, "
		    "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu",
		    (u_longlong_t)this_lbp->lbp_daddr,
		    (u_longlong_t)dev->l2ad_vdev->vdev_guid,
		    (u_longlong_t)dev->l2ad_hand,
		    (u_longlong_t)dev->l2ad_evict);
		err = SET_ERROR(ECKSUM);
		goto cleanup;
	}

	/* Now we can take our time decoding this buffer */
	switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) {
	case ZIO_COMPRESS_OFF:
		break;
	case ZIO_COMPRESS_LZ4:
		abd = abd_alloc_for_io(asize, B_TRUE);
		abd_copy_from_buf_off(abd, this_lb, 0, asize);
		if ((err = zio_decompress_data(
		    L2BLK_GET_COMPRESS((this_lbp)->lbp_prop),
		    abd, this_lb, asize, sizeof (*this_lb), NULL)) != 0) {
			err = SET_ERROR(EINVAL);
			goto cleanup;
		}
		break;
	default:
		err = SET_ERROR(EINVAL);
		goto cleanup;
	}
	if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC))
		byteswap_uint64_array(this_lb, sizeof (*this_lb));
	if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) {
		err = SET_ERROR(EINVAL);
		goto cleanup;
	}
cleanup:
	/* Abort an in-flight fetch I/O in case of error */
	if (err != 0 && *next_io != NULL) {
		l2arc_log_blk_fetch_abort(*next_io);
		*next_io = NULL;
	}
	if (abd != NULL)
		abd_free(abd);
	return (err);
}

/*
 * Restores the payload of a log block to ARC. This creates empty ARC hdr
 * entries which only contain an l2arc hdr, essentially restoring the
 * buffers to their L2ARC evicted state. This function also updates space
 * usage on the L2ARC vdev to make sure it tracks restored buffers.
 */
static void
l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb,
    uint64_t lb_asize)
{
	uint64_t size = 0, asize = 0;
	uint64_t log_entries = dev->l2ad_log_entries;

	/*
	 * Usually arc_adapt() is called only for data, not headers, but
	 * since we may allocate a significant amount of memory here, let
	 * ARC grow its arc_c.
	 */
	arc_adapt(log_entries * HDR_L2ONLY_SIZE, arc_l2c_only);

	for (int i = log_entries - 1; i >= 0; i--) {
		/*
		 * Restore goes in the reverse temporal direction to preserve
		 * correct temporal ordering of buffers in the l2ad_buflist.
		 * l2arc_hdr_restore also does a list_insert_tail instead of
		 * list_insert_head on the l2ad_buflist:
		 *
		 *		LIST	l2ad_buflist	LIST
		 *		HEAD  <------ (time) ------  TAIL
		 * direction	+-----+-----+-----+-----+-----+    direction
		 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild
		 * fill		+-----+-----+-----+-----+-----+
		 *		^				^
		 *		|				|
		 *		|				|
		 *	l2arc_feed_thread		l2arc_rebuild
		 *	will place new bufs here	restores bufs here
		 *
		 * During l2arc_rebuild() the device is not used by
		 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true.
		 */
		size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop);
		asize += vdev_psize_to_asize(dev->l2ad_vdev,
		    L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop));
		l2arc_hdr_restore(&lb->lb_entries[i], dev);
	}

	/*
	 * Record rebuild stats:
	 *	size		Logical size of restored buffers in the L2ARC
	 *	asize		Aligned size of restored buffers in the L2ARC
	 */
	ARCSTAT_INCR(arcstat_l2_rebuild_size, size);
	ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize);
	ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries);
	ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize);
	ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize);
	ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks);
}

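/*
 * For intuition on arcstat_l2_data_to_meta_ratio above, a hedged,
 * illustrative calculation (the concrete numbers are made up): if a log
 * block occupies lb_asize = 16 KiB on the cache device and describes
 * entries whose payload adds up to asize = 16 MiB, then the sample fed
 * into the running average is 16 MiB / 16 KiB = 1024, i.e. roughly 0.1%
 * of the device space covered by that block is spent on log metadata.
 */
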
/*
 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put
 * into a state indicating that it has been evicted to L2ARC.
 */
static void
l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev)
{
	arc_buf_hdr_t *hdr, *exists;
	kmutex_t *hash_lock;
	arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop);
	uint64_t asize;

	/*
	 * Do all the allocation before grabbing any locks; this lets us
	 * sleep if memory is full and we don't have to deal with failed
	 * allocations.
	 */
	hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type,
	    dev, le->le_dva, le->le_daddr,
	    L2BLK_GET_PSIZE((le)->le_prop), le->le_birth,
	    L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel,
	    L2BLK_GET_PROTECTED((le)->le_prop),
	    L2BLK_GET_PREFETCH((le)->le_prop),
	    L2BLK_GET_STATE((le)->le_prop));
	asize = vdev_psize_to_asize(dev->l2ad_vdev,
	    L2BLK_GET_PSIZE((le)->le_prop));

	/*
	 * vdev_space_update() has to be called before arc_hdr_destroy() to
	 * avoid underflow since the latter also calls vdev_space_update().
	 */
	l2arc_hdr_arcstats_increment(hdr);
	vdev_space_update(dev->l2ad_vdev, asize, 0, 0);

	mutex_enter(&dev->l2ad_mtx);
	list_insert_tail(&dev->l2ad_buflist, hdr);
	(void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
	mutex_exit(&dev->l2ad_mtx);

	exists = buf_hash_insert(hdr, &hash_lock);
	if (exists) {
		/* Buffer was already cached, no need to restore it. */
		arc_hdr_destroy(hdr);
		/*
		 * If the buffer is already cached, check whether it has
		 * L2ARC metadata. If not, add the metadata and update the
		 * flag. This is important in case of onlining a cache
		 * device, since we previously evicted all L2ARC metadata
		 * from ARC.
		 */
		if (!HDR_HAS_L2HDR(exists)) {
			arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR);
			exists->b_l2hdr.b_dev = dev;
			exists->b_l2hdr.b_daddr = le->le_daddr;
			exists->b_l2hdr.b_arcs_state =
			    L2BLK_GET_STATE((le)->le_prop);
			mutex_enter(&dev->l2ad_mtx);
			list_insert_tail(&dev->l2ad_buflist, exists);
			(void) zfs_refcount_add_many(&dev->l2ad_alloc,
			    arc_hdr_size(exists), exists);
			mutex_exit(&dev->l2ad_mtx);
			l2arc_hdr_arcstats_increment(exists);
			vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
		}
		ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached);
	}

	mutex_exit(hash_lock);
}

/*
 * Starts an asynchronous read IO to read a log block. This is used in log
 * block reconstruction to start reading the next block before we are done
 * decoding and reconstructing the current block, to keep the l2arc device
 * nice and hot with read IO to process.
 * The returned zio will contain a newly allocated memory buffer for the IO
 * data which should then be freed by the caller once the zio is no longer
 * needed (i.e. due to it having completed). If you wish to abort this
 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes
 * care of disposing of the allocated buffers correctly.
 */
static zio_t *
l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp,
    l2arc_log_blk_phys_t *lb)
{
	uint32_t asize;
	zio_t *pio;
	l2arc_read_callback_t *cb;

	/* L2BLK_GET_PSIZE returns aligned size for log blocks */
	asize = L2BLK_GET_PSIZE((lbp)->lbp_prop);
	ASSERT(asize <= sizeof (l2arc_log_blk_phys_t));

	cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP);
	cb->l2rcb_abd = abd_get_from_buf(lb, asize);
	pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb,
	    ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE |
	    ZIO_FLAG_DONT_RETRY);
	(void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize,
	    cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL,
	    ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE));

	return (pio);
}

/*
 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data
 * buffers allocated for it.
 */
static void
l2arc_log_blk_fetch_abort(zio_t *zio)
{
	(void) zio_wait(zio);
}

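/*
 * Usage sketch for the fetch/abort pair (illustrative only; error handling
 * elided). The fetch returns the root zio; the caller either consumes it
 * with zio_wait() on the success path, or hands it to
 * l2arc_log_blk_fetch_abort(), which likewise just waits for the root zio
 * so that l2arc_blk_fetch_done() runs and releases the callback buffers:
 *
 *	zio_t *io = l2arc_log_blk_fetch(vd, lbp, lb);
 *	if (some_error_happened)
 *		l2arc_log_blk_fetch_abort(io);
 *	else
 *		err = zio_wait(io);
 *
 * `some_error_happened' is a placeholder condition, not a variable in
 * this file.
 */
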
/*
 * Updates the device header on an l2arc device via a synchronous write zio.
 */
void
l2arc_dev_hdr_update(l2arc_dev_t *dev)
{
	l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
	const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
	abd_t *abd;
	int err;

	VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER));

	l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC;
	l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION;
	l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa);
	l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid;
	l2dhdr->dh_log_entries = dev->l2ad_log_entries;
	l2dhdr->dh_evict = dev->l2ad_evict;
	l2dhdr->dh_start = dev->l2ad_start;
	l2dhdr->dh_end = dev->l2ad_end;
	l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize);
	l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count);
	l2dhdr->dh_flags = 0;
	l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time;
	l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state;
	if (dev->l2ad_first)
		l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST;

	abd = abd_get_from_buf(l2dhdr, l2dhdr_asize);

	err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev,
	    VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL,
	    NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE));

	abd_free(abd);

	if (err != 0) {
		zfs_dbgmsg("L2ARC IO error (%d) while writing device header, "
		    "vdev guid: %llu", err,
		    (u_longlong_t)dev->l2ad_vdev->vdev_guid);
	}
}

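/*
 * Layout note (informational, a rough sketch rather than a definitive
 * statement of the on-disk format): both l2arc_dev_hdr_read() and
 * l2arc_dev_hdr_update() address the device header at offset
 * VDEV_LABEL_START_SIZE, i.e. just past the front vdev labels and boot
 * block, while log blocks and their payload live in the rotary region
 * [l2ad_start, l2ad_end], which begins past the header:
 *
 *	0          VDEV_LABEL_START_SIZE         l2ad_start        l2ad_end
 *	| labels + boot | device header (l2ad_dev_hdr_asize) | payload ... |
 *
 * The widths above are schematic, not to scale.
 */
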
/*
 * Commits a log block to the L2ARC device. This routine is invoked from
 * l2arc_write_buffers when the log block fills up.
 * This function allocates some memory to temporarily hold the serialized
 * buffer to be written. This is then released in l2arc_write_done.
 */
static void
l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb)
{
	l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk;
	l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
	uint64_t psize, asize;
	zio_t *wzio;
	l2arc_lb_abd_buf_t *abd_buf;
	uint8_t *tmpbuf;
	l2arc_lb_ptr_buf_t *lb_ptr_buf;

	VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries);

	tmpbuf = zio_buf_alloc(sizeof (*lb));
	abd_buf = zio_buf_alloc(sizeof (*abd_buf));
	abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb));
	lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP);
	lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP);

	/* link the buffer into the block chain */
	lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1];
	lb->lb_magic = L2ARC_LOG_BLK_MAGIC;

	/*
	 * l2arc_log_blk_commit() may be called multiple times during a single
	 * l2arc_write_buffers() call. Save the allocated abd buffers in a list
	 * so we can free them in l2arc_write_done() later on.
	 */
	list_insert_tail(&cb->l2wcb_abd_list, abd_buf);

	/* try to compress the buffer */
	psize = zio_compress_data(ZIO_COMPRESS_LZ4,
	    abd_buf->abd, tmpbuf, sizeof (*lb), 0);

	/* a log block is never entirely zero */
	ASSERT(psize != 0);
	asize = vdev_psize_to_asize(dev->l2ad_vdev, psize);
	ASSERT(asize <= sizeof (*lb));

	/*
	 * Update the start log block pointer in the device header to point
	 * to the log block we're about to write.
	 */
	l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0];
	l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand;
	l2dhdr->dh_start_lbps[0].lbp_payload_asize =
	    dev->l2ad_log_blk_payload_asize;
	l2dhdr->dh_start_lbps[0].lbp_payload_start =
	    dev->l2ad_log_blk_payload_start;
	L2BLK_SET_LSIZE(
	    (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb));
	L2BLK_SET_PSIZE(
	    (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize);
	L2BLK_SET_CHECKSUM(
	    (&l2dhdr->dh_start_lbps[0])->lbp_prop,
	    ZIO_CHECKSUM_FLETCHER_4);
	if (asize < sizeof (*lb)) {
		/* compression succeeded */
		bzero(tmpbuf + psize, asize - psize);
		L2BLK_SET_COMPRESS(
		    (&l2dhdr->dh_start_lbps[0])->lbp_prop,
		    ZIO_COMPRESS_LZ4);
	} else {
		/* compression failed */
		bcopy(lb, tmpbuf, sizeof (*lb));
		L2BLK_SET_COMPRESS(
		    (&l2dhdr->dh_start_lbps[0])->lbp_prop,
		    ZIO_COMPRESS_OFF);
	}

	/* checksum what we're about to write */
	fletcher_4_native(tmpbuf, asize, NULL,
	    &l2dhdr->dh_start_lbps[0].lbp_cksum);

	abd_free(abd_buf->abd);

	/* perform the write itself */
	abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb));
	abd_take_ownership_of_buf(abd_buf->abd, B_TRUE);
	wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand,
	    asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL,
	    ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE);
	DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio);
	(void) zio_nowait(wzio);

	dev->l2ad_hand += asize;
	/*
	 * Include the committed log block's pointer in the list of pointers
	 * to log blocks present in the L2ARC device.
	 */
	bcopy(&l2dhdr->dh_start_lbps[0], lb_ptr_buf->lb_ptr,
	    sizeof (l2arc_log_blkptr_t));
	mutex_enter(&dev->l2ad_mtx);
	list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf);
	ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize);
	ARCSTAT_BUMP(arcstat_l2_log_blk_count);
	zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf);
	zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf);
	mutex_exit(&dev->l2ad_mtx);
	vdev_space_update(dev->l2ad_vdev, asize, 0, 0);

	/* bump the kstats */
	ARCSTAT_INCR(arcstat_l2_write_bytes, asize);
	ARCSTAT_BUMP(arcstat_l2_log_blk_writes);
	ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize);
	ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio,
	    dev->l2ad_log_blk_payload_asize / asize);

	/* start a new log block */
	dev->l2ad_log_ent_idx = 0;
	dev->l2ad_log_blk_payload_asize = 0;
	dev->l2ad_log_blk_payload_start = 0;
}

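/*
 * The compress-or-copy decision above in a nutshell (a paraphrase of the
 * code for clarity; `src', `dst' and `max' are illustrative names):
 *
 *	psize = zio_compress_data(ZIO_COMPRESS_LZ4, src, dst, max, 0);
 *	asize = vdev_psize_to_asize(vd, psize);
 *	if (asize < max)
 *		mark block LZ4 and zero-pad dst from psize up to asize;
 *	else
 *		mark block uncompressed and copy the raw src into dst;
 *
 * Padding the tail of the compressed image matters because the block is
 * checksummed here, and verified in l2arc_log_blk_read(), over the full
 * aligned size (asize), not just the compressed length.
 */
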
/*
 * Validates an L2ARC log block address to make sure that it can be read
 * from the provided L2ARC device.
 */
boolean_t
l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp)
{
	/* L2BLK_GET_PSIZE returns aligned size for log blocks */
	uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop);
	uint64_t end = lbp->lbp_daddr + asize - 1;
	uint64_t start = lbp->lbp_payload_start;
	boolean_t evicted = B_FALSE;

	/*
	 * A log block is valid if all of the following conditions are true:
	 * - it fits entirely (including its payload) between l2ad_start and
	 *   l2ad_end
	 * - it has a valid size
	 * - neither the log block itself nor part of its payload was evicted
	 *   by l2arc_evict():
	 *
	 *		l2ad_hand	   l2ad_evict
	 *		|		   |	lbp_daddr
	 *		|	start	   |	|  end
	 *		|	|	   |	|  |
	 *		V	V	   V	V  V
	 *   l2ad_start ============================================ l2ad_end
	 *			--------------------------||||
	 *					^	    ^
	 *					|	    log block
	 *					payload
	 */

	evicted =
	    l2arc_range_check_overlap(start, end, dev->l2ad_hand) ||
	    l2arc_range_check_overlap(start, end, dev->l2ad_evict) ||
	    l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) ||
	    l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end);

	return (start >= dev->l2ad_start && end <= dev->l2ad_end &&
	    asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) &&
	    (!evicted || dev->l2ad_first));
}

/*
 * Inserts ARC buffer header `hdr' into the current L2ARC log block on
 * the device. The buffer being inserted must be present in L2ARC.
 * Returns B_TRUE if the L2ARC log block is full and needs to be committed
 * to L2ARC, or B_FALSE if it still has room for more ARC buffers.
 */
static boolean_t
l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr)
{
	l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk;
	l2arc_log_ent_phys_t *le;

	if (dev->l2ad_log_entries == 0)
		return (B_FALSE);

	int index = dev->l2ad_log_ent_idx++;

	ASSERT3S(index, <, dev->l2ad_log_entries);
	ASSERT(HDR_HAS_L2HDR(hdr));

	le = &lb->lb_entries[index];
	bzero(le, sizeof (*le));
	le->le_dva = hdr->b_dva;
	le->le_birth = hdr->b_birth;
	le->le_daddr = hdr->b_l2hdr.b_daddr;
	if (index == 0)
		dev->l2ad_log_blk_payload_start = le->le_daddr;
	L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr));
	L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr));
	L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr));
	le->le_complevel = hdr->b_complevel;
	L2BLK_SET_TYPE((le)->le_prop, hdr->b_type);
	L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr)));
	L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr)));
	L2BLK_SET_STATE((le)->le_prop, hdr->b_l1hdr.b_state->arcs_state);

	dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev,
	    HDR_GET_PSIZE(hdr));

	return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries);
}

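/*
 * Expected caller pattern (a sketch; the actual logic lives in
 * l2arc_write_buffers(), which also threads the write callback through):
 *
 *	if (l2arc_log_blk_insert(dev, hdr))
 *		l2arc_log_blk_commit(dev, pio, cb);
 *
 * i.e. every buffer written to the device is appended to the open log
 * block, and the block is committed as soon as the insert that fills the
 * last slot returns B_TRUE.
 */
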
/*
 * Checks whether a given L2ARC device address sits in a time-sequential
 * range. The trick here is that the L2ARC is a rotary buffer, so we can't
 * just do a range comparison, we need to handle the situation in which the
 * range wraps around the end of the L2ARC device. Arguments:
 *	bottom -- Lower end of the range to check (written to earlier).
 *	top    -- Upper end of the range to check (written to later).
 *	check  -- The address for which we want to determine if it sits in
 *		  between the top and bottom.
 *
 * The 3-way conditional below represents the following cases:
 *
 *	bottom < top : Sequentially ordered case:
 *	  <check>--------+-------------------+
 *	                 | (overlap here?)   |
 *	 L2ARC dev       V                   V
 *	 |---------------<bottom>============<top>--------------|
 *
 *	bottom > top: Looped-around case:
 *	                      <check>--------+------------------+
 *	                                     | (overlap here?)  |
 *	 L2ARC dev                           V                  V
 *	 |===============<top>---------------<bottom>===========|
 *	 ^                                                      ^
 *	 |                        (or here?)                    |
 *	 +----------------------------------+---------<check>
 *
 *	top == bottom : Just a single address comparison.
 */
boolean_t
l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check)
{
	if (bottom < top)
		return (bottom <= check && check <= top);
	else if (bottom > top)
		return (check <= top || bottom <= check);
	else
		return (check == top);
}

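/*
 * Worked examples (illustrative device offsets, evaluated against the
 * conditional above):
 *
 *	l2arc_range_check_overlap(100, 200, 150) == B_TRUE
 *		ordered case: 100 <= 150 && 150 <= 200
 *	l2arc_range_check_overlap(100, 200, 250) == B_FALSE
 *		ordered case: 250 lies past top
 *	l2arc_range_check_overlap(200, 100, 250) == B_TRUE
 *		wrapped case: the range covers [200, end] plus [start, 100],
 *		and bottom (200) <= check (250)
 *	l2arc_range_check_overlap(200, 100, 150) == B_FALSE
 *		wrapped case: 150 falls in the gap between top and bottom
 */
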
EXPORT_SYMBOL(arc_buf_size);
EXPORT_SYMBOL(arc_write);
EXPORT_SYMBOL(arc_read);
EXPORT_SYMBOL(arc_buf_info);
EXPORT_SYMBOL(arc_getbuf_func);
EXPORT_SYMBOL(arc_add_prune_callback);
EXPORT_SYMBOL(arc_remove_prune_callback);

/* BEGIN CSTYLED */
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_long,
	param_get_long, ZMOD_RW, "Min arc size");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_long,
	param_get_long, ZMOD_RW, "Max arc size");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit, param_set_arc_long,
	param_get_long, ZMOD_RW, "Metadata limit for arc size");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit_percent,
	param_set_arc_long, param_get_long, ZMOD_RW,
	"Percent of arc size for arc meta limit");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_min, param_set_arc_long,
	param_get_long, ZMOD_RW, "Min arc metadata");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_prune, INT, ZMOD_RW,
	"Meta objects to scan for prune");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_adjust_restarts, INT, ZMOD_RW,
	"Limit number of restarts in arc_evict_meta");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_strategy, INT, ZMOD_RW,
	"Meta reclaim strategy");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int,
	param_get_int, ZMOD_RW, "Seconds before growing arc size");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, p_dampener_disable, INT, ZMOD_RW,
	"Disable arc_p adapt dampener");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int,
	param_get_int, ZMOD_RW, "log2(fraction of arc to reclaim)");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW,
	"Percent of pagecache to reclaim arc to");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, p_min_shift, param_set_arc_int,
	param_get_int, ZMOD_RW, "arc_c shift to calc min/max arc_p");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, INT, ZMOD_RD,
	"Target average block size");

ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW,
	"Enable compressed arc buffers");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int,
	param_get_int, ZMOD_RW, "Min life of prefetch block in ms");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms,
	param_set_arc_int, param_get_int, ZMOD_RW,
	"Min life of prescient prefetched block in ms");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, ULONG, ZMOD_RW,
	"Max write bytes per interval");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, ULONG, ZMOD_RW,
	"Extra write bytes during device warmup");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, ULONG, ZMOD_RW,
	"Number of max device writes to precache");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, ULONG, ZMOD_RW,
	"Compressed l2arc_headroom multiplier");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, ULONG, ZMOD_RW,
	"TRIM ahead L2ARC write size multiplier");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, ULONG, ZMOD_RW,
	"Seconds between L2ARC writing");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, ULONG, ZMOD_RW,
	"Min feed interval in milliseconds");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW,
	"Skip caching prefetched buffers");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW,
	"Turbo L2ARC warmup");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW,
	"No reads during writes");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, INT, ZMOD_RW,
	"Percent of ARC size allowed for L2ARC-only headers");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW,
	"Rebuild the L2ARC when importing a pool");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, ULONG, ZMOD_RW,
	"Min size in bytes to write rebuild log blocks in L2ARC");

ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW,
	"Cache only MFU data from ARC into L2ARC");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int,
	param_get_int, ZMOD_RW,
	"System free memory I/O throttle as a percent of all memory");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_long,
	param_get_long, ZMOD_RW, "System free memory target size in bytes");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_long,
	param_get_long, ZMOD_RW, "Minimum bytes of dnodes in arc");

ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent,
	param_set_arc_long, param_get_long, ZMOD_RW,
	"Percent of ARC meta buffers for dnodes");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, ULONG, ZMOD_RW,
	"Percentage of excess dnodes to try to unpin");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, INT, ZMOD_RW,
	"When full, ARC allocation waits for eviction of this % of alloc size");

ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, INT, ZMOD_RW,
	"The number of headers to evict per sublist before moving to the next");
/* END CSTYLED */
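
/*
 * Usage note (informational): on Linux builds these module parameters are
 * exposed under /sys/module/zfs/parameters/ with the prefix and name
 * concatenated, so the rebuild switch declared above can, for example, be
 * toggled with:
 *
 *	echo 0 > /sys/module/zfs/parameters/l2arc_rebuild_enabled
 *
 * The exact interface is platform-specific; FreeBSD exposes the same knobs
 * as sysctls (e.g. vfs.zfs.l2arc.rebuild_enabled).
 */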