4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
23 * Copyright 2011 Nexenta Systems, Inc. All rights reserved.
24 * Copyright (c) 2013 by Delphix. All rights reserved.
25 * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory. This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about. Our cache is not so simple. At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them. Blocks are only evictable
 * when there are no external references active. This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space. In these circumstances we are unable to adjust the cache
 * size. To prevent the cache from growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss. Our model has a variable sized cache. It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size. So
 * when adjusting the cache size following a cache miss, it is simply
 * a matter of choosing a single page to evict. In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes). We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block. (An illustrative sketch of this
 * selection follows below.)
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
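 *
 * Illustrative sketch of the selection described in (3) above -- not the
 * actual eviction code (see arc_evict() later in this file); the names
 * "needed", "evictable()" and "evict_one()" are hypothetical:
 *
 *	uint64_t freed = 0;
 *	arc_buf_hdr_t *ab, *ab_prev;
 *
 *	for (ab = list_tail(list); ab != NULL && freed < needed;
 *	    ab = ab_prev) {
 *		ab_prev = list_prev(list, ab);
 *		if (!evictable(ab))	/* still referenced; skip it */
 *			continue;
 *		freed += ab->b_size;	/* blocks are variable sized */
 *		evict_one(ab);
 *	}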
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists. The arc_read() interface
 * uses method 1, while the internal arc algorithms for
 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * arc list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table. It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each arc state also has a mutex which is used to protect the
 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an arc list lock you
 * must use mutex_tryenter() to avoid deadlock. Also note that
 * the active state mutex must be held before the ghost state mutex.
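 *
 * Illustrative sketch of the lock ordering rule above -- a simplified
 * version of what the eviction paths below do (the surrounding scan
 * loop is omitted for brevity):
 *
 *	mutex_enter(&state->arcs_mtx);		// arc list lock first
 *	hash_lock = HDR_LOCK(ab);
 *	if (mutex_tryenter(hash_lock)) {	// never mutex_enter() here
 *		// ... move or evict the buffer ...
 *		mutex_exit(hash_lock);
 *	} else {
 *		// skip this buffer; blocking could deadlock with a
 *		// thread that holds the hash lock and wants the list lock
 *	}
 *	mutex_exit(&state->arcs_mtx);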
 * Arc buffers may have an associated eviction callback function.
 * This function will be invoked prior to removing the buffer (e.g.
 * in arc_do_user_evicts()). Note however that the data associated
 * with the buffer may be evicted prior to the callback. The callback
 * must be made with *no locks held* (to prevent deadlock). Additionally,
 * the users of callbacks must ensure that their private data is
 * protected from simultaneous callbacks from arc_buf_evict()
 * and arc_do_user_evicts().
 *
 * It is also possible to register a callback which is run when the
 * arc_meta_limit is reached and no buffers can be safely evicted. In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed and the arc_meta_limit honored. For example,
 * when using the ZPL each dentry holds a reference on a znode. These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
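 *
 * A hedged sketch of registering such a callback; it assumes the
 * arc_add_prune_callback()/arc_remove_prune_callback() interface and a
 * hypothetical consumer-side my_prune() function:
 *
 *	static void
 *	my_prune(int64_t nr_to_prune, void *private)
 *	{
 *		// release up to nr_to_prune cached references (e.g.
 *		// dentries) so the arc buffers they pin become evictable
 *	}
 *
 *	arc_prune_t *p = arc_add_prune_callback(my_prune, my_state);
 *	...
 *	arc_remove_prune_callback(p);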
118 * Note that the majority of the performance stats are manipulated
119 * with atomic operations.
121 * The L2ARC uses the l2arc_buflist_mtx global mutex for the following:
123 * - L2ARC buflist creation
124 * - L2ARC buflist eviction
125 * - L2ARC write completion, which walks L2ARC buflists
126 * - ARC header destruction, as it removes from L2ARC buflists
127 * - ARC header release, as it removes from L2ARC buflists
132 #include <sys/zio_compress.h>
133 #include <sys/zfs_context.h>
135 #include <sys/vdev.h>
136 #include <sys/vdev_impl.h>
138 #include <sys/vmsystm.h>
140 #include <sys/fs/swapnode.h>
143 #include <sys/callb.h>
144 #include <sys/kstat.h>
145 #include <sys/dmu_tx.h>
146 #include <zfs_fletcher.h>
static kmutex_t		arc_reclaim_thr_lock;
static kcondvar_t	arc_reclaim_thr_cv;	/* used to signal reclaim thr */
static uint8_t		arc_thread_exit;

/* number of bytes to prune from caches when arc_meta_limit is reached */
int zfs_arc_meta_prune = 1048576;

typedef enum arc_reclaim_strategy {
	ARC_RECLAIM_AGGR,		/* Aggressive reclaim strategy */
	ARC_RECLAIM_CONS		/* Conservative reclaim strategy */
} arc_reclaim_strategy_t;

/* number of seconds before growing cache again */
int zfs_arc_grow_retry = 5;

/* shift of arc_c for calculating both min and max arc_p */
int zfs_arc_p_min_shift = 4;

/* log2(fraction of arc to reclaim) */
int zfs_arc_shrink_shift = 5;

/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
int zfs_arc_min_prefetch_lifespan = HZ;

/* disable proactive arc throttle due to low memory */
int zfs_arc_memory_throttle_disable = 1;

/* disable duplicate buffer eviction */
int zfs_disable_dup_eviction = 0;

/* expiration time for arc_no_grow */
static clock_t arc_grow_time = 0;

/*
 * The arc has filled available memory and has now warmed up.
 */
static boolean_t arc_warm;

/*
 * These tunables are for performance analysis.
 */
unsigned long zfs_arc_max = 0;
unsigned long zfs_arc_min = 0;
unsigned long zfs_arc_meta_limit = 0;
 * Note that buffers can be in one of 6 states:
 *	ARC_anon	- anonymous (discussed below)
 *	ARC_mru		- recently used, currently cached
 *	ARC_mru_ghost	- recently used, no longer in cache
 *	ARC_mfu		- frequently used, currently cached
 *	ARC_mfu_ghost	- frequently used, no longer in cache
 *	ARC_l2c_only	- exists in L2ARC but not other states
 * When there are no active references to the buffer, they are
 * linked onto a list in one of these arc states. These are
 * the only buffers that can be evicted or deleted. Within each
 * state there are multiple lists, one for meta-data and one for
 * non-meta-data. Meta-data (indirect blocks, blocks of dnodes,
 * etc.) is tracked separately so that it can be managed more
 * explicitly: favored over data, limited explicitly.
 *
 * Anonymous buffers are buffers that are not associated with
 * a DVA. These are buffers that hold dirty block copies
 * before they are written to stable storage. By definition,
 * they are "ref'd" and are considered part of arc_mru
 * that cannot be freed. Generally, they will acquire a DVA
 * as they are written and migrate onto the arc_mru list.
 *
 * The ARC_l2c_only state is for buffers that are in the second
 * level ARC but no longer in any of the ARC_m* lists. The second
 * level ARC itself may also contain buffers that are in any of
 * the ARC_m* states - meaning that a buffer can exist in two
 * places. The reason for the ARC_l2c_only state is to keep the
 * buffer header in the hash table, so that reads that hit the
 * second level ARC benefit from these fast lookups.
typedef struct arc_state {
	list_t	arcs_list[ARC_BUFC_NUMTYPES];	/* list of evictable buffers */
	uint64_t arcs_lsize[ARC_BUFC_NUMTYPES];	/* amount of evictable data */
	uint64_t arcs_size;	/* total amount of data in this state */
	kmutex_t arcs_mtx;
	arc_state_type_t arcs_state;
} arc_state_t;

static arc_state_t ARC_anon;
static arc_state_t ARC_mru;
static arc_state_t ARC_mru_ghost;
static arc_state_t ARC_mfu;
static arc_state_t ARC_mfu_ghost;
static arc_state_t ARC_l2c_only;
246 typedef struct arc_stats
{
247 kstat_named_t arcstat_hits
;
248 kstat_named_t arcstat_misses
;
249 kstat_named_t arcstat_demand_data_hits
;
250 kstat_named_t arcstat_demand_data_misses
;
251 kstat_named_t arcstat_demand_metadata_hits
;
252 kstat_named_t arcstat_demand_metadata_misses
;
253 kstat_named_t arcstat_prefetch_data_hits
;
254 kstat_named_t arcstat_prefetch_data_misses
;
255 kstat_named_t arcstat_prefetch_metadata_hits
;
256 kstat_named_t arcstat_prefetch_metadata_misses
;
257 kstat_named_t arcstat_mru_hits
;
258 kstat_named_t arcstat_mru_ghost_hits
;
259 kstat_named_t arcstat_mfu_hits
;
260 kstat_named_t arcstat_mfu_ghost_hits
;
261 kstat_named_t arcstat_deleted
;
262 kstat_named_t arcstat_recycle_miss
;
	/*
	 * Number of buffers that could not be evicted because the hash lock
	 * was held by another thread. The lock may not necessarily be held
	 * by something using the same buffer, since hash locks are shared
	 * by multiple buffers.
	 */
	kstat_named_t arcstat_mutex_miss;
	/*
	 * Number of buffers skipped because they have I/O in progress, are
	 * indirect prefetch buffers that have not lived long enough, or are
	 * not from the spa we're trying to evict from.
	 */
	kstat_named_t arcstat_evict_skip;
276 kstat_named_t arcstat_evict_l2_cached
;
277 kstat_named_t arcstat_evict_l2_eligible
;
278 kstat_named_t arcstat_evict_l2_ineligible
;
279 kstat_named_t arcstat_hash_elements
;
280 kstat_named_t arcstat_hash_elements_max
;
281 kstat_named_t arcstat_hash_collisions
;
282 kstat_named_t arcstat_hash_chains
;
283 kstat_named_t arcstat_hash_chain_max
;
284 kstat_named_t arcstat_p
;
285 kstat_named_t arcstat_c
;
286 kstat_named_t arcstat_c_min
;
287 kstat_named_t arcstat_c_max
;
288 kstat_named_t arcstat_size
;
289 kstat_named_t arcstat_hdr_size
;
290 kstat_named_t arcstat_data_size
;
291 kstat_named_t arcstat_other_size
;
292 kstat_named_t arcstat_anon_size
;
293 kstat_named_t arcstat_anon_evict_data
;
294 kstat_named_t arcstat_anon_evict_metadata
;
295 kstat_named_t arcstat_mru_size
;
296 kstat_named_t arcstat_mru_evict_data
;
297 kstat_named_t arcstat_mru_evict_metadata
;
298 kstat_named_t arcstat_mru_ghost_size
;
299 kstat_named_t arcstat_mru_ghost_evict_data
;
300 kstat_named_t arcstat_mru_ghost_evict_metadata
;
301 kstat_named_t arcstat_mfu_size
;
302 kstat_named_t arcstat_mfu_evict_data
;
303 kstat_named_t arcstat_mfu_evict_metadata
;
304 kstat_named_t arcstat_mfu_ghost_size
;
305 kstat_named_t arcstat_mfu_ghost_evict_data
;
306 kstat_named_t arcstat_mfu_ghost_evict_metadata
;
307 kstat_named_t arcstat_l2_hits
;
308 kstat_named_t arcstat_l2_misses
;
309 kstat_named_t arcstat_l2_feeds
;
310 kstat_named_t arcstat_l2_rw_clash
;
311 kstat_named_t arcstat_l2_read_bytes
;
312 kstat_named_t arcstat_l2_write_bytes
;
313 kstat_named_t arcstat_l2_writes_sent
;
314 kstat_named_t arcstat_l2_writes_done
;
315 kstat_named_t arcstat_l2_writes_error
;
316 kstat_named_t arcstat_l2_writes_hdr_miss
;
317 kstat_named_t arcstat_l2_evict_lock_retry
;
318 kstat_named_t arcstat_l2_evict_reading
;
319 kstat_named_t arcstat_l2_free_on_write
;
320 kstat_named_t arcstat_l2_abort_lowmem
;
321 kstat_named_t arcstat_l2_cksum_bad
;
322 kstat_named_t arcstat_l2_io_error
;
323 kstat_named_t arcstat_l2_size
;
324 kstat_named_t arcstat_l2_asize
;
325 kstat_named_t arcstat_l2_hdr_size
;
326 kstat_named_t arcstat_l2_compress_successes
;
327 kstat_named_t arcstat_l2_compress_zeros
;
328 kstat_named_t arcstat_l2_compress_failures
;
329 kstat_named_t arcstat_memory_throttle_count
;
330 kstat_named_t arcstat_duplicate_buffers
;
331 kstat_named_t arcstat_duplicate_buffers_size
;
332 kstat_named_t arcstat_duplicate_reads
;
333 kstat_named_t arcstat_memory_direct_count
;
334 kstat_named_t arcstat_memory_indirect_count
;
335 kstat_named_t arcstat_no_grow
;
336 kstat_named_t arcstat_tempreserve
;
337 kstat_named_t arcstat_loaned_bytes
;
338 kstat_named_t arcstat_prune
;
339 kstat_named_t arcstat_meta_used
;
340 kstat_named_t arcstat_meta_limit
;
341 kstat_named_t arcstat_meta_max
;
344 static arc_stats_t arc_stats
= {
345 { "hits", KSTAT_DATA_UINT64
},
346 { "misses", KSTAT_DATA_UINT64
},
347 { "demand_data_hits", KSTAT_DATA_UINT64
},
348 { "demand_data_misses", KSTAT_DATA_UINT64
},
349 { "demand_metadata_hits", KSTAT_DATA_UINT64
},
350 { "demand_metadata_misses", KSTAT_DATA_UINT64
},
351 { "prefetch_data_hits", KSTAT_DATA_UINT64
},
352 { "prefetch_data_misses", KSTAT_DATA_UINT64
},
353 { "prefetch_metadata_hits", KSTAT_DATA_UINT64
},
354 { "prefetch_metadata_misses", KSTAT_DATA_UINT64
},
355 { "mru_hits", KSTAT_DATA_UINT64
},
356 { "mru_ghost_hits", KSTAT_DATA_UINT64
},
357 { "mfu_hits", KSTAT_DATA_UINT64
},
358 { "mfu_ghost_hits", KSTAT_DATA_UINT64
},
359 { "deleted", KSTAT_DATA_UINT64
},
360 { "recycle_miss", KSTAT_DATA_UINT64
},
361 { "mutex_miss", KSTAT_DATA_UINT64
},
362 { "evict_skip", KSTAT_DATA_UINT64
},
363 { "evict_l2_cached", KSTAT_DATA_UINT64
},
364 { "evict_l2_eligible", KSTAT_DATA_UINT64
},
365 { "evict_l2_ineligible", KSTAT_DATA_UINT64
},
366 { "hash_elements", KSTAT_DATA_UINT64
},
367 { "hash_elements_max", KSTAT_DATA_UINT64
},
368 { "hash_collisions", KSTAT_DATA_UINT64
},
369 { "hash_chains", KSTAT_DATA_UINT64
},
370 { "hash_chain_max", KSTAT_DATA_UINT64
},
371 { "p", KSTAT_DATA_UINT64
},
372 { "c", KSTAT_DATA_UINT64
},
373 { "c_min", KSTAT_DATA_UINT64
},
374 { "c_max", KSTAT_DATA_UINT64
},
375 { "size", KSTAT_DATA_UINT64
},
376 { "hdr_size", KSTAT_DATA_UINT64
},
377 { "data_size", KSTAT_DATA_UINT64
},
378 { "other_size", KSTAT_DATA_UINT64
},
379 { "anon_size", KSTAT_DATA_UINT64
},
380 { "anon_evict_data", KSTAT_DATA_UINT64
},
381 { "anon_evict_metadata", KSTAT_DATA_UINT64
},
382 { "mru_size", KSTAT_DATA_UINT64
},
383 { "mru_evict_data", KSTAT_DATA_UINT64
},
384 { "mru_evict_metadata", KSTAT_DATA_UINT64
},
385 { "mru_ghost_size", KSTAT_DATA_UINT64
},
386 { "mru_ghost_evict_data", KSTAT_DATA_UINT64
},
387 { "mru_ghost_evict_metadata", KSTAT_DATA_UINT64
},
388 { "mfu_size", KSTAT_DATA_UINT64
},
389 { "mfu_evict_data", KSTAT_DATA_UINT64
},
390 { "mfu_evict_metadata", KSTAT_DATA_UINT64
},
391 { "mfu_ghost_size", KSTAT_DATA_UINT64
},
392 { "mfu_ghost_evict_data", KSTAT_DATA_UINT64
},
393 { "mfu_ghost_evict_metadata", KSTAT_DATA_UINT64
},
394 { "l2_hits", KSTAT_DATA_UINT64
},
395 { "l2_misses", KSTAT_DATA_UINT64
},
396 { "l2_feeds", KSTAT_DATA_UINT64
},
397 { "l2_rw_clash", KSTAT_DATA_UINT64
},
398 { "l2_read_bytes", KSTAT_DATA_UINT64
},
399 { "l2_write_bytes", KSTAT_DATA_UINT64
},
400 { "l2_writes_sent", KSTAT_DATA_UINT64
},
401 { "l2_writes_done", KSTAT_DATA_UINT64
},
402 { "l2_writes_error", KSTAT_DATA_UINT64
},
403 { "l2_writes_hdr_miss", KSTAT_DATA_UINT64
},
404 { "l2_evict_lock_retry", KSTAT_DATA_UINT64
},
405 { "l2_evict_reading", KSTAT_DATA_UINT64
},
406 { "l2_free_on_write", KSTAT_DATA_UINT64
},
407 { "l2_abort_lowmem", KSTAT_DATA_UINT64
},
408 { "l2_cksum_bad", KSTAT_DATA_UINT64
},
409 { "l2_io_error", KSTAT_DATA_UINT64
},
410 { "l2_size", KSTAT_DATA_UINT64
},
411 { "l2_asize", KSTAT_DATA_UINT64
},
412 { "l2_hdr_size", KSTAT_DATA_UINT64
},
413 { "l2_compress_successes", KSTAT_DATA_UINT64
},
414 { "l2_compress_zeros", KSTAT_DATA_UINT64
},
415 { "l2_compress_failures", KSTAT_DATA_UINT64
},
416 { "memory_throttle_count", KSTAT_DATA_UINT64
},
417 { "duplicate_buffers", KSTAT_DATA_UINT64
},
418 { "duplicate_buffers_size", KSTAT_DATA_UINT64
},
419 { "duplicate_reads", KSTAT_DATA_UINT64
},
420 { "memory_direct_count", KSTAT_DATA_UINT64
},
421 { "memory_indirect_count", KSTAT_DATA_UINT64
},
422 { "arc_no_grow", KSTAT_DATA_UINT64
},
423 { "arc_tempreserve", KSTAT_DATA_UINT64
},
424 { "arc_loaned_bytes", KSTAT_DATA_UINT64
},
425 { "arc_prune", KSTAT_DATA_UINT64
},
426 { "arc_meta_used", KSTAT_DATA_UINT64
},
427 { "arc_meta_limit", KSTAT_DATA_UINT64
},
428 { "arc_meta_max", KSTAT_DATA_UINT64
},
431 #define ARCSTAT(stat) (arc_stats.stat.value.ui64)
433 #define ARCSTAT_INCR(stat, val) \
434 atomic_add_64(&arc_stats.stat.value.ui64, (val));
436 #define ARCSTAT_BUMP(stat) ARCSTAT_INCR(stat, 1)
437 #define ARCSTAT_BUMPDOWN(stat) ARCSTAT_INCR(stat, -1)
439 #define ARCSTAT_MAX(stat, val) { \
441 while ((val) > (m = arc_stats.stat.value.ui64) && \
442 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
446 #define ARCSTAT_MAXSTAT(stat) \
447 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
450 * We define a macro to allow ARC hits/misses to be easily broken down by
451 * two separate conditions, giving a total of four different subtypes for
452 * each of hits and misses (so eight statistics total).
454 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
457 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
459 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
463 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
465 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
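/*
 * For example, arc_buf_add_ref() below uses this macro to bump one of the
 * four demand/prefetch x data/metadata hit counters in a single statement:
 *
 *	ARCSTAT_CONDSTAT(!(hdr->b_flags & ARC_PREFETCH),
 *	    demand, prefetch, hdr->b_type != ARC_BUFC_METADATA,
 *	    data, metadata, hits);
 */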
static arc_state_t	*arc_anon;
static arc_state_t	*arc_mru;
static arc_state_t	*arc_mru_ghost;
static arc_state_t	*arc_mfu;
static arc_state_t	*arc_mfu_ghost;
static arc_state_t	*arc_l2c_only;
478 * There are several ARC variables that are critical to export as kstats --
479 * but we don't want to have to grovel around in the kstat whenever we wish to
480 * manipulate them. For these variables, we therefore define them to be in
481 * terms of the statistic variable. This assures that we are not introducing
482 * the possibility of inconsistency by having shadow copies of the variables,
483 * while still allowing the code to be readable.
485 #define arc_size ARCSTAT(arcstat_size) /* actual total arc size */
486 #define arc_p ARCSTAT(arcstat_p) /* target size of MRU */
487 #define arc_c ARCSTAT(arcstat_c) /* target size of cache */
488 #define arc_c_min ARCSTAT(arcstat_c_min) /* min target cache size */
489 #define arc_c_max ARCSTAT(arcstat_c_max) /* max target cache size */
490 #define arc_no_grow ARCSTAT(arcstat_no_grow)
491 #define arc_tempreserve ARCSTAT(arcstat_tempreserve)
492 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes)
493 #define arc_meta_limit ARCSTAT(arcstat_meta_limit) /* max size for metadata */
494 #define arc_meta_used ARCSTAT(arcstat_meta_used) /* size of metadata */
495 #define arc_meta_max ARCSTAT(arcstat_meta_max) /* max size of metadata */
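/*
 * With these defines, code operates on the kstat values directly; e.g.
 * arc_space_consume() below simply does atomic_add_64(&arc_size, space),
 * and the "size" kstat stays consistent with no separate shadow variable.
 */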
497 #define L2ARC_IS_VALID_COMPRESS(_c_) \
498 ((_c_) == ZIO_COMPRESS_LZ4 || (_c_) == ZIO_COMPRESS_EMPTY)
500 typedef struct l2arc_buf_hdr l2arc_buf_hdr_t
;
502 typedef struct arc_callback arc_callback_t
;
504 struct arc_callback
{
506 arc_done_func_t
*acb_done
;
508 zio_t
*acb_zio_dummy
;
509 arc_callback_t
*acb_next
;
512 typedef struct arc_write_callback arc_write_callback_t
;
514 struct arc_write_callback
{
516 arc_done_func_t
*awcb_ready
;
517 arc_done_func_t
*awcb_done
;
522 /* protected by hash lock */
527 kmutex_t b_freeze_lock
;
528 zio_cksum_t
*b_freeze_cksum
;
530 arc_buf_hdr_t
*b_hash_next
;
535 arc_callback_t
*b_acb
;
539 arc_buf_contents_t b_type
;
543 /* protected by arc state mutex */
544 arc_state_t
*b_state
;
545 list_node_t b_arc_node
;
547 /* updated atomically */
548 clock_t b_arc_access
;
550 uint32_t b_mru_ghost_hits
;
552 uint32_t b_mfu_ghost_hits
;
555 /* self protecting */
558 l2arc_buf_hdr_t
*b_l2hdr
;
559 list_node_t b_l2node
;
static list_t arc_prune_list;
static kmutex_t arc_prune_mtx;
static arc_buf_t *arc_eviction_list;
static kmutex_t arc_eviction_mtx;
static arc_buf_hdr_t arc_eviction_hdr;
static void arc_get_data_buf(arc_buf_t *buf);
static void arc_access(arc_buf_hdr_t *buf, kmutex_t *hash_lock);
static int arc_evict_needed(arc_buf_contents_t type);
static void arc_evict_ghost(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type);

static boolean_t l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *ab);
575 #define GHOST_STATE(state) \
576 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
577 (state) == arc_l2c_only)
580 * Private ARC flags. These flags are private ARC only flags that will show up
 581 * in b_flags in the arc_buf_hdr_t. Some flags are publicly declared, and can
582 * be passed in as arc_flags in things like arc_read. However, these flags
583 * should never be passed and should only be set by ARC code. When adding new
584 * public flags, make sure not to smash the private ones.
587 #define ARC_IN_HASH_TABLE (1 << 9) /* this buffer is hashed */
588 #define ARC_IO_IN_PROGRESS (1 << 10) /* I/O in progress for buf */
589 #define ARC_IO_ERROR (1 << 11) /* I/O failed for buf */
590 #define ARC_FREED_IN_READ (1 << 12) /* buf freed while in read */
591 #define ARC_BUF_AVAILABLE (1 << 13) /* block not in active use */
592 #define ARC_INDIRECT (1 << 14) /* this is an indirect block */
593 #define ARC_FREE_IN_PROGRESS (1 << 15) /* hdr about to be freed */
594 #define ARC_L2_WRITING (1 << 16) /* L2ARC write in progress */
595 #define ARC_L2_EVICTED (1 << 17) /* evicted during I/O */
596 #define ARC_L2_WRITE_HEAD (1 << 18) /* head of write list */
598 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_IN_HASH_TABLE)
599 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_IO_IN_PROGRESS)
600 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_IO_ERROR)
601 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_PREFETCH)
602 #define HDR_FREED_IN_READ(hdr) ((hdr)->b_flags & ARC_FREED_IN_READ)
603 #define HDR_BUF_AVAILABLE(hdr) ((hdr)->b_flags & ARC_BUF_AVAILABLE)
604 #define HDR_FREE_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FREE_IN_PROGRESS)
605 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_L2CACHE)
606 #define HDR_L2_READING(hdr) ((hdr)->b_flags & ARC_IO_IN_PROGRESS && \
607 (hdr)->b_l2hdr != NULL)
608 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_L2_WRITING)
609 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_L2_EVICTED)
610 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_L2_WRITE_HEAD)
616 #define HDR_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
617 #define L2HDR_SIZE ((int64_t)sizeof (l2arc_buf_hdr_t))
620 * Hash table routines
623 #define HT_LOCK_ALIGN 64
624 #define HT_LOCK_PAD (P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))
629 unsigned char pad
[HT_LOCK_PAD
];
633 #define BUF_LOCKS 256
634 typedef struct buf_hash_table
{
636 arc_buf_hdr_t
**ht_table
;
637 struct ht_lock ht_locks
[BUF_LOCKS
];
640 static buf_hash_table_t buf_hash_table
;
642 #define BUF_HASH_INDEX(spa, dva, birth) \
643 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
644 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
645 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
646 #define HDR_LOCK(hdr) \
647 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
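/*
 * Typical pattern for taking a header's hash lock via the macro above
 * (as used by arc_buf_freeze() and other routines below):
 *
 *	kmutex_t *hash_lock = HDR_LOCK(buf->b_hdr);
 *	mutex_enter(hash_lock);
 *	// ... operate on fields protected by the hash lock ...
 *	mutex_exit(hash_lock);
 */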
uint64_t zfs_crc64_table[256];
655 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */
656 #define L2ARC_HEADROOM 2 /* num of writes */
658 * If we discover during ARC scan any buffers to be compressed, we boost
659 * our headroom for the next scanning cycle by this percentage multiple.
661 #define L2ARC_HEADROOM_BOOST 200
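/*
 * e.g. with the default of 200, a scan that finds compressible buffers
 * scales the next cycle's headroom by 200 / 100, i.e. doubles it.
 */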
662 #define L2ARC_FEED_SECS 1 /* caching interval secs */
663 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */
665 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent)
666 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done)
669 * L2ARC Performance Tunables
unsigned long l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
unsigned long l2arc_headroom = L2ARC_HEADROOM;		/* # of dev writes */
unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
unsigned long l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_nocompress = B_FALSE;			/* don't compress bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
typedef struct l2arc_dev {
	vdev_t		*l2ad_vdev;	/* vdev */
	spa_t		*l2ad_spa;	/* spa */
	uint64_t	l2ad_hand;	/* next write location */
	uint64_t	l2ad_start;	/* first addr on device */
	uint64_t	l2ad_end;	/* last addr on device */
	uint64_t	l2ad_evict;	/* last addr eviction reached */
	boolean_t	l2ad_first;	/* first sweep through */
	boolean_t	l2ad_writing;	/* currently writing */
	list_t		*l2ad_buflist;	/* buffer list */
	list_node_t	l2ad_node;	/* device list node */
} l2arc_dev_t;

static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static kmutex_t l2arc_buflist_mtx;		/* mutex for all buflists */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */
typedef struct l2arc_read_callback {
	arc_buf_t		*l2rcb_buf;		/* read buffer */
	spa_t			*l2rcb_spa;		/* spa */
	blkptr_t		l2rcb_bp;		/* original blkptr */
	zbookmark_t		l2rcb_zb;		/* original bookmark */
	int			l2rcb_flags;		/* original flags */
	enum zio_compress	l2rcb_compress;		/* applied compress */
} l2arc_read_callback_t;

typedef struct l2arc_write_callback {
	l2arc_dev_t	*l2wcb_dev;		/* device info */
	arc_buf_hdr_t	*l2wcb_head;		/* head of write buflist */
} l2arc_write_callback_t;
722 struct l2arc_buf_hdr
{
723 /* protected by arc_buf_hdr mutex */
724 l2arc_dev_t
*b_dev
; /* L2ARC device */
725 uint64_t b_daddr
; /* disk address, offset byte */
726 /* compression applied to buffer data */
727 enum zio_compress b_compress
;
728 /* real alloc'd buffer size depending on b_compress applied */
731 /* temporary buffer holder for in-flight compressed data */
735 typedef struct l2arc_data_free
{
736 /* protected by l2arc_free_on_write_mtx */
739 void (*l2df_func
)(void *, size_t);
740 list_node_t l2df_list_node
;
static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static void l2arc_read_done(zio_t *zio);
static void l2arc_hdr_stat_add(void);
static void l2arc_hdr_stat_remove(void);

static boolean_t l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr);
static void l2arc_decompress_zio(zio_t *zio, arc_buf_hdr_t *hdr,
    enum zio_compress c);
static void l2arc_release_cdata_buf(arc_buf_hdr_t *ab);
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
	uint8_t *vdva = (uint8_t *)dva;
	uint64_t crc = -1ULL;
	int i;

	ASSERT(zfs_crc64_table[128] == ZFS_CRC64_POLY);

	for (i = 0; i < sizeof (dva_t); i++)
		crc = (crc >> 8) ^ zfs_crc64_table[(crc ^ vdva[i]) & 0xFF];

	crc ^= (spa >> 8) ^ birth;

	return (crc);
}
#define	BUF_EMPTY(buf)						\
	((buf)->b_dva.dva_word[0] == 0 &&			\
	(buf)->b_dva.dva_word[1] == 0 &&			\
	(buf)->b_birth == 0)
778 #define BUF_EQUAL(spa, dva, birth, buf) \
779 ((buf)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
780 ((buf)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
781 ((buf)->b_birth == birth) && ((buf)->b_spa == spa)
784 buf_discard_identity(arc_buf_hdr_t
*hdr
)
786 hdr
->b_dva
.dva_word
[0] = 0;
787 hdr
->b_dva
.dva_word
[1] = 0;
792 static arc_buf_hdr_t
*
793 buf_hash_find(uint64_t spa
, const dva_t
*dva
, uint64_t birth
, kmutex_t
**lockp
)
795 uint64_t idx
= BUF_HASH_INDEX(spa
, dva
, birth
);
796 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
799 mutex_enter(hash_lock
);
800 for (buf
= buf_hash_table
.ht_table
[idx
]; buf
!= NULL
;
801 buf
= buf
->b_hash_next
) {
802 if (BUF_EQUAL(spa
, dva
, birth
, buf
)) {
807 mutex_exit(hash_lock
);
813 * Insert an entry into the hash table. If there is already an element
814 * equal to elem in the hash table, then the already existing element
815 * will be returned and the new element will not be inserted.
816 * Otherwise returns NULL.
818 static arc_buf_hdr_t
*
819 buf_hash_insert(arc_buf_hdr_t
*buf
, kmutex_t
**lockp
)
821 uint64_t idx
= BUF_HASH_INDEX(buf
->b_spa
, &buf
->b_dva
, buf
->b_birth
);
822 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
826 ASSERT(!HDR_IN_HASH_TABLE(buf
));
828 mutex_enter(hash_lock
);
829 for (fbuf
= buf_hash_table
.ht_table
[idx
], i
= 0; fbuf
!= NULL
;
830 fbuf
= fbuf
->b_hash_next
, i
++) {
831 if (BUF_EQUAL(buf
->b_spa
, &buf
->b_dva
, buf
->b_birth
, fbuf
))
835 buf
->b_hash_next
= buf_hash_table
.ht_table
[idx
];
836 buf_hash_table
.ht_table
[idx
] = buf
;
837 buf
->b_flags
|= ARC_IN_HASH_TABLE
;
839 /* collect some hash table performance data */
841 ARCSTAT_BUMP(arcstat_hash_collisions
);
843 ARCSTAT_BUMP(arcstat_hash_chains
);
845 ARCSTAT_MAX(arcstat_hash_chain_max
, i
);
848 ARCSTAT_BUMP(arcstat_hash_elements
);
849 ARCSTAT_MAXSTAT(arcstat_hash_elements
);
855 buf_hash_remove(arc_buf_hdr_t
*buf
)
857 arc_buf_hdr_t
*fbuf
, **bufp
;
858 uint64_t idx
= BUF_HASH_INDEX(buf
->b_spa
, &buf
->b_dva
, buf
->b_birth
);
860 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx
)));
861 ASSERT(HDR_IN_HASH_TABLE(buf
));
863 bufp
= &buf_hash_table
.ht_table
[idx
];
864 while ((fbuf
= *bufp
) != buf
) {
865 ASSERT(fbuf
!= NULL
);
866 bufp
= &fbuf
->b_hash_next
;
868 *bufp
= buf
->b_hash_next
;
869 buf
->b_hash_next
= NULL
;
870 buf
->b_flags
&= ~ARC_IN_HASH_TABLE
;
872 /* collect some hash table performance data */
873 ARCSTAT_BUMPDOWN(arcstat_hash_elements
);
875 if (buf_hash_table
.ht_table
[idx
] &&
876 buf_hash_table
.ht_table
[idx
]->b_hash_next
== NULL
)
877 ARCSTAT_BUMPDOWN(arcstat_hash_chains
);
881 * Global data structures and functions for the buf kmem cache.
883 static kmem_cache_t
*hdr_cache
;
884 static kmem_cache_t
*buf_cache
;
891 #if defined(_KERNEL) && defined(HAVE_SPL)
892 /* Large allocations which do not require contiguous pages
893 * should be using vmem_free() in the linux kernel */
894 vmem_free(buf_hash_table
.ht_table
,
895 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
897 kmem_free(buf_hash_table
.ht_table
,
898 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
900 for (i
= 0; i
< BUF_LOCKS
; i
++)
901 mutex_destroy(&buf_hash_table
.ht_locks
[i
].ht_lock
);
902 kmem_cache_destroy(hdr_cache
);
903 kmem_cache_destroy(buf_cache
);
907 * Constructor callback - called when the cache is empty
908 * and a new buf is requested.
912 hdr_cons(void *vbuf
, void *unused
, int kmflag
)
914 arc_buf_hdr_t
*buf
= vbuf
;
916 bzero(buf
, sizeof (arc_buf_hdr_t
));
917 refcount_create(&buf
->b_refcnt
);
918 cv_init(&buf
->b_cv
, NULL
, CV_DEFAULT
, NULL
);
919 mutex_init(&buf
->b_freeze_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
920 list_link_init(&buf
->b_arc_node
);
921 list_link_init(&buf
->b_l2node
);
922 arc_space_consume(sizeof (arc_buf_hdr_t
), ARC_SPACE_HDRS
);
929 buf_cons(void *vbuf
, void *unused
, int kmflag
)
931 arc_buf_t
*buf
= vbuf
;
933 bzero(buf
, sizeof (arc_buf_t
));
934 mutex_init(&buf
->b_evict_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
935 arc_space_consume(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
941 * Destructor callback - called when a cached buf is
942 * no longer required.
946 hdr_dest(void *vbuf
, void *unused
)
948 arc_buf_hdr_t
*buf
= vbuf
;
950 ASSERT(BUF_EMPTY(buf
));
951 refcount_destroy(&buf
->b_refcnt
);
952 cv_destroy(&buf
->b_cv
);
953 mutex_destroy(&buf
->b_freeze_lock
);
954 arc_space_return(sizeof (arc_buf_hdr_t
), ARC_SPACE_HDRS
);
959 buf_dest(void *vbuf
, void *unused
)
961 arc_buf_t
*buf
= vbuf
;
963 mutex_destroy(&buf
->b_evict_lock
);
964 arc_space_return(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
971 uint64_t hsize
= 1ULL << 12;
975 * The hash table is big enough to fill all of physical memory
976 * with an average 64K block size. The table will take up
977 * totalmem*sizeof(void*)/64K (eg. 128KB/GB with 8-byte pointers).
979 while (hsize
* 65536 < physmem
* PAGESIZE
)
982 buf_hash_table
.ht_mask
= hsize
- 1;
983 #if defined(_KERNEL) && defined(HAVE_SPL)
984 /* Large allocations which do not require contiguous pages
985 * should be using vmem_alloc() in the linux kernel */
986 buf_hash_table
.ht_table
=
987 vmem_zalloc(hsize
* sizeof (void*), KM_SLEEP
);
989 buf_hash_table
.ht_table
=
990 kmem_zalloc(hsize
* sizeof (void*), KM_NOSLEEP
);
992 if (buf_hash_table
.ht_table
== NULL
) {
993 ASSERT(hsize
> (1ULL << 8));
998 hdr_cache
= kmem_cache_create("arc_buf_hdr_t", sizeof (arc_buf_hdr_t
),
999 0, hdr_cons
, hdr_dest
, NULL
, NULL
, NULL
, 0);
1000 buf_cache
= kmem_cache_create("arc_buf_t", sizeof (arc_buf_t
),
1001 0, buf_cons
, buf_dest
, NULL
, NULL
, NULL
, 0);
1003 for (i
= 0; i
< 256; i
++)
1004 for (ct
= zfs_crc64_table
+ i
, *ct
= i
, j
= 8; j
> 0; j
--)
1005 *ct
= (*ct
>> 1) ^ (-(*ct
& 1) & ZFS_CRC64_POLY
);
1007 for (i
= 0; i
< BUF_LOCKS
; i
++) {
1008 mutex_init(&buf_hash_table
.ht_locks
[i
].ht_lock
,
1009 NULL
, MUTEX_DEFAULT
, NULL
);
1013 #define ARC_MINTIME (hz>>4) /* 62 ms */
1016 arc_cksum_verify(arc_buf_t
*buf
)
1020 if (!(zfs_flags
& ZFS_DEBUG_MODIFY
))
1023 mutex_enter(&buf
->b_hdr
->b_freeze_lock
);
1024 if (buf
->b_hdr
->b_freeze_cksum
== NULL
||
1025 (buf
->b_hdr
->b_flags
& ARC_IO_ERROR
)) {
1026 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1029 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
, &zc
);
1030 if (!ZIO_CHECKSUM_EQUAL(*buf
->b_hdr
->b_freeze_cksum
, zc
))
1031 panic("buffer modified while frozen!");
1032 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1036 arc_cksum_equal(arc_buf_t
*buf
)
1041 mutex_enter(&buf
->b_hdr
->b_freeze_lock
);
1042 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
, &zc
);
1043 equal
= ZIO_CHECKSUM_EQUAL(*buf
->b_hdr
->b_freeze_cksum
, zc
);
1044 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1050 arc_cksum_compute(arc_buf_t
*buf
, boolean_t force
)
1052 if (!force
&& !(zfs_flags
& ZFS_DEBUG_MODIFY
))
1055 mutex_enter(&buf
->b_hdr
->b_freeze_lock
);
1056 if (buf
->b_hdr
->b_freeze_cksum
!= NULL
) {
1057 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1060 buf
->b_hdr
->b_freeze_cksum
= kmem_alloc(sizeof (zio_cksum_t
),
1062 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
,
1063 buf
->b_hdr
->b_freeze_cksum
);
1064 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1068 arc_buf_thaw(arc_buf_t
*buf
)
1070 if (zfs_flags
& ZFS_DEBUG_MODIFY
) {
1071 if (buf
->b_hdr
->b_state
!= arc_anon
)
1072 panic("modifying non-anon buffer!");
1073 if (buf
->b_hdr
->b_flags
& ARC_IO_IN_PROGRESS
)
1074 panic("modifying buffer while i/o in progress!");
1075 arc_cksum_verify(buf
);
1078 mutex_enter(&buf
->b_hdr
->b_freeze_lock
);
1079 if (buf
->b_hdr
->b_freeze_cksum
!= NULL
) {
1080 kmem_free(buf
->b_hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
1081 buf
->b_hdr
->b_freeze_cksum
= NULL
;
1084 mutex_exit(&buf
->b_hdr
->b_freeze_lock
);
1088 arc_buf_freeze(arc_buf_t
*buf
)
1090 kmutex_t
*hash_lock
;
1092 if (!(zfs_flags
& ZFS_DEBUG_MODIFY
))
1095 hash_lock
= HDR_LOCK(buf
->b_hdr
);
1096 mutex_enter(hash_lock
);
1098 ASSERT(buf
->b_hdr
->b_freeze_cksum
!= NULL
||
1099 buf
->b_hdr
->b_state
== arc_anon
);
1100 arc_cksum_compute(buf
, B_FALSE
);
1101 mutex_exit(hash_lock
);
1105 add_reference(arc_buf_hdr_t
*ab
, kmutex_t
*hash_lock
, void *tag
)
1107 ASSERT(MUTEX_HELD(hash_lock
));
1109 if ((refcount_add(&ab
->b_refcnt
, tag
) == 1) &&
1110 (ab
->b_state
!= arc_anon
)) {
1111 uint64_t delta
= ab
->b_size
* ab
->b_datacnt
;
1112 list_t
*list
= &ab
->b_state
->arcs_list
[ab
->b_type
];
1113 uint64_t *size
= &ab
->b_state
->arcs_lsize
[ab
->b_type
];
1115 ASSERT(!MUTEX_HELD(&ab
->b_state
->arcs_mtx
));
1116 mutex_enter(&ab
->b_state
->arcs_mtx
);
1117 ASSERT(list_link_active(&ab
->b_arc_node
));
1118 list_remove(list
, ab
);
1119 if (GHOST_STATE(ab
->b_state
)) {
1120 ASSERT0(ab
->b_datacnt
);
1121 ASSERT3P(ab
->b_buf
, ==, NULL
);
1125 ASSERT3U(*size
, >=, delta
);
1126 atomic_add_64(size
, -delta
);
1127 mutex_exit(&ab
->b_state
->arcs_mtx
);
1128 /* remove the prefetch flag if we get a reference */
1129 if (ab
->b_flags
& ARC_PREFETCH
)
1130 ab
->b_flags
&= ~ARC_PREFETCH
;
1135 remove_reference(arc_buf_hdr_t
*ab
, kmutex_t
*hash_lock
, void *tag
)
1138 arc_state_t
*state
= ab
->b_state
;
1140 ASSERT(state
== arc_anon
|| MUTEX_HELD(hash_lock
));
1141 ASSERT(!GHOST_STATE(state
));
1143 if (((cnt
= refcount_remove(&ab
->b_refcnt
, tag
)) == 0) &&
1144 (state
!= arc_anon
)) {
1145 uint64_t *size
= &state
->arcs_lsize
[ab
->b_type
];
1147 ASSERT(!MUTEX_HELD(&state
->arcs_mtx
));
1148 mutex_enter(&state
->arcs_mtx
);
1149 ASSERT(!list_link_active(&ab
->b_arc_node
));
1150 list_insert_head(&state
->arcs_list
[ab
->b_type
], ab
);
1151 ASSERT(ab
->b_datacnt
> 0);
1152 atomic_add_64(size
, ab
->b_size
* ab
->b_datacnt
);
1153 mutex_exit(&state
->arcs_mtx
);
1159 * Returns detailed information about a specific arc buffer. When the
1160 * state_index argument is set the function will calculate the arc header
1161 * list position for its arc state. Since this requires a linear traversal
 1162 * callers are strongly encouraged not to do this. However, it can be helpful
1163 * for targeted analysis so the functionality is provided.
1166 arc_buf_info(arc_buf_t
*ab
, arc_buf_info_t
*abi
, int state_index
)
1168 arc_buf_hdr_t
*hdr
= ab
->b_hdr
;
1169 arc_state_t
*state
= hdr
->b_state
;
1171 memset(abi
, 0, sizeof(arc_buf_info_t
));
1172 abi
->abi_flags
= hdr
->b_flags
;
1173 abi
->abi_datacnt
= hdr
->b_datacnt
;
1174 abi
->abi_state_type
= state
? state
->arcs_state
: ARC_STATE_ANON
;
1175 abi
->abi_state_contents
= hdr
->b_type
;
1176 abi
->abi_state_index
= -1;
1177 abi
->abi_size
= hdr
->b_size
;
1178 abi
->abi_access
= hdr
->b_arc_access
;
1179 abi
->abi_mru_hits
= hdr
->b_mru_hits
;
1180 abi
->abi_mru_ghost_hits
= hdr
->b_mru_ghost_hits
;
1181 abi
->abi_mfu_hits
= hdr
->b_mfu_hits
;
1182 abi
->abi_mfu_ghost_hits
= hdr
->b_mfu_ghost_hits
;
1183 abi
->abi_holds
= refcount_count(&hdr
->b_refcnt
);
1186 abi
->abi_l2arc_dattr
= hdr
->b_l2hdr
->b_daddr
;
1187 abi
->abi_l2arc_asize
= hdr
->b_l2hdr
->b_asize
;
1188 abi
->abi_l2arc_compress
= hdr
->b_l2hdr
->b_compress
;
1189 abi
->abi_l2arc_hits
= hdr
->b_l2hdr
->b_hits
;
1192 if (state
&& state_index
&& list_link_active(&hdr
->b_arc_node
)) {
1193 list_t
*list
= &state
->arcs_list
[hdr
->b_type
];
1196 mutex_enter(&state
->arcs_mtx
);
1197 for (h
= list_head(list
); h
!= NULL
; h
= list_next(list
, h
)) {
1198 abi
->abi_state_index
++;
1202 mutex_exit(&state
->arcs_mtx
);
1207 * Move the supplied buffer to the indicated state. The mutex
1208 * for the buffer must be held by the caller.
1211 arc_change_state(arc_state_t
*new_state
, arc_buf_hdr_t
*ab
, kmutex_t
*hash_lock
)
1213 arc_state_t
*old_state
= ab
->b_state
;
1214 int64_t refcnt
= refcount_count(&ab
->b_refcnt
);
1215 uint64_t from_delta
, to_delta
;
1217 ASSERT(MUTEX_HELD(hash_lock
));
1218 ASSERT(new_state
!= old_state
);
1219 ASSERT(refcnt
== 0 || ab
->b_datacnt
> 0);
1220 ASSERT(ab
->b_datacnt
== 0 || !GHOST_STATE(new_state
));
1221 ASSERT(ab
->b_datacnt
<= 1 || old_state
!= arc_anon
);
1223 from_delta
= to_delta
= ab
->b_datacnt
* ab
->b_size
;
1226 * If this buffer is evictable, transfer it from the
1227 * old state list to the new state list.
1230 if (old_state
!= arc_anon
) {
1231 int use_mutex
= !MUTEX_HELD(&old_state
->arcs_mtx
);
1232 uint64_t *size
= &old_state
->arcs_lsize
[ab
->b_type
];
1235 mutex_enter(&old_state
->arcs_mtx
);
1237 ASSERT(list_link_active(&ab
->b_arc_node
));
1238 list_remove(&old_state
->arcs_list
[ab
->b_type
], ab
);
1241 * If prefetching out of the ghost cache,
1242 * we will have a non-zero datacnt.
1244 if (GHOST_STATE(old_state
) && ab
->b_datacnt
== 0) {
1245 /* ghost elements have a ghost size */
1246 ASSERT(ab
->b_buf
== NULL
);
1247 from_delta
= ab
->b_size
;
1249 ASSERT3U(*size
, >=, from_delta
);
1250 atomic_add_64(size
, -from_delta
);
1253 mutex_exit(&old_state
->arcs_mtx
);
1255 if (new_state
!= arc_anon
) {
1256 int use_mutex
= !MUTEX_HELD(&new_state
->arcs_mtx
);
1257 uint64_t *size
= &new_state
->arcs_lsize
[ab
->b_type
];
1260 mutex_enter(&new_state
->arcs_mtx
);
1262 list_insert_head(&new_state
->arcs_list
[ab
->b_type
], ab
);
1264 /* ghost elements have a ghost size */
1265 if (GHOST_STATE(new_state
)) {
1266 ASSERT(ab
->b_datacnt
== 0);
1267 ASSERT(ab
->b_buf
== NULL
);
1268 to_delta
= ab
->b_size
;
1270 atomic_add_64(size
, to_delta
);
1273 mutex_exit(&new_state
->arcs_mtx
);
1277 ASSERT(!BUF_EMPTY(ab
));
1278 if (new_state
== arc_anon
&& HDR_IN_HASH_TABLE(ab
))
1279 buf_hash_remove(ab
);
1281 /* adjust state sizes */
1283 atomic_add_64(&new_state
->arcs_size
, to_delta
);
1285 ASSERT3U(old_state
->arcs_size
, >=, from_delta
);
1286 atomic_add_64(&old_state
->arcs_size
, -from_delta
);
1288 ab
->b_state
= new_state
;
1290 /* adjust l2arc hdr stats */
1291 if (new_state
== arc_l2c_only
)
1292 l2arc_hdr_stat_add();
1293 else if (old_state
== arc_l2c_only
)
1294 l2arc_hdr_stat_remove();
1298 arc_space_consume(uint64_t space
, arc_space_type_t type
)
1300 ASSERT(type
>= 0 && type
< ARC_SPACE_NUMTYPES
);
1305 case ARC_SPACE_DATA
:
1306 ARCSTAT_INCR(arcstat_data_size
, space
);
1308 case ARC_SPACE_OTHER
:
1309 ARCSTAT_INCR(arcstat_other_size
, space
);
1311 case ARC_SPACE_HDRS
:
1312 ARCSTAT_INCR(arcstat_hdr_size
, space
);
1314 case ARC_SPACE_L2HDRS
:
1315 ARCSTAT_INCR(arcstat_l2_hdr_size
, space
);
1319 ARCSTAT_INCR(arcstat_meta_used
, space
);
1320 atomic_add_64(&arc_size
, space
);
1324 arc_space_return(uint64_t space
, arc_space_type_t type
)
1326 ASSERT(type
>= 0 && type
< ARC_SPACE_NUMTYPES
);
1331 case ARC_SPACE_DATA
:
1332 ARCSTAT_INCR(arcstat_data_size
, -space
);
1334 case ARC_SPACE_OTHER
:
1335 ARCSTAT_INCR(arcstat_other_size
, -space
);
1337 case ARC_SPACE_HDRS
:
1338 ARCSTAT_INCR(arcstat_hdr_size
, -space
);
1340 case ARC_SPACE_L2HDRS
:
1341 ARCSTAT_INCR(arcstat_l2_hdr_size
, -space
);
1345 ASSERT(arc_meta_used
>= space
);
1346 if (arc_meta_max
< arc_meta_used
)
1347 arc_meta_max
= arc_meta_used
;
1348 ARCSTAT_INCR(arcstat_meta_used
, -space
);
1349 ASSERT(arc_size
>= space
);
1350 atomic_add_64(&arc_size
, -space
);
1354 arc_buf_alloc(spa_t
*spa
, int size
, void *tag
, arc_buf_contents_t type
)
1359 ASSERT3U(size
, >, 0);
1360 hdr
= kmem_cache_alloc(hdr_cache
, KM_PUSHPAGE
);
1361 ASSERT(BUF_EMPTY(hdr
));
1364 hdr
->b_spa
= spa_load_guid(spa
);
1365 hdr
->b_state
= arc_anon
;
1366 hdr
->b_arc_access
= 0;
1367 hdr
->b_mru_hits
= 0;
1368 hdr
->b_mru_ghost_hits
= 0;
1369 hdr
->b_mfu_hits
= 0;
1370 hdr
->b_mfu_ghost_hits
= 0;
1372 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
1375 buf
->b_efunc
= NULL
;
1376 buf
->b_private
= NULL
;
1379 arc_get_data_buf(buf
);
1382 ASSERT(refcount_is_zero(&hdr
->b_refcnt
));
1383 (void) refcount_add(&hdr
->b_refcnt
, tag
);
1388 static char *arc_onloan_tag
= "onloan";
1391 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
1392 * flight data by arc_tempreserve_space() until they are "returned". Loaned
 1393 * buffers must be returned to the arc before they can be used by the DMU or freed.
1397 arc_loan_buf(spa_t
*spa
, int size
)
1401 buf
= arc_buf_alloc(spa
, size
, arc_onloan_tag
, ARC_BUFC_DATA
);
1403 atomic_add_64(&arc_loaned_bytes
, size
);
1408 * Return a loaned arc buffer to the arc.
1411 arc_return_buf(arc_buf_t
*buf
, void *tag
)
1413 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1415 ASSERT(buf
->b_data
!= NULL
);
1416 (void) refcount_add(&hdr
->b_refcnt
, tag
);
1417 (void) refcount_remove(&hdr
->b_refcnt
, arc_onloan_tag
);
1419 atomic_add_64(&arc_loaned_bytes
, -hdr
->b_size
);
1422 /* Detach an arc_buf from a dbuf (tag) */
1424 arc_loan_inuse_buf(arc_buf_t
*buf
, void *tag
)
1428 ASSERT(buf
->b_data
!= NULL
);
1430 (void) refcount_add(&hdr
->b_refcnt
, arc_onloan_tag
);
1431 (void) refcount_remove(&hdr
->b_refcnt
, tag
);
1432 buf
->b_efunc
= NULL
;
1433 buf
->b_private
= NULL
;
1435 atomic_add_64(&arc_loaned_bytes
, hdr
->b_size
);
1439 arc_buf_clone(arc_buf_t
*from
)
1442 arc_buf_hdr_t
*hdr
= from
->b_hdr
;
1443 uint64_t size
= hdr
->b_size
;
1445 ASSERT(hdr
->b_state
!= arc_anon
);
1447 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
1450 buf
->b_efunc
= NULL
;
1451 buf
->b_private
= NULL
;
1452 buf
->b_next
= hdr
->b_buf
;
1454 arc_get_data_buf(buf
);
1455 bcopy(from
->b_data
, buf
->b_data
, size
);
1458 * This buffer already exists in the arc so create a duplicate
1459 * copy for the caller. If the buffer is associated with user data
1460 * then track the size and number of duplicates. These stats will be
1461 * updated as duplicate buffers are created and destroyed.
1463 if (hdr
->b_type
== ARC_BUFC_DATA
) {
1464 ARCSTAT_BUMP(arcstat_duplicate_buffers
);
1465 ARCSTAT_INCR(arcstat_duplicate_buffers_size
, size
);
1467 hdr
->b_datacnt
+= 1;
1472 arc_buf_add_ref(arc_buf_t
*buf
, void* tag
)
1475 kmutex_t
*hash_lock
;
1478 * Check to see if this buffer is evicted. Callers
1479 * must verify b_data != NULL to know if the add_ref
1482 mutex_enter(&buf
->b_evict_lock
);
1483 if (buf
->b_data
== NULL
) {
1484 mutex_exit(&buf
->b_evict_lock
);
1487 hash_lock
= HDR_LOCK(buf
->b_hdr
);
1488 mutex_enter(hash_lock
);
1490 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
1491 mutex_exit(&buf
->b_evict_lock
);
1493 ASSERT(hdr
->b_state
== arc_mru
|| hdr
->b_state
== arc_mfu
);
1494 add_reference(hdr
, hash_lock
, tag
);
1495 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
1496 arc_access(hdr
, hash_lock
);
1497 mutex_exit(hash_lock
);
1498 ARCSTAT_BUMP(arcstat_hits
);
1499 ARCSTAT_CONDSTAT(!(hdr
->b_flags
& ARC_PREFETCH
),
1500 demand
, prefetch
, hdr
->b_type
!= ARC_BUFC_METADATA
,
1501 data
, metadata
, hits
);
1505 * Free the arc data buffer. If it is an l2arc write in progress,
1506 * the buffer is placed on l2arc_free_on_write to be freed later.
1509 arc_buf_data_free(arc_buf_hdr_t
*hdr
, void (*free_func
)(void *, size_t),
1510 void *data
, size_t size
)
1512 if (HDR_L2_WRITING(hdr
)) {
1513 l2arc_data_free_t
*df
;
1514 df
= kmem_alloc(sizeof (l2arc_data_free_t
), KM_PUSHPAGE
);
1515 df
->l2df_data
= data
;
1516 df
->l2df_size
= size
;
1517 df
->l2df_func
= free_func
;
1518 mutex_enter(&l2arc_free_on_write_mtx
);
1519 list_insert_head(l2arc_free_on_write
, df
);
1520 mutex_exit(&l2arc_free_on_write_mtx
);
1521 ARCSTAT_BUMP(arcstat_l2_free_on_write
);
1523 free_func(data
, size
);
1528 arc_buf_destroy(arc_buf_t
*buf
, boolean_t recycle
, boolean_t all
)
1532 /* free up data associated with the buf */
1534 arc_state_t
*state
= buf
->b_hdr
->b_state
;
1535 uint64_t size
= buf
->b_hdr
->b_size
;
1536 arc_buf_contents_t type
= buf
->b_hdr
->b_type
;
1538 arc_cksum_verify(buf
);
1541 if (type
== ARC_BUFC_METADATA
) {
1542 arc_buf_data_free(buf
->b_hdr
, zio_buf_free
,
1544 arc_space_return(size
, ARC_SPACE_DATA
);
1546 ASSERT(type
== ARC_BUFC_DATA
);
1547 arc_buf_data_free(buf
->b_hdr
,
1548 zio_data_buf_free
, buf
->b_data
, size
);
1549 ARCSTAT_INCR(arcstat_data_size
, -size
);
1550 atomic_add_64(&arc_size
, -size
);
1553 if (list_link_active(&buf
->b_hdr
->b_arc_node
)) {
1554 uint64_t *cnt
= &state
->arcs_lsize
[type
];
1556 ASSERT(refcount_is_zero(&buf
->b_hdr
->b_refcnt
));
1557 ASSERT(state
!= arc_anon
);
1559 ASSERT3U(*cnt
, >=, size
);
1560 atomic_add_64(cnt
, -size
);
1562 ASSERT3U(state
->arcs_size
, >=, size
);
1563 atomic_add_64(&state
->arcs_size
, -size
);
1567 * If we're destroying a duplicate buffer make sure
1568 * that the appropriate statistics are updated.
1570 if (buf
->b_hdr
->b_datacnt
> 1 &&
1571 buf
->b_hdr
->b_type
== ARC_BUFC_DATA
) {
1572 ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers
);
1573 ARCSTAT_INCR(arcstat_duplicate_buffers_size
, -size
);
1575 ASSERT(buf
->b_hdr
->b_datacnt
> 0);
1576 buf
->b_hdr
->b_datacnt
-= 1;
1579 /* only remove the buf if requested */
1583 /* remove the buf from the hdr list */
1584 for (bufp
= &buf
->b_hdr
->b_buf
; *bufp
!= buf
; bufp
= &(*bufp
)->b_next
)
1586 *bufp
= buf
->b_next
;
1589 ASSERT(buf
->b_efunc
== NULL
);
1591 /* clean up the buf */
1593 kmem_cache_free(buf_cache
, buf
);
1597 arc_hdr_destroy(arc_buf_hdr_t
*hdr
)
1599 l2arc_buf_hdr_t
*l2hdr
= hdr
->b_l2hdr
;
1601 ASSERT(refcount_is_zero(&hdr
->b_refcnt
));
1602 ASSERT3P(hdr
->b_state
, ==, arc_anon
);
1603 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
1605 if (l2hdr
!= NULL
) {
1606 boolean_t buflist_held
= MUTEX_HELD(&l2arc_buflist_mtx
);
1608 * To prevent arc_free() and l2arc_evict() from
1609 * attempting to free the same buffer at the same time,
1610 * a FREE_IN_PROGRESS flag is given to arc_free() to
1611 * give it priority. l2arc_evict() can't destroy this
1612 * header while we are waiting on l2arc_buflist_mtx.
1614 * The hdr may be removed from l2ad_buflist before we
1615 * grab l2arc_buflist_mtx, so b_l2hdr is rechecked.
1617 if (!buflist_held
) {
1618 mutex_enter(&l2arc_buflist_mtx
);
1619 l2hdr
= hdr
->b_l2hdr
;
1622 if (l2hdr
!= NULL
) {
1623 list_remove(l2hdr
->b_dev
->l2ad_buflist
, hdr
);
1624 ARCSTAT_INCR(arcstat_l2_size
, -hdr
->b_size
);
1625 ARCSTAT_INCR(arcstat_l2_asize
, -l2hdr
->b_asize
);
1626 kmem_free(l2hdr
, sizeof (l2arc_buf_hdr_t
));
1627 arc_space_return(L2HDR_SIZE
, ARC_SPACE_L2HDRS
);
1628 if (hdr
->b_state
== arc_l2c_only
)
1629 l2arc_hdr_stat_remove();
1630 hdr
->b_l2hdr
= NULL
;
1634 mutex_exit(&l2arc_buflist_mtx
);
1637 if (!BUF_EMPTY(hdr
)) {
1638 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
1639 buf_discard_identity(hdr
);
1641 while (hdr
->b_buf
) {
1642 arc_buf_t
*buf
= hdr
->b_buf
;
1645 mutex_enter(&arc_eviction_mtx
);
1646 mutex_enter(&buf
->b_evict_lock
);
1647 ASSERT(buf
->b_hdr
!= NULL
);
1648 arc_buf_destroy(hdr
->b_buf
, FALSE
, FALSE
);
1649 hdr
->b_buf
= buf
->b_next
;
1650 buf
->b_hdr
= &arc_eviction_hdr
;
1651 buf
->b_next
= arc_eviction_list
;
1652 arc_eviction_list
= buf
;
1653 mutex_exit(&buf
->b_evict_lock
);
1654 mutex_exit(&arc_eviction_mtx
);
1656 arc_buf_destroy(hdr
->b_buf
, FALSE
, TRUE
);
1659 if (hdr
->b_freeze_cksum
!= NULL
) {
1660 kmem_free(hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
1661 hdr
->b_freeze_cksum
= NULL
;
1664 ASSERT(!list_link_active(&hdr
->b_arc_node
));
1665 ASSERT3P(hdr
->b_hash_next
, ==, NULL
);
1666 ASSERT3P(hdr
->b_acb
, ==, NULL
);
1667 kmem_cache_free(hdr_cache
, hdr
);
1671 arc_buf_free(arc_buf_t
*buf
, void *tag
)
1673 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1674 int hashed
= hdr
->b_state
!= arc_anon
;
1676 ASSERT(buf
->b_efunc
== NULL
);
1677 ASSERT(buf
->b_data
!= NULL
);
1680 kmutex_t
*hash_lock
= HDR_LOCK(hdr
);
1682 mutex_enter(hash_lock
);
1684 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
1686 (void) remove_reference(hdr
, hash_lock
, tag
);
1687 if (hdr
->b_datacnt
> 1) {
1688 arc_buf_destroy(buf
, FALSE
, TRUE
);
1690 ASSERT(buf
== hdr
->b_buf
);
1691 ASSERT(buf
->b_efunc
== NULL
);
1692 hdr
->b_flags
|= ARC_BUF_AVAILABLE
;
1694 mutex_exit(hash_lock
);
1695 } else if (HDR_IO_IN_PROGRESS(hdr
)) {
1698 * We are in the middle of an async write. Don't destroy
1699 * this buffer unless the write completes before we finish
1700 * decrementing the reference count.
1702 mutex_enter(&arc_eviction_mtx
);
1703 (void) remove_reference(hdr
, NULL
, tag
);
1704 ASSERT(refcount_is_zero(&hdr
->b_refcnt
));
1705 destroy_hdr
= !HDR_IO_IN_PROGRESS(hdr
);
1706 mutex_exit(&arc_eviction_mtx
);
1708 arc_hdr_destroy(hdr
);
1710 if (remove_reference(hdr
, NULL
, tag
) > 0)
1711 arc_buf_destroy(buf
, FALSE
, TRUE
);
1713 arc_hdr_destroy(hdr
);
1718 arc_buf_remove_ref(arc_buf_t
*buf
, void* tag
)
1720 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1721 kmutex_t
*hash_lock
= NULL
;
1722 boolean_t no_callback
= (buf
->b_efunc
== NULL
);
1724 if (hdr
->b_state
== arc_anon
) {
1725 ASSERT(hdr
->b_datacnt
== 1);
1726 arc_buf_free(buf
, tag
);
1727 return (no_callback
);
1730 hash_lock
= HDR_LOCK(hdr
);
1731 mutex_enter(hash_lock
);
1733 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
1734 ASSERT(hdr
->b_state
!= arc_anon
);
1735 ASSERT(buf
->b_data
!= NULL
);
1737 (void) remove_reference(hdr
, hash_lock
, tag
);
1738 if (hdr
->b_datacnt
> 1) {
1740 arc_buf_destroy(buf
, FALSE
, TRUE
);
1741 } else if (no_callback
) {
1742 ASSERT(hdr
->b_buf
== buf
&& buf
->b_next
== NULL
);
1743 ASSERT(buf
->b_efunc
== NULL
);
1744 hdr
->b_flags
|= ARC_BUF_AVAILABLE
;
1746 ASSERT(no_callback
|| hdr
->b_datacnt
> 1 ||
1747 refcount_is_zero(&hdr
->b_refcnt
));
1748 mutex_exit(hash_lock
);
1749 return (no_callback
);
1753 arc_buf_size(arc_buf_t
*buf
)
1755 return (buf
->b_hdr
->b_size
);
1759 * Called from the DMU to determine if the current buffer should be
1760 * evicted. In order to ensure proper locking, the eviction must be initiated
1761 * from the DMU. Return true if the buffer is associated with user data and
1762 * duplicate buffers still exist.
1765 arc_buf_eviction_needed(arc_buf_t
*buf
)
1768 boolean_t evict_needed
= B_FALSE
;
1770 if (zfs_disable_dup_eviction
)
1773 mutex_enter(&buf
->b_evict_lock
);
1777 * We are in arc_do_user_evicts(); let that function
1778 * perform the eviction.
1780 ASSERT(buf
->b_data
== NULL
);
1781 mutex_exit(&buf
->b_evict_lock
);
1783 } else if (buf
->b_data
== NULL
) {
1785 * We have already been added to the arc eviction list;
1786 * recommend eviction.
1788 ASSERT3P(hdr
, ==, &arc_eviction_hdr
);
1789 mutex_exit(&buf
->b_evict_lock
);
1793 if (hdr
->b_datacnt
> 1 && hdr
->b_type
== ARC_BUFC_DATA
)
1794 evict_needed
= B_TRUE
;
1796 mutex_exit(&buf
->b_evict_lock
);
1797 return (evict_needed
);
1801 * Evict buffers from list until we've removed the specified number of
1802 * bytes. Move the removed buffers to the appropriate evict state.
1803 * If the recycle flag is set, then attempt to "recycle" a buffer:
1804 * - look for a buffer to evict that is `bytes' long.
1805 * - return the data block from this buffer rather than freeing it.
1806 * This flag is used by callers that are trying to make space for a
1807 * new buffer in a full arc cache.
1809 * This function makes a "best effort". It skips over any buffers
1810 * it can't get a hash_lock on, and so may not catch all candidates.
1811 * It may also return without evicting as much space as requested.
1814 arc_evict(arc_state_t
*state
, uint64_t spa
, int64_t bytes
, boolean_t recycle
,
1815 arc_buf_contents_t type
)
1817 arc_state_t
*evicted_state
;
1818 uint64_t bytes_evicted
= 0, skipped
= 0, missed
= 0;
1819 arc_buf_hdr_t
*ab
, *ab_prev
= NULL
;
1820 list_t
*list
= &state
->arcs_list
[type
];
1821 kmutex_t
*hash_lock
;
1822 boolean_t have_lock
;
1823 void *stolen
= NULL
;
1825 ASSERT(state
== arc_mru
|| state
== arc_mfu
);
1827 evicted_state
= (state
== arc_mru
) ? arc_mru_ghost
: arc_mfu_ghost
;
1829 mutex_enter(&state
->arcs_mtx
);
1830 mutex_enter(&evicted_state
->arcs_mtx
);
1832 for (ab
= list_tail(list
); ab
; ab
= ab_prev
) {
1833 ab_prev
= list_prev(list
, ab
);
1834 /* prefetch buffers have a minimum lifespan */
1835 if (HDR_IO_IN_PROGRESS(ab
) ||
1836 (spa
&& ab
->b_spa
!= spa
) ||
1837 (ab
->b_flags
& (ARC_PREFETCH
|ARC_INDIRECT
) &&
1838 ddi_get_lbolt() - ab
->b_arc_access
<
1839 zfs_arc_min_prefetch_lifespan
)) {
1843 /* "lookahead" for better eviction candidate */
1844 if (recycle
&& ab
->b_size
!= bytes
&&
1845 ab_prev
&& ab_prev
->b_size
== bytes
)
1847 hash_lock
= HDR_LOCK(ab
);
1848 have_lock
= MUTEX_HELD(hash_lock
);
1849 if (have_lock
|| mutex_tryenter(hash_lock
)) {
1850 ASSERT0(refcount_count(&ab
->b_refcnt
));
1851 ASSERT(ab
->b_datacnt
> 0);
1853 arc_buf_t
*buf
= ab
->b_buf
;
1854 if (!mutex_tryenter(&buf
->b_evict_lock
)) {
1859 bytes_evicted
+= ab
->b_size
;
1860 if (recycle
&& ab
->b_type
== type
&&
1861 ab
->b_size
== bytes
&&
1862 !HDR_L2_WRITING(ab
)) {
1863 stolen
= buf
->b_data
;
1868 mutex_enter(&arc_eviction_mtx
);
1869 arc_buf_destroy(buf
,
1870 buf
->b_data
== stolen
, FALSE
);
1871 ab
->b_buf
= buf
->b_next
;
1872 buf
->b_hdr
= &arc_eviction_hdr
;
1873 buf
->b_next
= arc_eviction_list
;
1874 arc_eviction_list
= buf
;
1875 mutex_exit(&arc_eviction_mtx
);
1876 mutex_exit(&buf
->b_evict_lock
);
1878 mutex_exit(&buf
->b_evict_lock
);
1879 arc_buf_destroy(buf
,
1880 buf
->b_data
== stolen
, TRUE
);
1885 ARCSTAT_INCR(arcstat_evict_l2_cached
,
1888 if (l2arc_write_eligible(ab
->b_spa
, ab
)) {
1889 ARCSTAT_INCR(arcstat_evict_l2_eligible
,
1893 arcstat_evict_l2_ineligible
,
1898 if (ab
->b_datacnt
== 0) {
1899 arc_change_state(evicted_state
, ab
, hash_lock
);
1900 ASSERT(HDR_IN_HASH_TABLE(ab
));
1901 ab
->b_flags
|= ARC_IN_HASH_TABLE
;
1902 ab
->b_flags
&= ~ARC_BUF_AVAILABLE
;
1903 DTRACE_PROBE1(arc__evict
, arc_buf_hdr_t
*, ab
);
1906 mutex_exit(hash_lock
);
1907 if (bytes
>= 0 && bytes_evicted
>= bytes
)
1914 mutex_exit(&evicted_state
->arcs_mtx
);
1915 mutex_exit(&state
->arcs_mtx
);
1917 if (bytes_evicted
< bytes
)
1918 dprintf("only evicted %lld bytes from %x\n",
1919 (longlong_t
)bytes_evicted
, state
);
1922 ARCSTAT_INCR(arcstat_evict_skip
, skipped
);
1925 ARCSTAT_INCR(arcstat_mutex_miss
, missed
);
1928 * We have just evicted some data into the ghost state, make
1929 * sure we also adjust the ghost state size if necessary.
1932 arc_mru_ghost
->arcs_size
+ arc_mfu_ghost
->arcs_size
> arc_c
) {
1933 int64_t mru_over
= arc_anon
->arcs_size
+ arc_mru
->arcs_size
+
1934 arc_mru_ghost
->arcs_size
- arc_c
;
1936 if (mru_over
> 0 && arc_mru_ghost
->arcs_lsize
[type
] > 0) {
1938 MIN(arc_mru_ghost
->arcs_lsize
[type
], mru_over
);
1939 arc_evict_ghost(arc_mru_ghost
, 0, todelete
,
1941 } else if (arc_mfu_ghost
->arcs_lsize
[type
] > 0) {
1942 int64_t todelete
= MIN(arc_mfu_ghost
->arcs_lsize
[type
],
1943 arc_mru_ghost
->arcs_size
+
1944 arc_mfu_ghost
->arcs_size
- arc_c
);
1945 arc_evict_ghost(arc_mfu_ghost
, 0, todelete
,
1954 * Remove buffers from list until we've removed the specified number of
1955 * bytes. Destroy the buffers that are removed.
1958 arc_evict_ghost(arc_state_t
*state
, uint64_t spa
, int64_t bytes
,
1959 arc_buf_contents_t type
)
1961 arc_buf_hdr_t
*ab
, *ab_prev
;
1962 arc_buf_hdr_t marker
;
1963 list_t
*list
= &state
->arcs_list
[type
];
1964 kmutex_t
*hash_lock
;
1965 uint64_t bytes_deleted
= 0;
1966 uint64_t bufs_skipped
= 0;
1968 ASSERT(GHOST_STATE(state
));
1969 bzero(&marker
, sizeof(marker
));
1971 mutex_enter(&state
->arcs_mtx
);
1972 for (ab
= list_tail(list
); ab
; ab
= ab_prev
) {
1973 ab_prev
= list_prev(list
, ab
);
1974 if (spa
&& ab
->b_spa
!= spa
)
1977 /* ignore markers */
1981 hash_lock
= HDR_LOCK(ab
);
1982 /* caller may be trying to modify this buffer, skip it */
1983 if (MUTEX_HELD(hash_lock
))
1985 if (mutex_tryenter(hash_lock
)) {
1986 ASSERT(!HDR_IO_IN_PROGRESS(ab
));
1987 ASSERT(ab
->b_buf
== NULL
);
1988 ARCSTAT_BUMP(arcstat_deleted
);
1989 bytes_deleted
+= ab
->b_size
;
1991 if (ab
->b_l2hdr
!= NULL
) {
1993 * This buffer is cached on the 2nd Level ARC;
1994 * don't destroy the header.
1996 arc_change_state(arc_l2c_only
, ab
, hash_lock
);
1997 mutex_exit(hash_lock
);
1999 arc_change_state(arc_anon
, ab
, hash_lock
);
2000 mutex_exit(hash_lock
);
2001 arc_hdr_destroy(ab
);
2004 DTRACE_PROBE1(arc__delete
, arc_buf_hdr_t
*, ab
);
2005 if (bytes
>= 0 && bytes_deleted
>= bytes
)
2007 } else if (bytes
< 0) {
2009 * Insert a list marker and then wait for the
2010 * hash lock to become available. Once its
2011 * available, restart from where we left off.
2013 list_insert_after(list
, ab
, &marker
);
2014 mutex_exit(&state
->arcs_mtx
);
2015 mutex_enter(hash_lock
);
2016 mutex_exit(hash_lock
);
2017 mutex_enter(&state
->arcs_mtx
);
2018 ab_prev
= list_prev(list
, &marker
);
2019 list_remove(list
, &marker
);
2023 mutex_exit(&state
->arcs_mtx
);
2025 if (list
== &state
->arcs_list
[ARC_BUFC_DATA
] &&
2026 (bytes
< 0 || bytes_deleted
< bytes
)) {
2027 list
= &state
->arcs_list
[ARC_BUFC_METADATA
];
2032 ARCSTAT_INCR(arcstat_mutex_miss
, bufs_skipped
);
2036 if (bytes_deleted
< bytes
)
2037 dprintf("only deleted %lld bytes from %p\n",
2038 (longlong_t
)bytes_deleted
, state
);
2044 int64_t adjustment
, delta
;
2050 adjustment
= MIN((int64_t)(arc_size
- arc_c
),
2051 (int64_t)(arc_anon
->arcs_size
+ arc_mru
->arcs_size
+ arc_meta_used
-
2054 if (adjustment
> 0 && arc_mru
->arcs_lsize
[ARC_BUFC_DATA
] > 0) {
2055 delta
= MIN(arc_mru
->arcs_lsize
[ARC_BUFC_DATA
], adjustment
);
2056 (void) arc_evict(arc_mru
, 0, delta
, FALSE
, ARC_BUFC_DATA
);
2057 adjustment
-= delta
;
2060 if (adjustment
> 0 && arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
] > 0) {
2061 delta
= MIN(arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
], adjustment
);
2062 (void) arc_evict(arc_mru
, 0, delta
, FALSE
,
2070 adjustment
= arc_size
- arc_c
;
2072 if (adjustment
> 0 && arc_mfu
->arcs_lsize
[ARC_BUFC_DATA
] > 0) {
2073 delta
= MIN(adjustment
, arc_mfu
->arcs_lsize
[ARC_BUFC_DATA
]);
2074 (void) arc_evict(arc_mfu
, 0, delta
, FALSE
, ARC_BUFC_DATA
);
2075 adjustment
-= delta
;
2078 if (adjustment
> 0 && arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
] > 0) {
2079 int64_t delta
= MIN(adjustment
,
2080 arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
]);
2081 (void) arc_evict(arc_mfu
, 0, delta
, FALSE
,
2086 * Adjust ghost lists
2089 adjustment
= arc_mru
->arcs_size
+ arc_mru_ghost
->arcs_size
- arc_c
;
2091 if (adjustment
> 0 && arc_mru_ghost
->arcs_size
> 0) {
2092 delta
= MIN(arc_mru_ghost
->arcs_size
, adjustment
);
2093 arc_evict_ghost(arc_mru_ghost
, 0, delta
, ARC_BUFC_DATA
);
2097 arc_mru_ghost
->arcs_size
+ arc_mfu_ghost
->arcs_size
- arc_c
;
2099 if (adjustment
> 0 && arc_mfu_ghost
->arcs_size
> 0) {
2100 delta
= MIN(arc_mfu_ghost
->arcs_size
, adjustment
);
2101 arc_evict_ghost(arc_mfu_ghost
, 0, delta
, ARC_BUFC_DATA
);
2106 * Request that arc user drop references so that N bytes can be released
2107 * from the cache. This provides a mechanism to ensure the arc can honor
2108 * the arc_meta_limit and reclaim buffers which are pinned in the cache
2109 * by higher layers. (i.e. the zpl)
2112 arc_do_user_prune(int64_t adjustment
)
2114 arc_prune_func_t
*func
;
2116 arc_prune_t
*cp
, *np
;
2118 mutex_enter(&arc_prune_mtx
);
2120 cp
= list_head(&arc_prune_list
);
2121 while (cp
!= NULL
) {
2123 private = cp
->p_private
;
2124 np
= list_next(&arc_prune_list
, cp
);
2125 refcount_add(&cp
->p_refcnt
, func
);
2126 mutex_exit(&arc_prune_mtx
);
2129 func(adjustment
, private);
2131 mutex_enter(&arc_prune_mtx
);
2133 /* User removed prune callback concurrently with execution */
2134 if (refcount_remove(&cp
->p_refcnt
, func
) == 0) {
2135 ASSERT(!list_link_active(&cp
->p_node
));
2136 refcount_destroy(&cp
->p_refcnt
);
2137 kmem_free(cp
, sizeof (*cp
));
2143 ARCSTAT_BUMP(arcstat_prune
);
2144 mutex_exit(&arc_prune_mtx
);
2148 arc_do_user_evicts(void)
2150 mutex_enter(&arc_eviction_mtx
);
2151 while (arc_eviction_list
!= NULL
) {
2152 arc_buf_t
*buf
= arc_eviction_list
;
2153 arc_eviction_list
= buf
->b_next
;
2154 mutex_enter(&buf
->b_evict_lock
);
2156 mutex_exit(&buf
->b_evict_lock
);
2157 mutex_exit(&arc_eviction_mtx
);
2159 if (buf
->b_efunc
!= NULL
)
2160 VERIFY(buf
->b_efunc(buf
) == 0);
2162 buf
->b_efunc
= NULL
;
2163 buf
->b_private
= NULL
;
2164 kmem_cache_free(buf_cache
, buf
);
2165 mutex_enter(&arc_eviction_mtx
);
2167 mutex_exit(&arc_eviction_mtx
);
2171 * Evict only meta data objects from the cache leaving the data objects.
2172 * This is only used to enforce the tunable arc_meta_limit, if we are
2173 * unable to evict enough buffers notify the user via the prune callback.
2176 arc_adjust_meta(int64_t adjustment
, boolean_t may_prune
)
2180 if (adjustment
> 0 && arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
] > 0) {
2181 delta
= MIN(arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
], adjustment
);
2182 arc_evict(arc_mru
, 0, delta
, FALSE
, ARC_BUFC_METADATA
);
2183 adjustment
-= delta
;
2186 if (adjustment
> 0 && arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
] > 0) {
2187 delta
= MIN(arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
], adjustment
);
2188 arc_evict(arc_mfu
, 0, delta
, FALSE
, ARC_BUFC_METADATA
);
2189 adjustment
-= delta
;
2192 if (may_prune
&& (adjustment
> 0) && (arc_meta_used
> arc_meta_limit
))
2193 arc_do_user_prune(zfs_arc_meta_prune
);
2197 * Flush all *evictable* data from the cache for the given spa.
2198 * NOTE: this will not touch "active" (i.e. referenced) data.
2201 arc_flush(spa_t
*spa
)
2206 guid
= spa_load_guid(spa
);
2208 while (list_head(&arc_mru
->arcs_list
[ARC_BUFC_DATA
])) {
2209 (void) arc_evict(arc_mru
, guid
, -1, FALSE
, ARC_BUFC_DATA
);
2213 while (list_head(&arc_mru
->arcs_list
[ARC_BUFC_METADATA
])) {
2214 (void) arc_evict(arc_mru
, guid
, -1, FALSE
, ARC_BUFC_METADATA
);
2218 while (list_head(&arc_mfu
->arcs_list
[ARC_BUFC_DATA
])) {
2219 (void) arc_evict(arc_mfu
, guid
, -1, FALSE
, ARC_BUFC_DATA
);
2223 while (list_head(&arc_mfu
->arcs_list
[ARC_BUFC_METADATA
])) {
2224 (void) arc_evict(arc_mfu
, guid
, -1, FALSE
, ARC_BUFC_METADATA
);
2229 arc_evict_ghost(arc_mru_ghost
, guid
, -1, ARC_BUFC_DATA
);
2230 arc_evict_ghost(arc_mfu_ghost
, guid
, -1, ARC_BUFC_DATA
);
2232 mutex_enter(&arc_reclaim_thr_lock
);
2233 arc_do_user_evicts();
2234 mutex_exit(&arc_reclaim_thr_lock
);
2235 ASSERT(spa
|| arc_eviction_list
== NULL
);
2239 arc_shrink(uint64_t bytes
)
2241 if (arc_c
> arc_c_min
) {
2244 to_free
= bytes
? bytes
: arc_c
>> zfs_arc_shrink_shift
;
2246 if (arc_c
> arc_c_min
+ to_free
)
2247 atomic_add_64(&arc_c
, -to_free
);
2251 atomic_add_64(&arc_p
, -(arc_p
>> zfs_arc_shrink_shift
));
2252 if (arc_c
> arc_size
)
2253 arc_c
= MAX(arc_size
, arc_c_min
);
2255 arc_p
= (arc_c
>> 1);
2256 ASSERT(arc_c
>= arc_c_min
);
2257 ASSERT((int64_t)arc_p
>= 0);
2260 if (arc_size
> arc_c
)
2265 arc_kmem_reap_now(arc_reclaim_strategy_t strat
, uint64_t bytes
)
2268 kmem_cache_t
*prev_cache
= NULL
;
2269 kmem_cache_t
*prev_data_cache
= NULL
;
2270 extern kmem_cache_t
*zio_buf_cache
[];
2271 extern kmem_cache_t
*zio_data_buf_cache
[];
2274 * An aggressive reclamation will shrink the cache size as well as
2275 * reap free buffers from the arc kmem caches.
2277 if (strat
== ARC_RECLAIM_AGGR
)
2280 for (i
= 0; i
< SPA_MAXBLOCKSIZE
>> SPA_MINBLOCKSHIFT
; i
++) {
2281 if (zio_buf_cache
[i
] != prev_cache
) {
2282 prev_cache
= zio_buf_cache
[i
];
2283 kmem_cache_reap_now(zio_buf_cache
[i
]);
2285 if (zio_data_buf_cache
[i
] != prev_data_cache
) {
2286 prev_data_cache
= zio_data_buf_cache
[i
];
2287 kmem_cache_reap_now(zio_data_buf_cache
[i
]);
2291 kmem_cache_reap_now(buf_cache
);
2292 kmem_cache_reap_now(hdr_cache
);
2296 * Unlike other ZFS implementations this thread is only responsible for
2297 * adapting the target ARC size on Linux. The responsibility for memory
2298 * reclamation has been entirely delegated to the arc_shrinker_func()
2299 * which is registered with the VM. To reflect this change in behavior
2300 * the arc_reclaim thread has been renamed to arc_adapt.
2303 arc_adapt_thread(void)
2308 CALLB_CPR_INIT(&cpr
, &arc_reclaim_thr_lock
, callb_generic_cpr
, FTAG
);
2310 mutex_enter(&arc_reclaim_thr_lock
);
2311 while (arc_thread_exit
== 0) {
2313 arc_reclaim_strategy_t last_reclaim
= ARC_RECLAIM_CONS
;
2315 if (spa_get_random(100) == 0) {
2318 if (last_reclaim
== ARC_RECLAIM_CONS
) {
2319 last_reclaim
= ARC_RECLAIM_AGGR
;
2321 last_reclaim
= ARC_RECLAIM_CONS
;
2325 last_reclaim
= ARC_RECLAIM_AGGR
;
2329 /* reset the growth delay for every reclaim */
2330 arc_grow_time
= ddi_get_lbolt()+(zfs_arc_grow_retry
* hz
);
2332 arc_kmem_reap_now(last_reclaim
, 0);
2335 #endif /* !_KERNEL */
2337 /* No recent memory pressure allow the ARC to grow. */
2338 if (arc_no_grow
&& ddi_get_lbolt() >= arc_grow_time
)
2339 arc_no_grow
= FALSE
;
2342 * Keep meta data usage within limits, arc_shrink() is not
2343 * used to avoid collapsing the arc_c value when only the
2344 * arc_meta_limit is being exceeded.
2346 prune
= (int64_t)arc_meta_used
- (int64_t)arc_meta_limit
;
2348 arc_adjust_meta(prune
, B_TRUE
);
2352 if (arc_eviction_list
!= NULL
)
2353 arc_do_user_evicts();
2355 /* block until needed, or one second, whichever is shorter */
2356 CALLB_CPR_SAFE_BEGIN(&cpr
);
2357 (void) cv_timedwait_interruptible(&arc_reclaim_thr_cv
,
2358 &arc_reclaim_thr_lock
, (ddi_get_lbolt() + hz
));
2359 CALLB_CPR_SAFE_END(&cpr
, &arc_reclaim_thr_lock
);
2362 /* Allow the module options to be changed */
2363 if (zfs_arc_max
> 64 << 20 &&
2364 zfs_arc_max
< physmem
* PAGESIZE
&&
2365 zfs_arc_max
!= arc_c_max
)
2366 arc_c_max
= zfs_arc_max
;
2368 if (zfs_arc_min
> 0 &&
2369 zfs_arc_min
< arc_c_max
&&
2370 zfs_arc_min
!= arc_c_min
)
2371 arc_c_min
= zfs_arc_min
;
2373 if (zfs_arc_meta_limit
> 0 &&
2374 zfs_arc_meta_limit
<= arc_c_max
&&
2375 zfs_arc_meta_limit
!= arc_meta_limit
)
2376 arc_meta_limit
= zfs_arc_meta_limit
;
2382 arc_thread_exit
= 0;
2383 cv_broadcast(&arc_reclaim_thr_cv
);
2384 CALLB_CPR_EXIT(&cpr
); /* drops arc_reclaim_thr_lock */
2390 * Determine the amount of memory eligible for eviction contained in the
2391 * ARC. All clean data reported by the ghost lists can always be safely
2392 * evicted. Due to arc_c_min, the same does not hold for all clean data
2393 * contained by the regular mru and mfu lists.
2395 * In the case of the regular mru and mfu lists, we need to report as
2396 * much clean data as possible, such that evicting that same reported
2397 * data will not bring arc_size below arc_c_min. Thus, in certain
2398 * circumstances, the total amount of clean data in the mru and mfu
2399 * lists might not actually be evictable.
2401 * The following two distinct cases are accounted for:
2403 * 1. The sum of the amount of dirty data contained by both the mru and
2404 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
2405 * is greater than or equal to arc_c_min.
2406 * (i.e. amount of dirty data >= arc_c_min)
2408 * This is the easy case; all clean data contained by the mru and mfu
2409 * lists is evictable. Evicting all clean data can only drop arc_size
2410 * to the amount of dirty data, which is greater than arc_c_min.
2412 * 2. The sum of the amount of dirty data contained by both the mru and
2413 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
2414 * is less than arc_c_min.
2415 * (i.e. arc_c_min > amount of dirty data)
2417 * 2.1. arc_size is greater than or equal arc_c_min.
2418 * (i.e. arc_size >= arc_c_min > amount of dirty data)
2420 * In this case, not all clean data from the regular mru and mfu
2421 * lists is actually evictable; we must leave enough clean data
2422 * to keep arc_size above arc_c_min. Thus, the maximum amount of
2423 * evictable data from the two lists combined, is exactly the
2424 * difference between arc_size and arc_c_min.
2426 * 2.2. arc_size is less than arc_c_min
2427 * (i.e. arc_c_min > arc_size > amount of dirty data)
2429 * In this case, none of the data contained in the mru and mfu
2430 * lists is evictable, even if it's clean. Since arc_size is
2431 * already below arc_c_min, evicting any more would only
2432 * increase this negative difference.
2435 arc_evictable_memory(void) {
2436 uint64_t arc_clean
=
2437 arc_mru
->arcs_lsize
[ARC_BUFC_DATA
] +
2438 arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
] +
2439 arc_mfu
->arcs_lsize
[ARC_BUFC_DATA
] +
2440 arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
];
2441 uint64_t ghost_clean
=
2442 arc_mru_ghost
->arcs_lsize
[ARC_BUFC_DATA
] +
2443 arc_mru_ghost
->arcs_lsize
[ARC_BUFC_METADATA
] +
2444 arc_mfu_ghost
->arcs_lsize
[ARC_BUFC_DATA
] +
2445 arc_mfu_ghost
->arcs_lsize
[ARC_BUFC_METADATA
];
2446 uint64_t arc_dirty
= MAX((int64_t)arc_size
- (int64_t)arc_clean
, 0);
2448 if (arc_dirty
>= arc_c_min
)
2449 return (ghost_clean
+ arc_clean
);
2451 return (ghost_clean
+ MAX((int64_t)arc_size
- (int64_t)arc_c_min
, 0));
2455 __arc_shrinker_func(struct shrinker
*shrink
, struct shrink_control
*sc
)
2459 /* The arc is considered warm once reclaim has occurred */
2460 if (unlikely(arc_warm
== B_FALSE
))
2463 /* Return the potential number of reclaimable pages */
2464 pages
= btop(arc_evictable_memory());
2465 if (sc
->nr_to_scan
== 0)
2468 /* Not allowed to perform filesystem reclaim */
2469 if (!(sc
->gfp_mask
& __GFP_FS
))
2472 /* Reclaim in progress */
2473 if (mutex_tryenter(&arc_reclaim_thr_lock
) == 0)
2477 * Evict the requested number of pages by shrinking arc_c the
2478 * requested amount. If there is nothing left to evict just
2479 * reap whatever we can from the various arc slabs.
2482 arc_kmem_reap_now(ARC_RECLAIM_AGGR
, ptob(sc
->nr_to_scan
));
2484 arc_kmem_reap_now(ARC_RECLAIM_CONS
, ptob(sc
->nr_to_scan
));
2488 * When direct reclaim is observed it usually indicates a rapid
2489 * increase in memory pressure. This occurs because the kswapd
2490 * threads were unable to asynchronously keep enough free memory
2491 * available. In this case set arc_no_grow to briefly pause arc
2492 * growth to avoid compounding the memory pressure.
2494 if (current_is_kswapd()) {
2495 ARCSTAT_BUMP(arcstat_memory_indirect_count
);
2497 arc_no_grow
= B_TRUE
;
2498 arc_grow_time
= ddi_get_lbolt() + (zfs_arc_grow_retry
* hz
);
2499 ARCSTAT_BUMP(arcstat_memory_direct_count
);
2502 mutex_exit(&arc_reclaim_thr_lock
);
2506 SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func
);
2508 SPL_SHRINKER_DECLARE(arc_shrinker
, arc_shrinker_func
, DEFAULT_SEEKS
);
2509 #endif /* _KERNEL */
2512 * Adapt arc info given the number of bytes we are trying to add and
2513 * the state that we are comming from. This function is only called
2514 * when we are adding new content to the cache.
2517 arc_adapt(int bytes
, arc_state_t
*state
)
2520 uint64_t arc_p_min
= (arc_c
>> zfs_arc_p_min_shift
);
2522 if (state
== arc_l2c_only
)
2527 * Adapt the target size of the MRU list:
2528 * - if we just hit in the MRU ghost list, then increase
2529 * the target size of the MRU list.
2530 * - if we just hit in the MFU ghost list, then increase
2531 * the target size of the MFU list by decreasing the
2532 * target size of the MRU list.
2534 if (state
== arc_mru_ghost
) {
2535 mult
= ((arc_mru_ghost
->arcs_size
>= arc_mfu_ghost
->arcs_size
) ?
2536 1 : (arc_mfu_ghost
->arcs_size
/arc_mru_ghost
->arcs_size
));
2537 mult
= MIN(mult
, 10); /* avoid wild arc_p adjustment */
2539 arc_p
= MIN(arc_c
- arc_p_min
, arc_p
+ bytes
* mult
);
2540 } else if (state
== arc_mfu_ghost
) {
2543 mult
= ((arc_mfu_ghost
->arcs_size
>= arc_mru_ghost
->arcs_size
) ?
2544 1 : (arc_mru_ghost
->arcs_size
/arc_mfu_ghost
->arcs_size
));
2545 mult
= MIN(mult
, 10);
2547 delta
= MIN(bytes
* mult
, arc_p
);
2548 arc_p
= MAX(arc_p_min
, arc_p
- delta
);
2550 ASSERT((int64_t)arc_p
>= 0);
2555 if (arc_c
>= arc_c_max
)
2559 * If we're within (2 * maxblocksize) bytes of the target
2560 * cache size, increment the target cache size
2562 if (arc_size
> arc_c
- (2ULL << SPA_MAXBLOCKSHIFT
)) {
2563 atomic_add_64(&arc_c
, (int64_t)bytes
);
2564 if (arc_c
> arc_c_max
)
2566 else if (state
== arc_anon
)
2567 atomic_add_64(&arc_p
, (int64_t)bytes
);
2571 ASSERT((int64_t)arc_p
>= 0);
2575 * Check if the cache has reached its limits and eviction is required
2579 arc_evict_needed(arc_buf_contents_t type
)
2581 if (type
== ARC_BUFC_METADATA
&& arc_meta_used
>= arc_meta_limit
)
2587 return (arc_size
> arc_c
);
2591 * The buffer, supplied as the first argument, needs a data block.
2592 * So, if we are at cache max, determine which cache should be victimized.
2593 * We have the following cases:
2595 * 1. Insert for MRU, p > sizeof(arc_anon + arc_mru) ->
2596 * In this situation if we're out of space, but the resident size of the MFU is
2597 * under the limit, victimize the MFU cache to satisfy this insertion request.
2599 * 2. Insert for MRU, p <= sizeof(arc_anon + arc_mru) ->
2600 * Here, we've used up all of the available space for the MRU, so we need to
2601 * evict from our own cache instead. Evict from the set of resident MRU
2604 * 3. Insert for MFU (c - p) > sizeof(arc_mfu) ->
2605 * c minus p represents the MFU space in the cache, since p is the size of the
2606 * cache that is dedicated to the MRU. In this situation there's still space on
2607 * the MFU side, so the MRU side needs to be victimized.
2609 * 4. Insert for MFU (c - p) < sizeof(arc_mfu) ->
2610 * MFU's resident set is consuming more space than it has been allotted. In
2611 * this situation, we must victimize our own cache, the MFU, for this insertion.
2614 arc_get_data_buf(arc_buf_t
*buf
)
2616 arc_state_t
*state
= buf
->b_hdr
->b_state
;
2617 uint64_t size
= buf
->b_hdr
->b_size
;
2618 arc_buf_contents_t type
= buf
->b_hdr
->b_type
;
2620 arc_adapt(size
, state
);
2623 * We have not yet reached cache maximum size,
2624 * just allocate a new buffer.
2626 if (!arc_evict_needed(type
)) {
2627 if (type
== ARC_BUFC_METADATA
) {
2628 buf
->b_data
= zio_buf_alloc(size
);
2629 arc_space_consume(size
, ARC_SPACE_DATA
);
2631 ASSERT(type
== ARC_BUFC_DATA
);
2632 buf
->b_data
= zio_data_buf_alloc(size
);
2633 ARCSTAT_INCR(arcstat_data_size
, size
);
2634 atomic_add_64(&arc_size
, size
);
2640 * If we are prefetching from the mfu ghost list, this buffer
2641 * will end up on the mru list; so steal space from there.
2643 if (state
== arc_mfu_ghost
)
2644 state
= buf
->b_hdr
->b_flags
& ARC_PREFETCH
? arc_mru
: arc_mfu
;
2645 else if (state
== arc_mru_ghost
)
2648 if (state
== arc_mru
|| state
== arc_anon
) {
2649 uint64_t mru_used
= arc_anon
->arcs_size
+ arc_mru
->arcs_size
;
2650 state
= (arc_mfu
->arcs_lsize
[type
] >= size
&&
2651 arc_p
> mru_used
) ? arc_mfu
: arc_mru
;
2654 uint64_t mfu_space
= arc_c
- arc_p
;
2655 state
= (arc_mru
->arcs_lsize
[type
] >= size
&&
2656 mfu_space
> arc_mfu
->arcs_size
) ? arc_mru
: arc_mfu
;
2659 if ((buf
->b_data
= arc_evict(state
, 0, size
, TRUE
, type
)) == NULL
) {
2660 if (type
== ARC_BUFC_METADATA
) {
2661 buf
->b_data
= zio_buf_alloc(size
);
2662 arc_space_consume(size
, ARC_SPACE_DATA
);
2665 * If we are unable to recycle an existing meta buffer
2666 * signal the reclaim thread. It will notify users
2667 * via the prune callback to drop references. The
2668 * prune callback in run in the context of the reclaim
2669 * thread to avoid deadlocking on the hash_lock.
2671 cv_signal(&arc_reclaim_thr_cv
);
2673 ASSERT(type
== ARC_BUFC_DATA
);
2674 buf
->b_data
= zio_data_buf_alloc(size
);
2675 ARCSTAT_INCR(arcstat_data_size
, size
);
2676 atomic_add_64(&arc_size
, size
);
2679 ARCSTAT_BUMP(arcstat_recycle_miss
);
2681 ASSERT(buf
->b_data
!= NULL
);
2684 * Update the state size. Note that ghost states have a
2685 * "ghost size" and so don't need to be updated.
2687 if (!GHOST_STATE(buf
->b_hdr
->b_state
)) {
2688 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
2690 atomic_add_64(&hdr
->b_state
->arcs_size
, size
);
2691 if (list_link_active(&hdr
->b_arc_node
)) {
2692 ASSERT(refcount_is_zero(&hdr
->b_refcnt
));
2693 atomic_add_64(&hdr
->b_state
->arcs_lsize
[type
], size
);
2696 * If we are growing the cache, and we are adding anonymous
2697 * data, and we have outgrown arc_p, update arc_p
2699 if (arc_size
< arc_c
&& hdr
->b_state
== arc_anon
&&
2700 arc_anon
->arcs_size
+ arc_mru
->arcs_size
> arc_p
)
2701 arc_p
= MIN(arc_c
, arc_p
+ size
);
2706 * This routine is called whenever a buffer is accessed.
2707 * NOTE: the hash lock is dropped in this function.
2710 arc_access(arc_buf_hdr_t
*buf
, kmutex_t
*hash_lock
)
2714 ASSERT(MUTEX_HELD(hash_lock
));
2716 if (buf
->b_state
== arc_anon
) {
2718 * This buffer is not in the cache, and does not
2719 * appear in our "ghost" list. Add the new buffer
2723 ASSERT(buf
->b_arc_access
== 0);
2724 buf
->b_arc_access
= ddi_get_lbolt();
2725 DTRACE_PROBE1(new_state__mru
, arc_buf_hdr_t
*, buf
);
2726 arc_change_state(arc_mru
, buf
, hash_lock
);
2728 } else if (buf
->b_state
== arc_mru
) {
2729 now
= ddi_get_lbolt();
2732 * If this buffer is here because of a prefetch, then either:
2733 * - clear the flag if this is a "referencing" read
2734 * (any subsequent access will bump this into the MFU state).
2736 * - move the buffer to the head of the list if this is
2737 * another prefetch (to make it less likely to be evicted).
2739 if ((buf
->b_flags
& ARC_PREFETCH
) != 0) {
2740 if (refcount_count(&buf
->b_refcnt
) == 0) {
2741 ASSERT(list_link_active(&buf
->b_arc_node
));
2743 buf
->b_flags
&= ~ARC_PREFETCH
;
2744 atomic_inc_32(&buf
->b_mru_hits
);
2745 ARCSTAT_BUMP(arcstat_mru_hits
);
2747 buf
->b_arc_access
= now
;
2752 * This buffer has been "accessed" only once so far,
2753 * but it is still in the cache. Move it to the MFU
2756 if (now
> buf
->b_arc_access
+ ARC_MINTIME
) {
2758 * More than 125ms have passed since we
2759 * instantiated this buffer. Move it to the
2760 * most frequently used state.
2762 buf
->b_arc_access
= now
;
2763 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, buf
);
2764 arc_change_state(arc_mfu
, buf
, hash_lock
);
2766 atomic_inc_32(&buf
->b_mru_hits
);
2767 ARCSTAT_BUMP(arcstat_mru_hits
);
2768 } else if (buf
->b_state
== arc_mru_ghost
) {
2769 arc_state_t
*new_state
;
2771 * This buffer has been "accessed" recently, but
2772 * was evicted from the cache. Move it to the
2776 if (buf
->b_flags
& ARC_PREFETCH
) {
2777 new_state
= arc_mru
;
2778 if (refcount_count(&buf
->b_refcnt
) > 0)
2779 buf
->b_flags
&= ~ARC_PREFETCH
;
2780 DTRACE_PROBE1(new_state__mru
, arc_buf_hdr_t
*, buf
);
2782 new_state
= arc_mfu
;
2783 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, buf
);
2786 buf
->b_arc_access
= ddi_get_lbolt();
2787 arc_change_state(new_state
, buf
, hash_lock
);
2789 atomic_inc_32(&buf
->b_mru_ghost_hits
);
2790 ARCSTAT_BUMP(arcstat_mru_ghost_hits
);
2791 } else if (buf
->b_state
== arc_mfu
) {
2793 * This buffer has been accessed more than once and is
2794 * still in the cache. Keep it in the MFU state.
2796 * NOTE: an add_reference() that occurred when we did
2797 * the arc_read() will have kicked this off the list.
2798 * If it was a prefetch, we will explicitly move it to
2799 * the head of the list now.
2801 if ((buf
->b_flags
& ARC_PREFETCH
) != 0) {
2802 ASSERT(refcount_count(&buf
->b_refcnt
) == 0);
2803 ASSERT(list_link_active(&buf
->b_arc_node
));
2805 atomic_inc_32(&buf
->b_mfu_hits
);
2806 ARCSTAT_BUMP(arcstat_mfu_hits
);
2807 buf
->b_arc_access
= ddi_get_lbolt();
2808 } else if (buf
->b_state
== arc_mfu_ghost
) {
2809 arc_state_t
*new_state
= arc_mfu
;
2811 * This buffer has been accessed more than once but has
2812 * been evicted from the cache. Move it back to the
2816 if (buf
->b_flags
& ARC_PREFETCH
) {
2818 * This is a prefetch access...
2819 * move this block back to the MRU state.
2821 ASSERT0(refcount_count(&buf
->b_refcnt
));
2822 new_state
= arc_mru
;
2825 buf
->b_arc_access
= ddi_get_lbolt();
2826 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, buf
);
2827 arc_change_state(new_state
, buf
, hash_lock
);
2829 atomic_inc_32(&buf
->b_mfu_ghost_hits
);
2830 ARCSTAT_BUMP(arcstat_mfu_ghost_hits
);
2831 } else if (buf
->b_state
== arc_l2c_only
) {
2833 * This buffer is on the 2nd Level ARC.
2836 buf
->b_arc_access
= ddi_get_lbolt();
2837 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, buf
);
2838 arc_change_state(arc_mfu
, buf
, hash_lock
);
2840 ASSERT(!"invalid arc state");
2844 /* a generic arc_done_func_t which you can use */
2847 arc_bcopy_func(zio_t
*zio
, arc_buf_t
*buf
, void *arg
)
2849 if (zio
== NULL
|| zio
->io_error
== 0)
2850 bcopy(buf
->b_data
, arg
, buf
->b_hdr
->b_size
);
2851 VERIFY(arc_buf_remove_ref(buf
, arg
));
2854 /* a generic arc_done_func_t */
2856 arc_getbuf_func(zio_t
*zio
, arc_buf_t
*buf
, void *arg
)
2858 arc_buf_t
**bufp
= arg
;
2859 if (zio
&& zio
->io_error
) {
2860 VERIFY(arc_buf_remove_ref(buf
, arg
));
2864 ASSERT(buf
->b_data
);
2869 arc_read_done(zio_t
*zio
)
2871 arc_buf_hdr_t
*hdr
, *found
;
2873 arc_buf_t
*abuf
; /* buffer we're assigning to callback */
2874 kmutex_t
*hash_lock
;
2875 arc_callback_t
*callback_list
, *acb
;
2876 int freeable
= FALSE
;
2878 buf
= zio
->io_private
;
2882 * The hdr was inserted into hash-table and removed from lists
2883 * prior to starting I/O. We should find this header, since
2884 * it's in the hash table, and it should be legit since it's
2885 * not possible to evict it during the I/O. The only possible
2886 * reason for it not to be found is if we were freed during the
2889 found
= buf_hash_find(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
,
2892 ASSERT((found
== NULL
&& HDR_FREED_IN_READ(hdr
) && hash_lock
== NULL
) ||
2893 (found
== hdr
&& DVA_EQUAL(&hdr
->b_dva
, BP_IDENTITY(zio
->io_bp
))) ||
2894 (found
== hdr
&& HDR_L2_READING(hdr
)));
2896 hdr
->b_flags
&= ~ARC_L2_EVICTED
;
2897 if (l2arc_noprefetch
&& (hdr
->b_flags
& ARC_PREFETCH
))
2898 hdr
->b_flags
&= ~ARC_L2CACHE
;
2900 /* byteswap if necessary */
2901 callback_list
= hdr
->b_acb
;
2902 ASSERT(callback_list
!= NULL
);
2903 if (BP_SHOULD_BYTESWAP(zio
->io_bp
) && zio
->io_error
== 0) {
2904 dmu_object_byteswap_t bswap
=
2905 DMU_OT_BYTESWAP(BP_GET_TYPE(zio
->io_bp
));
2906 if (BP_GET_LEVEL(zio
->io_bp
) > 0)
2907 byteswap_uint64_array(buf
->b_data
, hdr
->b_size
);
2909 dmu_ot_byteswap
[bswap
].ob_func(buf
->b_data
, hdr
->b_size
);
2912 arc_cksum_compute(buf
, B_FALSE
);
2914 if (hash_lock
&& zio
->io_error
== 0 && hdr
->b_state
== arc_anon
) {
2916 * Only call arc_access on anonymous buffers. This is because
2917 * if we've issued an I/O for an evicted buffer, we've already
2918 * called arc_access (to prevent any simultaneous readers from
2919 * getting confused).
2921 arc_access(hdr
, hash_lock
);
2924 /* create copies of the data buffer for the callers */
2926 for (acb
= callback_list
; acb
; acb
= acb
->acb_next
) {
2927 if (acb
->acb_done
) {
2929 ARCSTAT_BUMP(arcstat_duplicate_reads
);
2930 abuf
= arc_buf_clone(buf
);
2932 acb
->acb_buf
= abuf
;
2937 hdr
->b_flags
&= ~ARC_IO_IN_PROGRESS
;
2938 ASSERT(!HDR_BUF_AVAILABLE(hdr
));
2940 ASSERT(buf
->b_efunc
== NULL
);
2941 ASSERT(hdr
->b_datacnt
== 1);
2942 hdr
->b_flags
|= ARC_BUF_AVAILABLE
;
2945 ASSERT(refcount_is_zero(&hdr
->b_refcnt
) || callback_list
!= NULL
);
2947 if (zio
->io_error
!= 0) {
2948 hdr
->b_flags
|= ARC_IO_ERROR
;
2949 if (hdr
->b_state
!= arc_anon
)
2950 arc_change_state(arc_anon
, hdr
, hash_lock
);
2951 if (HDR_IN_HASH_TABLE(hdr
))
2952 buf_hash_remove(hdr
);
2953 freeable
= refcount_is_zero(&hdr
->b_refcnt
);
2957 * Broadcast before we drop the hash_lock to avoid the possibility
2958 * that the hdr (and hence the cv) might be freed before we get to
2959 * the cv_broadcast().
2961 cv_broadcast(&hdr
->b_cv
);
2964 mutex_exit(hash_lock
);
2967 * This block was freed while we waited for the read to
2968 * complete. It has been removed from the hash table and
2969 * moved to the anonymous state (so that it won't show up
2972 ASSERT3P(hdr
->b_state
, ==, arc_anon
);
2973 freeable
= refcount_is_zero(&hdr
->b_refcnt
);
2976 /* execute each callback and free its structure */
2977 while ((acb
= callback_list
) != NULL
) {
2979 acb
->acb_done(zio
, acb
->acb_buf
, acb
->acb_private
);
2981 if (acb
->acb_zio_dummy
!= NULL
) {
2982 acb
->acb_zio_dummy
->io_error
= zio
->io_error
;
2983 zio_nowait(acb
->acb_zio_dummy
);
2986 callback_list
= acb
->acb_next
;
2987 kmem_free(acb
, sizeof (arc_callback_t
));
2991 arc_hdr_destroy(hdr
);
2995 * "Read" the block at the specified DVA (in bp) via the
2996 * cache. If the block is found in the cache, invoke the provided
2997 * callback immediately and return. Note that the `zio' parameter
2998 * in the callback will be NULL in this case, since no IO was
2999 * required. If the block is not in the cache pass the read request
3000 * on to the spa with a substitute callback function, so that the
3001 * requested block will be added to the cache.
3003 * If a read request arrives for a block that has a read in-progress,
3004 * either wait for the in-progress read to complete (and return the
3005 * results); or, if this is a read with a "done" func, add a record
3006 * to the read to invoke the "done" func when the read completes,
3007 * and return; or just return.
3009 * arc_read_done() will invoke all the requested "done" functions
3010 * for readers of this block.
3013 arc_read(zio_t
*pio
, spa_t
*spa
, const blkptr_t
*bp
, arc_done_func_t
*done
,
3014 void *private, int priority
, int zio_flags
, uint32_t *arc_flags
,
3015 const zbookmark_t
*zb
)
3018 arc_buf_t
*buf
= NULL
;
3019 kmutex_t
*hash_lock
;
3021 uint64_t guid
= spa_load_guid(spa
);
3025 hdr
= buf_hash_find(guid
, BP_IDENTITY(bp
), BP_PHYSICAL_BIRTH(bp
),
3027 if (hdr
&& hdr
->b_datacnt
> 0) {
3029 *arc_flags
|= ARC_CACHED
;
3031 if (HDR_IO_IN_PROGRESS(hdr
)) {
3033 if (*arc_flags
& ARC_WAIT
) {
3034 cv_wait(&hdr
->b_cv
, hash_lock
);
3035 mutex_exit(hash_lock
);
3038 ASSERT(*arc_flags
& ARC_NOWAIT
);
3041 arc_callback_t
*acb
= NULL
;
3043 acb
= kmem_zalloc(sizeof (arc_callback_t
),
3045 acb
->acb_done
= done
;
3046 acb
->acb_private
= private;
3048 acb
->acb_zio_dummy
= zio_null(pio
,
3049 spa
, NULL
, NULL
, NULL
, zio_flags
);
3051 ASSERT(acb
->acb_done
!= NULL
);
3052 acb
->acb_next
= hdr
->b_acb
;
3054 add_reference(hdr
, hash_lock
, private);
3055 mutex_exit(hash_lock
);
3058 mutex_exit(hash_lock
);
3062 ASSERT(hdr
->b_state
== arc_mru
|| hdr
->b_state
== arc_mfu
);
3065 add_reference(hdr
, hash_lock
, private);
3067 * If this block is already in use, create a new
3068 * copy of the data so that we will be guaranteed
3069 * that arc_release() will always succeed.
3073 ASSERT(buf
->b_data
);
3074 if (HDR_BUF_AVAILABLE(hdr
)) {
3075 ASSERT(buf
->b_efunc
== NULL
);
3076 hdr
->b_flags
&= ~ARC_BUF_AVAILABLE
;
3078 buf
= arc_buf_clone(buf
);
3081 } else if (*arc_flags
& ARC_PREFETCH
&&
3082 refcount_count(&hdr
->b_refcnt
) == 0) {
3083 hdr
->b_flags
|= ARC_PREFETCH
;
3085 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
3086 arc_access(hdr
, hash_lock
);
3087 if (*arc_flags
& ARC_L2CACHE
)
3088 hdr
->b_flags
|= ARC_L2CACHE
;
3089 if (*arc_flags
& ARC_L2COMPRESS
)
3090 hdr
->b_flags
|= ARC_L2COMPRESS
;
3091 mutex_exit(hash_lock
);
3092 ARCSTAT_BUMP(arcstat_hits
);
3093 ARCSTAT_CONDSTAT(!(hdr
->b_flags
& ARC_PREFETCH
),
3094 demand
, prefetch
, hdr
->b_type
!= ARC_BUFC_METADATA
,
3095 data
, metadata
, hits
);
3098 done(NULL
, buf
, private);
3100 uint64_t size
= BP_GET_LSIZE(bp
);
3101 arc_callback_t
*acb
;
3104 boolean_t devw
= B_FALSE
;
3107 /* this block is not in the cache */
3108 arc_buf_hdr_t
*exists
;
3109 arc_buf_contents_t type
= BP_GET_BUFC_TYPE(bp
);
3110 buf
= arc_buf_alloc(spa
, size
, private, type
);
3112 hdr
->b_dva
= *BP_IDENTITY(bp
);
3113 hdr
->b_birth
= BP_PHYSICAL_BIRTH(bp
);
3114 hdr
->b_cksum0
= bp
->blk_cksum
.zc_word
[0];
3115 exists
= buf_hash_insert(hdr
, &hash_lock
);
3117 /* somebody beat us to the hash insert */
3118 mutex_exit(hash_lock
);
3119 buf_discard_identity(hdr
);
3120 (void) arc_buf_remove_ref(buf
, private);
3121 goto top
; /* restart the IO request */
3123 /* if this is a prefetch, we don't have a reference */
3124 if (*arc_flags
& ARC_PREFETCH
) {
3125 (void) remove_reference(hdr
, hash_lock
,
3127 hdr
->b_flags
|= ARC_PREFETCH
;
3129 if (*arc_flags
& ARC_L2CACHE
)
3130 hdr
->b_flags
|= ARC_L2CACHE
;
3131 if (*arc_flags
& ARC_L2COMPRESS
)
3132 hdr
->b_flags
|= ARC_L2COMPRESS
;
3133 if (BP_GET_LEVEL(bp
) > 0)
3134 hdr
->b_flags
|= ARC_INDIRECT
;
3136 /* this block is in the ghost cache */
3137 ASSERT(GHOST_STATE(hdr
->b_state
));
3138 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
3139 ASSERT0(refcount_count(&hdr
->b_refcnt
));
3140 ASSERT(hdr
->b_buf
== NULL
);
3142 /* if this is a prefetch, we don't have a reference */
3143 if (*arc_flags
& ARC_PREFETCH
)
3144 hdr
->b_flags
|= ARC_PREFETCH
;
3146 add_reference(hdr
, hash_lock
, private);
3147 if (*arc_flags
& ARC_L2CACHE
)
3148 hdr
->b_flags
|= ARC_L2CACHE
;
3149 if (*arc_flags
& ARC_L2COMPRESS
)
3150 hdr
->b_flags
|= ARC_L2COMPRESS
;
3151 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
3154 buf
->b_efunc
= NULL
;
3155 buf
->b_private
= NULL
;
3158 ASSERT(hdr
->b_datacnt
== 0);
3160 arc_get_data_buf(buf
);
3161 arc_access(hdr
, hash_lock
);
3164 ASSERT(!GHOST_STATE(hdr
->b_state
));
3166 acb
= kmem_zalloc(sizeof (arc_callback_t
), KM_PUSHPAGE
);
3167 acb
->acb_done
= done
;
3168 acb
->acb_private
= private;
3170 ASSERT(hdr
->b_acb
== NULL
);
3172 hdr
->b_flags
|= ARC_IO_IN_PROGRESS
;
3174 if (HDR_L2CACHE(hdr
) && hdr
->b_l2hdr
!= NULL
&&
3175 (vd
= hdr
->b_l2hdr
->b_dev
->l2ad_vdev
) != NULL
) {
3176 devw
= hdr
->b_l2hdr
->b_dev
->l2ad_writing
;
3177 addr
= hdr
->b_l2hdr
->b_daddr
;
3179 * Lock out device removal.
3181 if (vdev_is_dead(vd
) ||
3182 !spa_config_tryenter(spa
, SCL_L2ARC
, vd
, RW_READER
))
3186 mutex_exit(hash_lock
);
3189 * At this point, we have a level 1 cache miss. Try again in
3190 * L2ARC if possible.
3192 ASSERT3U(hdr
->b_size
, ==, size
);
3193 DTRACE_PROBE4(arc__miss
, arc_buf_hdr_t
*, hdr
, blkptr_t
*, bp
,
3194 uint64_t, size
, zbookmark_t
*, zb
);
3195 ARCSTAT_BUMP(arcstat_misses
);
3196 ARCSTAT_CONDSTAT(!(hdr
->b_flags
& ARC_PREFETCH
),
3197 demand
, prefetch
, hdr
->b_type
!= ARC_BUFC_METADATA
,
3198 data
, metadata
, misses
);
3200 if (vd
!= NULL
&& l2arc_ndev
!= 0 && !(l2arc_norw
&& devw
)) {
3202 * Read from the L2ARC if the following are true:
3203 * 1. The L2ARC vdev was previously cached.
3204 * 2. This buffer still has L2ARC metadata.
3205 * 3. This buffer isn't currently writing to the L2ARC.
3206 * 4. The L2ARC entry wasn't evicted, which may
3207 * also have invalidated the vdev.
3208 * 5. This isn't prefetch and l2arc_noprefetch is set.
3210 if (hdr
->b_l2hdr
!= NULL
&&
3211 !HDR_L2_WRITING(hdr
) && !HDR_L2_EVICTED(hdr
) &&
3212 !(l2arc_noprefetch
&& HDR_PREFETCH(hdr
))) {
3213 l2arc_read_callback_t
*cb
;
3215 DTRACE_PROBE1(l2arc__hit
, arc_buf_hdr_t
*, hdr
);
3216 ARCSTAT_BUMP(arcstat_l2_hits
);
3217 atomic_inc_32(&hdr
->b_l2hdr
->b_hits
);
3219 cb
= kmem_zalloc(sizeof (l2arc_read_callback_t
),
3221 cb
->l2rcb_buf
= buf
;
3222 cb
->l2rcb_spa
= spa
;
3225 cb
->l2rcb_flags
= zio_flags
;
3226 cb
->l2rcb_compress
= hdr
->b_l2hdr
->b_compress
;
3228 ASSERT(addr
>= VDEV_LABEL_START_SIZE
&&
3229 addr
+ size
< vd
->vdev_psize
-
3230 VDEV_LABEL_END_SIZE
);
3233 * l2arc read. The SCL_L2ARC lock will be
3234 * released by l2arc_read_done().
3235 * Issue a null zio if the underlying buffer
3236 * was squashed to zero size by compression.
3238 if (hdr
->b_l2hdr
->b_compress
==
3239 ZIO_COMPRESS_EMPTY
) {
3240 rzio
= zio_null(pio
, spa
, vd
,
3241 l2arc_read_done
, cb
,
3242 zio_flags
| ZIO_FLAG_DONT_CACHE
|
3244 ZIO_FLAG_DONT_PROPAGATE
|
3245 ZIO_FLAG_DONT_RETRY
);
3247 rzio
= zio_read_phys(pio
, vd
, addr
,
3248 hdr
->b_l2hdr
->b_asize
,
3249 buf
->b_data
, ZIO_CHECKSUM_OFF
,
3250 l2arc_read_done
, cb
, priority
,
3251 zio_flags
| ZIO_FLAG_DONT_CACHE
|
3253 ZIO_FLAG_DONT_PROPAGATE
|
3254 ZIO_FLAG_DONT_RETRY
, B_FALSE
);
3256 DTRACE_PROBE2(l2arc__read
, vdev_t
*, vd
,
3258 ARCSTAT_INCR(arcstat_l2_read_bytes
,
3259 hdr
->b_l2hdr
->b_asize
);
3261 if (*arc_flags
& ARC_NOWAIT
) {
3266 ASSERT(*arc_flags
& ARC_WAIT
);
3267 if (zio_wait(rzio
) == 0)
3270 /* l2arc read error; goto zio_read() */
3272 DTRACE_PROBE1(l2arc__miss
,
3273 arc_buf_hdr_t
*, hdr
);
3274 ARCSTAT_BUMP(arcstat_l2_misses
);
3275 if (HDR_L2_WRITING(hdr
))
3276 ARCSTAT_BUMP(arcstat_l2_rw_clash
);
3277 spa_config_exit(spa
, SCL_L2ARC
, vd
);
3281 spa_config_exit(spa
, SCL_L2ARC
, vd
);
3282 if (l2arc_ndev
!= 0) {
3283 DTRACE_PROBE1(l2arc__miss
,
3284 arc_buf_hdr_t
*, hdr
);
3285 ARCSTAT_BUMP(arcstat_l2_misses
);
3289 rzio
= zio_read(pio
, spa
, bp
, buf
->b_data
, size
,
3290 arc_read_done
, buf
, priority
, zio_flags
, zb
);
3292 if (*arc_flags
& ARC_WAIT
) {
3293 rc
= zio_wait(rzio
);
3297 ASSERT(*arc_flags
& ARC_NOWAIT
);
3302 spa_read_history_add(spa
, zb
, *arc_flags
);
3307 arc_add_prune_callback(arc_prune_func_t
*func
, void *private)
3311 p
= kmem_alloc(sizeof(*p
), KM_SLEEP
);
3313 p
->p_private
= private;
3314 list_link_init(&p
->p_node
);
3315 refcount_create(&p
->p_refcnt
);
3317 mutex_enter(&arc_prune_mtx
);
3318 refcount_add(&p
->p_refcnt
, &arc_prune_list
);
3319 list_insert_head(&arc_prune_list
, p
);
3320 mutex_exit(&arc_prune_mtx
);
3326 arc_remove_prune_callback(arc_prune_t
*p
)
3328 mutex_enter(&arc_prune_mtx
);
3329 list_remove(&arc_prune_list
, p
);
3330 if (refcount_remove(&p
->p_refcnt
, &arc_prune_list
) == 0) {
3331 refcount_destroy(&p
->p_refcnt
);
3332 kmem_free(p
, sizeof (*p
));
3334 mutex_exit(&arc_prune_mtx
);
3338 arc_set_callback(arc_buf_t
*buf
, arc_evict_func_t
*func
, void *private)
3340 ASSERT(buf
->b_hdr
!= NULL
);
3341 ASSERT(buf
->b_hdr
->b_state
!= arc_anon
);
3342 ASSERT(!refcount_is_zero(&buf
->b_hdr
->b_refcnt
) || func
== NULL
);
3343 ASSERT(buf
->b_efunc
== NULL
);
3344 ASSERT(!HDR_BUF_AVAILABLE(buf
->b_hdr
));
3346 buf
->b_efunc
= func
;
3347 buf
->b_private
= private;
3351 * Notify the arc that a block was freed, and thus will never be used again.
3354 arc_freed(spa_t
*spa
, const blkptr_t
*bp
)
3357 kmutex_t
*hash_lock
;
3358 uint64_t guid
= spa_load_guid(spa
);
3360 hdr
= buf_hash_find(guid
, BP_IDENTITY(bp
), BP_PHYSICAL_BIRTH(bp
),
3364 if (HDR_BUF_AVAILABLE(hdr
)) {
3365 arc_buf_t
*buf
= hdr
->b_buf
;
3366 add_reference(hdr
, hash_lock
, FTAG
);
3367 hdr
->b_flags
&= ~ARC_BUF_AVAILABLE
;
3368 mutex_exit(hash_lock
);
3370 arc_release(buf
, FTAG
);
3371 (void) arc_buf_remove_ref(buf
, FTAG
);
3373 mutex_exit(hash_lock
);
3379 * This is used by the DMU to let the ARC know that a buffer is
3380 * being evicted, so the ARC should clean up. If this arc buf
3381 * is not yet in the evicted state, it will be put there.
3384 arc_buf_evict(arc_buf_t
*buf
)
3387 kmutex_t
*hash_lock
;
3390 mutex_enter(&buf
->b_evict_lock
);
3394 * We are in arc_do_user_evicts().
3396 ASSERT(buf
->b_data
== NULL
);
3397 mutex_exit(&buf
->b_evict_lock
);
3399 } else if (buf
->b_data
== NULL
) {
3400 arc_buf_t copy
= *buf
; /* structure assignment */
3402 * We are on the eviction list; process this buffer now
3403 * but let arc_do_user_evicts() do the reaping.
3405 buf
->b_efunc
= NULL
;
3406 mutex_exit(&buf
->b_evict_lock
);
3407 VERIFY(copy
.b_efunc(©
) == 0);
3410 hash_lock
= HDR_LOCK(hdr
);
3411 mutex_enter(hash_lock
);
3413 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
3415 ASSERT3U(refcount_count(&hdr
->b_refcnt
), <, hdr
->b_datacnt
);
3416 ASSERT(hdr
->b_state
== arc_mru
|| hdr
->b_state
== arc_mfu
);
3419 * Pull this buffer off of the hdr
3422 while (*bufp
!= buf
)
3423 bufp
= &(*bufp
)->b_next
;
3424 *bufp
= buf
->b_next
;
3426 ASSERT(buf
->b_data
!= NULL
);
3427 arc_buf_destroy(buf
, FALSE
, FALSE
);
3429 if (hdr
->b_datacnt
== 0) {
3430 arc_state_t
*old_state
= hdr
->b_state
;
3431 arc_state_t
*evicted_state
;
3433 ASSERT(hdr
->b_buf
== NULL
);
3434 ASSERT(refcount_is_zero(&hdr
->b_refcnt
));
3437 (old_state
== arc_mru
) ? arc_mru_ghost
: arc_mfu_ghost
;
3439 mutex_enter(&old_state
->arcs_mtx
);
3440 mutex_enter(&evicted_state
->arcs_mtx
);
3442 arc_change_state(evicted_state
, hdr
, hash_lock
);
3443 ASSERT(HDR_IN_HASH_TABLE(hdr
));
3444 hdr
->b_flags
|= ARC_IN_HASH_TABLE
;
3445 hdr
->b_flags
&= ~ARC_BUF_AVAILABLE
;
3447 mutex_exit(&evicted_state
->arcs_mtx
);
3448 mutex_exit(&old_state
->arcs_mtx
);
3450 mutex_exit(hash_lock
);
3451 mutex_exit(&buf
->b_evict_lock
);
3453 VERIFY(buf
->b_efunc(buf
) == 0);
3454 buf
->b_efunc
= NULL
;
3455 buf
->b_private
= NULL
;
3458 kmem_cache_free(buf_cache
, buf
);
3463 * Release this buffer from the cache, making it an anonymous buffer. This
3464 * must be done after a read and prior to modifying the buffer contents.
3465 * If the buffer has more than one reference, we must make
3466 * a new hdr for the buffer.
3469 arc_release(arc_buf_t
*buf
, void *tag
)
3472 kmutex_t
*hash_lock
= NULL
;
3473 l2arc_buf_hdr_t
*l2hdr
;
3474 uint64_t buf_size
= 0;
3477 * It would be nice to assert that if it's DMU metadata (level >
3478 * 0 || it's the dnode file), then it must be syncing context.
3479 * But we don't know that information at this level.
3482 mutex_enter(&buf
->b_evict_lock
);
3485 /* this buffer is not on any list */
3486 ASSERT(refcount_count(&hdr
->b_refcnt
) > 0);
3488 if (hdr
->b_state
== arc_anon
) {
3489 /* this buffer is already released */
3490 ASSERT(buf
->b_efunc
== NULL
);
3492 hash_lock
= HDR_LOCK(hdr
);
3493 mutex_enter(hash_lock
);
3495 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
3498 l2hdr
= hdr
->b_l2hdr
;
3500 mutex_enter(&l2arc_buflist_mtx
);
3501 hdr
->b_l2hdr
= NULL
;
3503 buf_size
= hdr
->b_size
;
3506 * Do we have more than one buf?
3508 if (hdr
->b_datacnt
> 1) {
3509 arc_buf_hdr_t
*nhdr
;
3511 uint64_t blksz
= hdr
->b_size
;
3512 uint64_t spa
= hdr
->b_spa
;
3513 arc_buf_contents_t type
= hdr
->b_type
;
3514 uint32_t flags
= hdr
->b_flags
;
3516 ASSERT(hdr
->b_buf
!= buf
|| buf
->b_next
!= NULL
);
3518 * Pull the data off of this hdr and attach it to
3519 * a new anonymous hdr.
3521 (void) remove_reference(hdr
, hash_lock
, tag
);
3523 while (*bufp
!= buf
)
3524 bufp
= &(*bufp
)->b_next
;
3525 *bufp
= buf
->b_next
;
3528 ASSERT3U(hdr
->b_state
->arcs_size
, >=, hdr
->b_size
);
3529 atomic_add_64(&hdr
->b_state
->arcs_size
, -hdr
->b_size
);
3530 if (refcount_is_zero(&hdr
->b_refcnt
)) {
3531 uint64_t *size
= &hdr
->b_state
->arcs_lsize
[hdr
->b_type
];
3532 ASSERT3U(*size
, >=, hdr
->b_size
);
3533 atomic_add_64(size
, -hdr
->b_size
);
3537 * We're releasing a duplicate user data buffer, update
3538 * our statistics accordingly.
3540 if (hdr
->b_type
== ARC_BUFC_DATA
) {
3541 ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers
);
3542 ARCSTAT_INCR(arcstat_duplicate_buffers_size
,
3545 hdr
->b_datacnt
-= 1;
3546 arc_cksum_verify(buf
);
3548 mutex_exit(hash_lock
);
3550 nhdr
= kmem_cache_alloc(hdr_cache
, KM_PUSHPAGE
);
3551 nhdr
->b_size
= blksz
;
3553 nhdr
->b_type
= type
;
3555 nhdr
->b_state
= arc_anon
;
3556 nhdr
->b_arc_access
= 0;
3557 nhdr
->b_mru_hits
= 0;
3558 nhdr
->b_mru_ghost_hits
= 0;
3559 nhdr
->b_mfu_hits
= 0;
3560 nhdr
->b_mfu_ghost_hits
= 0;
3561 nhdr
->b_l2_hits
= 0;
3562 nhdr
->b_flags
= flags
& ARC_L2_WRITING
;
3563 nhdr
->b_l2hdr
= NULL
;
3564 nhdr
->b_datacnt
= 1;
3565 nhdr
->b_freeze_cksum
= NULL
;
3566 (void) refcount_add(&nhdr
->b_refcnt
, tag
);
3568 mutex_exit(&buf
->b_evict_lock
);
3569 atomic_add_64(&arc_anon
->arcs_size
, blksz
);
3571 mutex_exit(&buf
->b_evict_lock
);
3572 ASSERT(refcount_count(&hdr
->b_refcnt
) == 1);
3573 ASSERT(!list_link_active(&hdr
->b_arc_node
));
3574 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
3575 if (hdr
->b_state
!= arc_anon
)
3576 arc_change_state(arc_anon
, hdr
, hash_lock
);
3577 hdr
->b_arc_access
= 0;
3578 hdr
->b_mru_hits
= 0;
3579 hdr
->b_mru_ghost_hits
= 0;
3580 hdr
->b_mfu_hits
= 0;
3581 hdr
->b_mfu_ghost_hits
= 0;
3584 mutex_exit(hash_lock
);
3586 buf_discard_identity(hdr
);
3589 buf
->b_efunc
= NULL
;
3590 buf
->b_private
= NULL
;
3593 ARCSTAT_INCR(arcstat_l2_asize
, -l2hdr
->b_asize
);
3594 list_remove(l2hdr
->b_dev
->l2ad_buflist
, hdr
);
3595 kmem_free(l2hdr
, sizeof (l2arc_buf_hdr_t
));
3596 arc_space_return(L2HDR_SIZE
, ARC_SPACE_L2HDRS
);
3597 ARCSTAT_INCR(arcstat_l2_size
, -buf_size
);
3598 mutex_exit(&l2arc_buflist_mtx
);
3603 arc_released(arc_buf_t
*buf
)
3607 mutex_enter(&buf
->b_evict_lock
);
3608 released
= (buf
->b_data
!= NULL
&& buf
->b_hdr
->b_state
== arc_anon
);
3609 mutex_exit(&buf
->b_evict_lock
);
3614 arc_has_callback(arc_buf_t
*buf
)
3618 mutex_enter(&buf
->b_evict_lock
);
3619 callback
= (buf
->b_efunc
!= NULL
);
3620 mutex_exit(&buf
->b_evict_lock
);
3626 arc_referenced(arc_buf_t
*buf
)
3630 mutex_enter(&buf
->b_evict_lock
);
3631 referenced
= (refcount_count(&buf
->b_hdr
->b_refcnt
));
3632 mutex_exit(&buf
->b_evict_lock
);
3633 return (referenced
);
3638 arc_write_ready(zio_t
*zio
)
3640 arc_write_callback_t
*callback
= zio
->io_private
;
3641 arc_buf_t
*buf
= callback
->awcb_buf
;
3642 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
3644 ASSERT(!refcount_is_zero(&buf
->b_hdr
->b_refcnt
));
3645 callback
->awcb_ready(zio
, buf
, callback
->awcb_private
);
3648 * If the IO is already in progress, then this is a re-write
3649 * attempt, so we need to thaw and re-compute the cksum.
3650 * It is the responsibility of the callback to handle the
3651 * accounting for any re-write attempt.
3653 if (HDR_IO_IN_PROGRESS(hdr
)) {
3654 mutex_enter(&hdr
->b_freeze_lock
);
3655 if (hdr
->b_freeze_cksum
!= NULL
) {
3656 kmem_free(hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
3657 hdr
->b_freeze_cksum
= NULL
;
3659 mutex_exit(&hdr
->b_freeze_lock
);
3661 arc_cksum_compute(buf
, B_FALSE
);
3662 hdr
->b_flags
|= ARC_IO_IN_PROGRESS
;
3666 arc_write_done(zio_t
*zio
)
3668 arc_write_callback_t
*callback
= zio
->io_private
;
3669 arc_buf_t
*buf
= callback
->awcb_buf
;
3670 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
3672 ASSERT(hdr
->b_acb
== NULL
);
3674 if (zio
->io_error
== 0) {
3675 hdr
->b_dva
= *BP_IDENTITY(zio
->io_bp
);
3676 hdr
->b_birth
= BP_PHYSICAL_BIRTH(zio
->io_bp
);
3677 hdr
->b_cksum0
= zio
->io_bp
->blk_cksum
.zc_word
[0];
3679 ASSERT(BUF_EMPTY(hdr
));
3683 * If the block to be written was all-zero, we may have
3684 * compressed it away. In this case no write was performed
3685 * so there will be no dva/birth/checksum. The buffer must
3686 * therefore remain anonymous (and uncached).
3688 if (!BUF_EMPTY(hdr
)) {
3689 arc_buf_hdr_t
*exists
;
3690 kmutex_t
*hash_lock
;
3692 ASSERT(zio
->io_error
== 0);
3694 arc_cksum_verify(buf
);
3696 exists
= buf_hash_insert(hdr
, &hash_lock
);
3699 * This can only happen if we overwrite for
3700 * sync-to-convergence, because we remove
3701 * buffers from the hash table when we arc_free().
3703 if (zio
->io_flags
& ZIO_FLAG_IO_REWRITE
) {
3704 if (!BP_EQUAL(&zio
->io_bp_orig
, zio
->io_bp
))
3705 panic("bad overwrite, hdr=%p exists=%p",
3706 (void *)hdr
, (void *)exists
);
3707 ASSERT(refcount_is_zero(&exists
->b_refcnt
));
3708 arc_change_state(arc_anon
, exists
, hash_lock
);
3709 mutex_exit(hash_lock
);
3710 arc_hdr_destroy(exists
);
3711 exists
= buf_hash_insert(hdr
, &hash_lock
);
3712 ASSERT3P(exists
, ==, NULL
);
3715 ASSERT(hdr
->b_datacnt
== 1);
3716 ASSERT(hdr
->b_state
== arc_anon
);
3717 ASSERT(BP_GET_DEDUP(zio
->io_bp
));
3718 ASSERT(BP_GET_LEVEL(zio
->io_bp
) == 0);
3721 hdr
->b_flags
&= ~ARC_IO_IN_PROGRESS
;
3722 /* if it's not anon, we are doing a scrub */
3723 if (!exists
&& hdr
->b_state
== arc_anon
)
3724 arc_access(hdr
, hash_lock
);
3725 mutex_exit(hash_lock
);
3727 hdr
->b_flags
&= ~ARC_IO_IN_PROGRESS
;
3730 ASSERT(!refcount_is_zero(&hdr
->b_refcnt
));
3731 callback
->awcb_done(zio
, buf
, callback
->awcb_private
);
3733 kmem_free(callback
, sizeof (arc_write_callback_t
));
3737 arc_write(zio_t
*pio
, spa_t
*spa
, uint64_t txg
,
3738 blkptr_t
*bp
, arc_buf_t
*buf
, boolean_t l2arc
, boolean_t l2arc_compress
,
3739 const zio_prop_t
*zp
, arc_done_func_t
*ready
, arc_done_func_t
*done
,
3740 void *private, int priority
, int zio_flags
, const zbookmark_t
*zb
)
3742 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
3743 arc_write_callback_t
*callback
;
3746 ASSERT(ready
!= NULL
);
3747 ASSERT(done
!= NULL
);
3748 ASSERT(!HDR_IO_ERROR(hdr
));
3749 ASSERT((hdr
->b_flags
& ARC_IO_IN_PROGRESS
) == 0);
3750 ASSERT(hdr
->b_acb
== NULL
);
3752 hdr
->b_flags
|= ARC_L2CACHE
;
3754 hdr
->b_flags
|= ARC_L2COMPRESS
;
3755 callback
= kmem_zalloc(sizeof (arc_write_callback_t
), KM_PUSHPAGE
);
3756 callback
->awcb_ready
= ready
;
3757 callback
->awcb_done
= done
;
3758 callback
->awcb_private
= private;
3759 callback
->awcb_buf
= buf
;
3761 zio
= zio_write(pio
, spa
, txg
, bp
, buf
->b_data
, hdr
->b_size
, zp
,
3762 arc_write_ready
, arc_write_done
, callback
, priority
, zio_flags
, zb
);
3768 arc_memory_throttle(uint64_t reserve
, uint64_t inflight_data
, uint64_t txg
)
3771 uint64_t available_memory
;
3773 if (zfs_arc_memory_throttle_disable
)
3776 /* Easily reclaimable memory (free + inactive + arc-evictable) */
3777 available_memory
= ptob(spl_kmem_availrmem()) + arc_evictable_memory();
3779 if (available_memory
<= zfs_write_limit_max
) {
3780 ARCSTAT_INCR(arcstat_memory_throttle_count
, 1);
3781 DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim
);
3782 return (SET_ERROR(EAGAIN
));
3785 if (inflight_data
> available_memory
/ 4) {
3786 ARCSTAT_INCR(arcstat_memory_throttle_count
, 1);
3787 DMU_TX_STAT_BUMP(dmu_tx_memory_inflight
);
3795 arc_tempreserve_clear(uint64_t reserve
)
3797 atomic_add_64(&arc_tempreserve
, -reserve
);
3798 ASSERT((int64_t)arc_tempreserve
>= 0);
3802 arc_tempreserve_space(uint64_t reserve
, uint64_t txg
)
3809 * Once in a while, fail for no reason. Everything should cope.
3811 if (spa_get_random(10000) == 0) {
3812 dprintf("forcing random failure\n");
3816 if (reserve
> arc_c
/4 && !arc_no_grow
)
3817 arc_c
= MIN(arc_c_max
, reserve
* 4);
3818 if (reserve
> arc_c
) {
3819 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve
);
3820 return (SET_ERROR(ENOMEM
));
3824 * Don't count loaned bufs as in flight dirty data to prevent long
3825 * network delays from blocking transactions that are ready to be
3826 * assigned to a txg.
3828 anon_size
= MAX((int64_t)(arc_anon
->arcs_size
- arc_loaned_bytes
), 0);
3831 * Writes will, almost always, require additional memory allocations
3832 * in order to compress/encrypt/etc the data. We therefor need to
3833 * make sure that there is sufficient available memory for this.
3835 if ((error
= arc_memory_throttle(reserve
, anon_size
, txg
)))
3839 * Throttle writes when the amount of dirty data in the cache
3840 * gets too large. We try to keep the cache less than half full
3841 * of dirty blocks so that our sync times don't grow too large.
3842 * Note: if two requests come in concurrently, we might let them
3843 * both succeed, when one of them should fail. Not a huge deal.
3846 if (reserve
+ arc_tempreserve
+ anon_size
> arc_c
/ 2 &&
3847 anon_size
> arc_c
/ 4) {
3848 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
3849 "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
3850 arc_tempreserve
>>10,
3851 arc_anon
->arcs_lsize
[ARC_BUFC_METADATA
]>>10,
3852 arc_anon
->arcs_lsize
[ARC_BUFC_DATA
]>>10,
3853 reserve
>>10, arc_c
>>10);
3854 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle
);
3855 return (SET_ERROR(ERESTART
));
3857 atomic_add_64(&arc_tempreserve
, reserve
);
3862 arc_kstat_update_state(arc_state_t
*state
, kstat_named_t
*size
,
3863 kstat_named_t
*evict_data
, kstat_named_t
*evict_metadata
)
3865 size
->value
.ui64
= state
->arcs_size
;
3866 evict_data
->value
.ui64
= state
->arcs_lsize
[ARC_BUFC_DATA
];
3867 evict_metadata
->value
.ui64
= state
->arcs_lsize
[ARC_BUFC_METADATA
];
3871 arc_kstat_update(kstat_t
*ksp
, int rw
)
3873 arc_stats_t
*as
= ksp
->ks_data
;
3875 if (rw
== KSTAT_WRITE
) {
3876 return (SET_ERROR(EACCES
));
3878 arc_kstat_update_state(arc_anon
,
3879 &as
->arcstat_anon_size
,
3880 &as
->arcstat_anon_evict_data
,
3881 &as
->arcstat_anon_evict_metadata
);
3882 arc_kstat_update_state(arc_mru
,
3883 &as
->arcstat_mru_size
,
3884 &as
->arcstat_mru_evict_data
,
3885 &as
->arcstat_mru_evict_metadata
);
3886 arc_kstat_update_state(arc_mru_ghost
,
3887 &as
->arcstat_mru_ghost_size
,
3888 &as
->arcstat_mru_ghost_evict_data
,
3889 &as
->arcstat_mru_ghost_evict_metadata
);
3890 arc_kstat_update_state(arc_mfu
,
3891 &as
->arcstat_mfu_size
,
3892 &as
->arcstat_mfu_evict_data
,
3893 &as
->arcstat_mfu_evict_metadata
);
3894 arc_kstat_update_state(arc_mfu_ghost
,
3895 &as
->arcstat_mfu_ghost_size
,
3896 &as
->arcstat_mfu_ghost_evict_data
,
3897 &as
->arcstat_mfu_ghost_evict_metadata
);
void
arc_init(void)
{
    mutex_init(&arc_reclaim_thr_lock, NULL, MUTEX_DEFAULT, NULL);
    cv_init(&arc_reclaim_thr_cv, NULL, CV_DEFAULT, NULL);

    /* Convert seconds to clock ticks */
    zfs_arc_min_prefetch_lifespan = 1 * hz;

    /* Start out with 1/8 of all memory */
    arc_c = physmem * PAGESIZE / 8;

#ifdef _KERNEL
    /*
     * On architectures where the physical memory can be larger
     * than the addressable space (intel in 32-bit mode), we may
     * need to limit the cache to 1/8 of VM size.
     */
    arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 8);

    /*
     * Register a shrinker to support synchronous (direct) memory
     * reclaim from the arc.  This is done to prevent kswapd from
     * swapping out pages when it is preferable to shrink the arc.
     */
    spl_register_shrinker(&arc_shrinker);
#endif

    /* set min cache to 1/32 of all memory, or 64MB, whichever is more */
    arc_c_min = MAX(arc_c / 4, 64<<20);
    /* set max to 1/2 of all memory */
    arc_c_max = MAX(arc_c * 4, arc_c_max);

    /*
     * Allow the tunables to override our calculations if they are
     * reasonable (ie. over 64MB)
     */
    if (zfs_arc_max > 64<<20 && zfs_arc_max < physmem * PAGESIZE)
        arc_c_max = zfs_arc_max;
    if (zfs_arc_min > 64<<20 && zfs_arc_min <= arc_c_max)
        arc_c_min = zfs_arc_min;

    arc_p = (arc_c >> 1);

    /* limit meta-data to 1/4 of the arc capacity */
    arc_meta_limit = arc_c_max / 4;

    /* Allow the tunable to override if it is reasonable */
    if (zfs_arc_meta_limit > 0 && zfs_arc_meta_limit <= arc_c_max)
        arc_meta_limit = zfs_arc_meta_limit;

    if (arc_c_min < arc_meta_limit / 2 && zfs_arc_min == 0)
        arc_c_min = arc_meta_limit / 2;
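
    /*
     * For example, on a system with 16 GiB of physical memory and no
     * tunable overrides, the calculations above give an initial arc_c of
     * 2 GiB (1/8 of memory), arc_c_min = 512 MiB (arc_c / 4), arc_c_max =
     * 8 GiB (arc_c * 4, i.e. half of memory) and arc_meta_limit = 2 GiB
     * (arc_c_max / 4); arc_c_min is then raised to 1 GiB because it
     * started out below arc_meta_limit / 2.
     */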
    /* if kmem_flags are set, lets try to use less memory */
    if (kmem_debugging())
        arc_c = arc_c / 2;
    if (arc_c < arc_c_min)
        arc_c = arc_c_min;

    arc_anon = &ARC_anon;
    arc_mru = &ARC_mru;
    arc_mru_ghost = &ARC_mru_ghost;
    arc_mfu = &ARC_mfu;
    arc_mfu_ghost = &ARC_mfu_ghost;
    arc_l2c_only = &ARC_l2c_only;

    mutex_init(&arc_anon->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_mru->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_mru_ghost->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_mfu->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_mfu_ghost->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_l2c_only->arcs_mtx, NULL, MUTEX_DEFAULT, NULL);

    list_create(&arc_mru->arcs_list[ARC_BUFC_METADATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mru->arcs_list[ARC_BUFC_DATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mfu->arcs_list[ARC_BUFC_METADATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mfu->arcs_list[ARC_BUFC_DATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));
    list_create(&arc_l2c_only->arcs_list[ARC_BUFC_DATA],
        sizeof (arc_buf_hdr_t), offsetof(arc_buf_hdr_t, b_arc_node));

    arc_anon->arcs_state = ARC_STATE_ANON;
    arc_mru->arcs_state = ARC_STATE_MRU;
    arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST;
    arc_mfu->arcs_state = ARC_STATE_MFU;
    arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST;
    arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY;

    arc_thread_exit = 0;
    list_create(&arc_prune_list, sizeof (arc_prune_t),
        offsetof(arc_prune_t, p_node));
    arc_eviction_list = NULL;
    mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&arc_eviction_mtx, NULL, MUTEX_DEFAULT, NULL);
    bzero(&arc_eviction_hdr, sizeof (arc_buf_hdr_t));

    arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED,
        sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);

    if (arc_ksp != NULL) {
        arc_ksp->ks_data = &arc_stats;
        arc_ksp->ks_update = arc_kstat_update;
        kstat_install(arc_ksp);
    }

    (void) thread_create(NULL, 0, arc_adapt_thread, NULL, 0, &p0,
        TS_RUN, minclsyspri);

    if (zfs_write_limit_max == 0)
        zfs_write_limit_max = ptob(physmem) >> zfs_write_limit_shift;
    else
        zfs_write_limit_shift = 0;
    mutex_init(&zfs_write_limit_lock, NULL, MUTEX_DEFAULT, NULL);
}
void
arc_fini(void)
{
    arc_prune_t *p;

    mutex_enter(&arc_reclaim_thr_lock);
#ifdef _KERNEL
    spl_unregister_shrinker(&arc_shrinker);
#endif /* _KERNEL */

    arc_thread_exit = 1;
    while (arc_thread_exit != 0)
        cv_wait(&arc_reclaim_thr_cv, &arc_reclaim_thr_lock);
    mutex_exit(&arc_reclaim_thr_lock);

    if (arc_ksp != NULL) {
        kstat_delete(arc_ksp);
        arc_ksp = NULL;
    }

    mutex_enter(&arc_prune_mtx);
    while ((p = list_head(&arc_prune_list)) != NULL) {
        list_remove(&arc_prune_list, p);
        refcount_remove(&p->p_refcnt, &arc_prune_list);
        refcount_destroy(&p->p_refcnt);
        kmem_free(p, sizeof (*p));
    }
    mutex_exit(&arc_prune_mtx);

    list_destroy(&arc_prune_list);
    mutex_destroy(&arc_prune_mtx);
    mutex_destroy(&arc_eviction_mtx);
    mutex_destroy(&arc_reclaim_thr_lock);
    cv_destroy(&arc_reclaim_thr_cv);

    list_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]);
    list_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]);
    list_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]);
    list_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]);
    list_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]);
    list_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]);
    list_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]);
    list_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]);

    mutex_destroy(&arc_anon->arcs_mtx);
    mutex_destroy(&arc_mru->arcs_mtx);
    mutex_destroy(&arc_mru_ghost->arcs_mtx);
    mutex_destroy(&arc_mfu->arcs_mtx);
    mutex_destroy(&arc_mfu_ghost->arcs_mtx);
    mutex_destroy(&arc_l2c_only->arcs_mtx);

    mutex_destroy(&zfs_write_limit_lock);

    ASSERT(arc_loaned_bytes == 0);
}
/*
 * Level 2 ARC
 *
 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
 * It uses dedicated storage devices to hold cached data, which are populated
 * using large infrequent writes.  The main role of this cache is to boost
 * the performance of random read workloads.  The intended L2ARC devices
 * include short-stroked disks, solid state disks, and other media with
 * substantially faster read latency than disk.
 *
 *                 +-----------------------+
 *                 |         ARC           |
 *                 +-----------------------+
 *                    |         ^     ^
 *                    |         |     |
 *      l2arc_feed_thread()    arc_read()
 *                    |         |     |
 *                    |  l2arc read   |
 *                    V         |     |
 *               +---------------+    |
 *               |     L2ARC     |    |
 *               +---------------+    |
 *                   |    ^           |
 *          l2arc_write() |           |
 *                   |    |           |
 *                   V    |           |
 *                 +-------+      +-------+
 *                 | vdev  |      | vdev  |
 *                 | cache |      | cache |
 *                 +-------+      +-------+
 *                 +=========+     .-----.
 *                 :  L2ARC  :    |-_____-|
 *                 : devices :    | Disks |
 *                 +=========+    `-_____-'
 *
 * Read requests are satisfied from the following sources, in order:
 *
 *	1) ARC
 *	2) vdev cache of L2ARC devices
 *	3) L2ARC devices
 *	4) vdev cache of disks
 *	5) disks
 *
 * Some L2ARC device types exhibit extremely slow write performance.
 * To accommodate for this there are some significant differences between
 * the L2ARC and traditional cache design:
 *
 * 1. There is no eviction path from the ARC to the L2ARC.  Evictions from
 * the ARC behave as usual, freeing buffers and placing headers on ghost
 * lists.  The ARC does not send buffers to the L2ARC during eviction as
 * this would add inflated write latencies for all ARC memory pressure.
 *
 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
 * It does this by periodically scanning buffers from the eviction-end of
 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
 * not already there. It scans until a headroom of buffers is satisfied,
 * which itself is a buffer for ARC eviction. If a compressible buffer is
 * found during scanning and selected for writing to an L2ARC device, we
 * temporarily boost scanning headroom during the next scan cycle to make
 * sure we adapt to compression effects (which might significantly reduce
 * the data volume we write to L2ARC). The thread that does this is
 * l2arc_feed_thread(), illustrated below; example sizes are included to
 * provide a better sense of ratio than this diagram:
 *
 *             head -->                        tail
 *              +---------------------+----------+
 *      ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
 *              +---------------------+----------+   |   o L2ARC eligible
 *      ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   : ARC buffer
 *              +---------------------+----------+   |
 *              15.9 Gbytes           ^ 32 Mbytes    |
 *                                 headroom          |
 *                                            l2arc_feed_thread()
 *                                                   |
 *                       l2arc write hand <--[oooo]--'
 *                               |
 *                               V
 *                +==============================+
 *      L2ARC dev |####|#|###|###|    |####| ... |
 *                +==============================+
 *
 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
 * evicted, then the L2ARC has cached a buffer much sooner than it probably
 * needed to, potentially wasting L2ARC device bandwidth and storage.  It is
 * safe to say that this is an uncommon case, since buffers at the end of
 * the ARC lists have moved there due to inactivity.
 *
 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
 * then the L2ARC simply misses copying some buffers.  This serves as a
 * pressure valve to prevent heavy read workloads from both stalling the ARC
 * with waits and clogging the L2ARC with writes.  This also helps prevent
 * the potential for the L2ARC to churn if it attempts to cache content too
 * quickly, such as during backups of the entire pool.
 *
 * 5. After system boot and before the ARC has filled main memory, there are
 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
 * lists can remain mostly static.  Instead of searching from tail of these
 * lists as pictured, the l2arc_feed_thread() will search from the list heads
 * for eligible buffers, greatly increasing its chance of finding them.
 *
 * The L2ARC device write speed is also boosted during this time so that
 * the L2ARC warms up faster.  Since there have been no ARC evictions yet,
 * there are no L2ARC reads, and no fear of degrading read performance
 * through increased writes.
 *
 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
 * the vdev queue can aggregate them into larger and fewer writes.  Each
 * device is written to in a rotor fashion, sweeping writes through
 * available space then repeating.
 *
 * 7. The L2ARC does not store dirty content.  It never needs to flush
 * write buffers back to disk based storage.
 *
 * 8. If an ARC buffer is written (and dirtied) which also exists in the
 * L2ARC, the now stale L2ARC buffer is immediately dropped.
 *
 * The performance of the L2ARC can be tweaked by a number of tunables, which
 * may be necessary for different workloads:
 *
 *	l2arc_write_max		max write bytes per interval
 *	l2arc_write_boost	extra write bytes during device warmup
 *	l2arc_noprefetch	skip caching prefetched buffers
 *	l2arc_nocompress	skip compressing buffers
 *	l2arc_headroom		number of max device writes to precache
 *	l2arc_headroom_boost	when we find compressed buffers during ARC
 *				scanning, we multiply headroom by this
 *				percentage factor for the next scan cycle,
 *				since more compressed buffers are likely to
 *				be present
 *	l2arc_feed_secs		seconds between L2ARC writing
 *
 * Tunables may be removed or added as future performance improvements are
 * integrated, and also may become zpool properties.
 *
 * There are three key functions that control how the L2ARC warms up:
 *
 *	l2arc_write_eligible()	check if a buffer is eligible to cache
 *	l2arc_write_size()	calculate how much to write
 *	l2arc_write_interval()	calculate sleep delay between writes
 *
 * These three functions determine what to write, how much, and how quickly
 * to send writes.
 */
static boolean_t
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *ab)
{
    /*
     * A buffer is *not* eligible for the L2ARC if it:
     * 1. belongs to a different spa.
     * 2. is already cached on the L2ARC.
     * 3. has an I/O in progress (it may be an incomplete read).
     * 4. is flagged not eligible (zfs property).
     */
    if (ab->b_spa != spa_guid || ab->b_l2hdr != NULL ||
        HDR_IO_IN_PROGRESS(ab) || !HDR_L2CACHE(ab))
        return (B_FALSE);

    return (B_TRUE);
}
static uint64_t
l2arc_write_size(void)
{
    uint64_t size;

    /*
     * Make sure our globals have meaningful values in case the user
     * altered them.
     */
    size = l2arc_write_max;
    if (size == 0) {
        cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
            "be greater than zero, resetting it to the default (%d)",
            L2ARC_WRITE_SIZE);
        size = l2arc_write_max = L2ARC_WRITE_SIZE;
    }

    if (arc_warm == B_FALSE)
        size += l2arc_write_boost;

    return (size);
}
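
/*
 * For example, until the ARC has seen its first eviction (arc_warm is still
 * B_FALSE), each feed cycle targets l2arc_write_max + l2arc_write_boost
 * bytes; once the ARC is warm, only l2arc_write_max bytes are targeted per
 * cycle.
 */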
static clock_t
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
{
    clock_t interval, next, now;

    /*
     * If the ARC lists are busy, increase our write rate; if the
     * lists are stale, idle back.  This is achieved by checking
     * how much we previously wrote - if it was more than half of
     * what we wanted, schedule the next write much sooner.
     */
    if (l2arc_feed_again && wrote > (wanted / 2))
        interval = (hz * l2arc_feed_min_ms) / 1000;
    else
        interval = hz * l2arc_feed_secs;

    now = ddi_get_lbolt();
    next = MAX(now, MIN(now + interval, began + interval));

    return (next);
}
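
/*
 * For example, with hz = 1000, l2arc_feed_secs = 1 and l2arc_feed_min_ms =
 * 200: a pass that wrote more than half of what it wanted (and with
 * l2arc_feed_again set) schedules the next pass 200 ticks after the
 * previous pass began, otherwise a full 1000 ticks out.  The MAX(now, ...)
 * clamp means a pass that overran its interval is followed immediately
 * rather than being scheduled in the past.
 */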
static void
l2arc_hdr_stat_add(void)
{
    ARCSTAT_INCR(arcstat_l2_hdr_size, HDR_SIZE);
    ARCSTAT_INCR(arcstat_hdr_size, -HDR_SIZE);
}

static void
l2arc_hdr_stat_remove(void)
{
    ARCSTAT_INCR(arcstat_l2_hdr_size, -HDR_SIZE);
    ARCSTAT_INCR(arcstat_hdr_size, HDR_SIZE);
}
/*
 * Cycle through L2ARC devices.  This is how L2ARC load balances.
 * If a device is returned, this also returns holding the spa config lock.
 */
static l2arc_dev_t *
l2arc_dev_get_next(void)
{
    l2arc_dev_t *first, *next = NULL;

    /*
     * Lock out the removal of spas (spa_namespace_lock), then removal
     * of cache devices (l2arc_dev_mtx).  Once a device has been selected,
     * both locks will be dropped and a spa config lock held instead.
     */
    mutex_enter(&spa_namespace_lock);
    mutex_enter(&l2arc_dev_mtx);

    /* if there are no vdevs, there is nothing to do */
    if (l2arc_ndev == 0)
        goto out;

    first = NULL;
    next = l2arc_dev_last;
    do {
        /* loop around the list looking for a non-faulted vdev */
        if (next == NULL) {
            next = list_head(l2arc_dev_list);
        } else {
            next = list_next(l2arc_dev_list, next);
            if (next == NULL)
                next = list_head(l2arc_dev_list);
        }

        /* if we have come back to the start, bail out */
        if (first == NULL)
            first = next;
        else if (next == first)
            goto out;

    } while (vdev_is_dead(next->l2ad_vdev));

    /* if we were unable to find any usable vdevs, return NULL */
    if (vdev_is_dead(next->l2ad_vdev))
        next = NULL;

    l2arc_dev_last = next;

out:
    mutex_exit(&l2arc_dev_mtx);

    /*
     * Grab the config lock to prevent the 'next' device from being
     * removed while we are writing to it.
     */
    if (next != NULL)
        spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER);
    mutex_exit(&spa_namespace_lock);

    return (next);
}
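
/*
 * For example, with two healthy cache devices the rotor above simply
 * alternates between them on successive feed cycles; a faulted device is
 * skipped until it becomes usable again, and NULL is returned if every
 * cache vdev is dead.
 */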
/*
 * Free buffers that were tagged for destruction.
 */
static void
l2arc_do_free_on_write(void)
{
    list_t *buflist;
    l2arc_data_free_t *df, *df_prev;

    mutex_enter(&l2arc_free_on_write_mtx);
    buflist = l2arc_free_on_write;

    for (df = list_tail(buflist); df; df = df_prev) {
        df_prev = list_prev(buflist, df);
        ASSERT(df->l2df_data != NULL);
        ASSERT(df->l2df_func != NULL);
        df->l2df_func(df->l2df_data, df->l2df_size);
        list_remove(buflist, df);
        kmem_free(df, sizeof (l2arc_data_free_t));
    }

    mutex_exit(&l2arc_free_on_write_mtx);
}
/*
 * A write to a cache device has completed.  Update all headers to allow
 * reads from these buffers to begin.
 */
static void
l2arc_write_done(zio_t *zio)
{
    l2arc_write_callback_t *cb;
    l2arc_dev_t *dev;
    list_t *buflist;
    arc_buf_hdr_t *head, *ab, *ab_prev;
    l2arc_buf_hdr_t *abl2;
    kmutex_t *hash_lock;

    cb = zio->io_private;
    ASSERT(cb != NULL);
    dev = cb->l2wcb_dev;
    ASSERT(dev != NULL);
    head = cb->l2wcb_head;
    ASSERT(head != NULL);
    buflist = dev->l2ad_buflist;
    ASSERT(buflist != NULL);
    DTRACE_PROBE2(l2arc__iodone, zio_t *, zio,
        l2arc_write_callback_t *, cb);

    if (zio->io_error != 0)
        ARCSTAT_BUMP(arcstat_l2_writes_error);

    mutex_enter(&l2arc_buflist_mtx);

    /*
     * All writes completed, or an error was hit.
     */
    for (ab = list_prev(buflist, head); ab; ab = ab_prev) {
        ab_prev = list_prev(buflist, ab);

        hash_lock = HDR_LOCK(ab);
        if (!mutex_tryenter(hash_lock)) {
            /*
             * This buffer misses out.  It may be in a stage
             * of eviction.  Its ARC_L2_WRITING flag will be
             * left set, denying reads to this buffer.
             */
            ARCSTAT_BUMP(arcstat_l2_writes_hdr_miss);
            continue;
        }

        abl2 = ab->b_l2hdr;

        /*
         * Release the temporary compressed buffer as soon as possible.
         */
        if (abl2->b_compress != ZIO_COMPRESS_OFF)
            l2arc_release_cdata_buf(ab);

        if (zio->io_error != 0) {
            /*
             * Error - drop L2ARC entry.
             */
            list_remove(buflist, ab);
            ARCSTAT_INCR(arcstat_l2_asize, -abl2->b_asize);
            ab->b_l2hdr = NULL;
            kmem_free(abl2, sizeof (l2arc_buf_hdr_t));
            arc_space_return(L2HDR_SIZE, ARC_SPACE_L2HDRS);
            ARCSTAT_INCR(arcstat_l2_size, -ab->b_size);
        }

        /*
         * Allow ARC to begin reads to this L2ARC entry.
         */
        ab->b_flags &= ~ARC_L2_WRITING;

        mutex_exit(hash_lock);
    }

    atomic_inc_64(&l2arc_writes_done);
    list_remove(buflist, head);
    kmem_cache_free(hdr_cache, head);
    mutex_exit(&l2arc_buflist_mtx);

    l2arc_do_free_on_write();

    kmem_free(cb, sizeof (l2arc_write_callback_t));
}
/*
 * A read to a cache device completed.  Validate buffer contents before
 * handing over to the regular ARC routines.
 */
static void
l2arc_read_done(zio_t *zio)
{
    l2arc_read_callback_t *cb;
    arc_buf_hdr_t *hdr;
    arc_buf_t *buf;
    kmutex_t *hash_lock;
    int equal;

    ASSERT(zio->io_vd != NULL);
    ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE);

    spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd);

    cb = zio->io_private;
    ASSERT(cb != NULL);
    buf = cb->l2rcb_buf;
    ASSERT(buf != NULL);

    hash_lock = HDR_LOCK(buf->b_hdr);
    mutex_enter(hash_lock);
    hdr = buf->b_hdr;
    ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));

    /*
     * If the buffer was compressed, decompress it first.
     */
    if (cb->l2rcb_compress != ZIO_COMPRESS_OFF)
        l2arc_decompress_zio(zio, hdr, cb->l2rcb_compress);
    ASSERT(zio->io_data != NULL);

    /*
     * Check this survived the L2ARC journey.
     */
    equal = arc_cksum_equal(buf);
    if (equal && zio->io_error == 0 && !HDR_L2_EVICTED(hdr)) {
        mutex_exit(hash_lock);
        zio->io_private = buf;
        zio->io_bp_copy = cb->l2rcb_bp;	/* XXX fix in L2ARC 2.0	*/
        zio->io_bp = &zio->io_bp_copy;	/* XXX fix in L2ARC 2.0	*/
        arc_read_done(zio);
    } else {
        mutex_exit(hash_lock);
        /*
         * Buffer didn't survive caching.  Increment stats and
         * reissue to the original storage device.
         */
        if (zio->io_error != 0) {
            ARCSTAT_BUMP(arcstat_l2_io_error);
        } else {
            zio->io_error = SET_ERROR(EIO);
        }
        if (!equal)
            ARCSTAT_BUMP(arcstat_l2_cksum_bad);

        /*
         * If there's no waiter, issue an async i/o to the primary
         * storage now.  If there *is* a waiter, the caller must
         * issue the i/o in a context where it's OK to block.
         */
        if (zio->io_waiter == NULL) {
            zio_t *pio = zio_unique_parent(zio);

            ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL);

            zio_nowait(zio_read(pio, cb->l2rcb_spa, &cb->l2rcb_bp,
                buf->b_data, zio->io_size, arc_read_done, buf,
                zio->io_priority, cb->l2rcb_flags, &cb->l2rcb_zb));
        }
    }

    kmem_free(cb, sizeof (l2arc_read_callback_t));
}
/*
 * This is the list priority from which the L2ARC will search for pages to
 * cache.  This is used within loops (0..3) to cycle through lists in the
 * desired order.  This order can have a significant effect on cache
 * performance.
 *
 * Currently the metadata lists are hit first, MFU then MRU, followed by
 * the data lists.  This function returns a locked list, and also returns
 * the lock pointer.
 */
static list_t *
l2arc_list_locked(int list_num, kmutex_t **lock)
{
    list_t *list = NULL;

    ASSERT(list_num >= 0 && list_num <= 3);

    switch (list_num) {
    case 0:
        list = &arc_mfu->arcs_list[ARC_BUFC_METADATA];
        *lock = &arc_mfu->arcs_mtx;
        break;
    case 1:
        list = &arc_mru->arcs_list[ARC_BUFC_METADATA];
        *lock = &arc_mru->arcs_mtx;
        break;
    case 2:
        list = &arc_mfu->arcs_list[ARC_BUFC_DATA];
        *lock = &arc_mfu->arcs_mtx;
        break;
    case 3:
        list = &arc_mru->arcs_list[ARC_BUFC_DATA];
        *lock = &arc_mru->arcs_mtx;
        break;
    }

    ASSERT(!(MUTEX_HELD(*lock)));
    mutex_enter(*lock);

    return (list);
}
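
/*
 * A single feed pass therefore visits the lists in this order: MFU metadata
 * (0), MRU metadata (1), MFU data (2), MRU data (3).
 */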
/*
 * Evict buffers from the device write hand to the distance specified in
 * bytes.  This distance may span populated buffers, it may span nothing.
 * This is clearing a region on the L2ARC device ready for writing.
 * If the 'all' boolean is set, every buffer is evicted.
 */
static void
l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all)
{
    list_t *buflist;
    l2arc_buf_hdr_t *abl2;
    arc_buf_hdr_t *ab, *ab_prev;
    kmutex_t *hash_lock;
    uint64_t taddr;

    buflist = dev->l2ad_buflist;

    if (buflist == NULL)
        return;

    if (!all && dev->l2ad_first) {
        /*
         * This is the first sweep through the device.  There is
         * nothing to evict.
         */
        return;
    }

    if (dev->l2ad_hand >= (dev->l2ad_end - (2 * distance))) {
        /*
         * When nearing the end of the device, evict to the end
         * before the device write hand jumps to the start.
         */
        taddr = dev->l2ad_end;
    } else {
        taddr = dev->l2ad_hand + distance;
    }

    DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist,
        uint64_t, taddr, boolean_t, all);

top:
    mutex_enter(&l2arc_buflist_mtx);
    for (ab = list_tail(buflist); ab; ab = ab_prev) {
        ab_prev = list_prev(buflist, ab);

        hash_lock = HDR_LOCK(ab);
        if (!mutex_tryenter(hash_lock)) {
            /*
             * Missed the hash lock.  Retry.
             */
            ARCSTAT_BUMP(arcstat_l2_evict_lock_retry);
            mutex_exit(&l2arc_buflist_mtx);
            mutex_enter(hash_lock);
            mutex_exit(hash_lock);
            goto top;
        }

        if (HDR_L2_WRITE_HEAD(ab)) {
            /*
             * We hit a write head node.  Leave it for
             * l2arc_write_done().
             */
            list_remove(buflist, ab);
            mutex_exit(hash_lock);
            continue;
        }

        if (!all && ab->b_l2hdr != NULL &&
            (ab->b_l2hdr->b_daddr > taddr ||
            ab->b_l2hdr->b_daddr < dev->l2ad_hand)) {
            /*
             * We've evicted to the target address,
             * or the end of the device.
             */
            mutex_exit(hash_lock);
            break;
        }

        if (HDR_FREE_IN_PROGRESS(ab)) {
            /*
             * Already on the path to destruction.
             */
            mutex_exit(hash_lock);
            continue;
        }

        if (ab->b_state == arc_l2c_only) {
            ASSERT(!HDR_L2_READING(ab));
            /*
             * This doesn't exist in the ARC.  Destroy.
             * arc_hdr_destroy() will call list_remove()
             * and decrement arcstat_l2_size.
             */
            arc_change_state(arc_anon, ab, hash_lock);
            arc_hdr_destroy(ab);
        } else {
            /*
             * Invalidate issued or about to be issued
             * reads, since we may be about to write
             * over this location.
             */
            if (HDR_L2_READING(ab)) {
                ARCSTAT_BUMP(arcstat_l2_evict_reading);
                ab->b_flags |= ARC_L2_EVICTED;
            }

            /*
             * Tell ARC this no longer exists in L2ARC.
             */
            if (ab->b_l2hdr != NULL) {
                abl2 = ab->b_l2hdr;
                ARCSTAT_INCR(arcstat_l2_asize, -abl2->b_asize);
                ab->b_l2hdr = NULL;
                kmem_free(abl2, sizeof (l2arc_buf_hdr_t));
                arc_space_return(L2HDR_SIZE, ARC_SPACE_L2HDRS);
                ARCSTAT_INCR(arcstat_l2_size, -ab->b_size);
            }
            list_remove(buflist, ab);

            /*
             * This may have been leftover after a
             * failed write.
             */
            ab->b_flags &= ~ARC_L2_WRITING;
        }
        mutex_exit(hash_lock);
    }
    mutex_exit(&l2arc_buflist_mtx);

    vdev_space_update(dev->l2ad_vdev, -(taddr - dev->l2ad_evict), 0, 0);
    dev->l2ad_evict = taddr;
}
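
/*
 * For example, when the feed thread asks to write 8 MiB, the loop above
 * clears the region [l2ad_hand, l2ad_hand + 8 MiB) ahead of the write hand;
 * if fewer than 16 MiB (2 * distance) remain before l2ad_end, it evicts all
 * the way to the end of the device instead, since the write hand is about
 * to wrap back to l2ad_start.
 */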
/*
 * Find and write ARC buffers to the L2ARC device.
 *
 * An ARC_L2_WRITING flag is set so that the L2ARC buffers are not valid
 * for reading until they have completed writing.
 * The headroom_boost is an in-out parameter used to maintain headroom boost
 * state between calls to this function.
 *
 * Returns the number of bytes actually written (which may be smaller than
 * the delta by which the device hand has changed due to alignment).
 */
static uint64_t
l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz,
    boolean_t *headroom_boost)
{
    arc_buf_hdr_t *ab, *ab_prev, *head;
    list_t *list;
    uint64_t write_asize, write_psize, write_sz, headroom,
        buf_compress_minsz;
    void *buf_data;
    kmutex_t *list_lock = NULL;
    boolean_t full;
    l2arc_write_callback_t *cb;
    zio_t *pio, *wzio;
    uint64_t guid = spa_load_guid(spa);
    int try;
    const boolean_t do_headroom_boost = *headroom_boost;

    ASSERT(dev->l2ad_vdev != NULL);

    /* Lower the flag now, we might want to raise it again later. */
    *headroom_boost = B_FALSE;

    pio = NULL;
    write_sz = write_asize = write_psize = 0;
    full = B_FALSE;
    head = kmem_cache_alloc(hdr_cache, KM_PUSHPAGE);
    head->b_flags |= ARC_L2_WRITE_HEAD;

    /*
     * We will want to try to compress buffers that are at least 2x the
     * device sector size.
     */
    buf_compress_minsz = 2 << dev->l2ad_vdev->vdev_ashift;

    /*
     * Copy buffers for L2ARC writing.
     */
    mutex_enter(&l2arc_buflist_mtx);
    for (try = 0; try <= 3; try++) {
        uint64_t passed_sz = 0;

        list = l2arc_list_locked(try, &list_lock);

        /*
         * L2ARC fast warmup.
         *
         * Until the ARC is warm and starts to evict, read from the
         * head of the ARC lists rather than the tail.
         */
        if (arc_warm == B_FALSE)
            ab = list_head(list);
        else
            ab = list_tail(list);

        headroom = target_sz * l2arc_headroom;
        if (do_headroom_boost)
            headroom = (headroom * l2arc_headroom_boost) / 100;

        for (; ab; ab = ab_prev) {
            l2arc_buf_hdr_t *l2hdr;
            kmutex_t *hash_lock;
            uint64_t buf_sz;

            if (arc_warm == B_FALSE)
                ab_prev = list_next(list, ab);
            else
                ab_prev = list_prev(list, ab);

            hash_lock = HDR_LOCK(ab);
            if (!mutex_tryenter(hash_lock)) {
                /*
                 * Skip this buffer rather than waiting.
                 */
                continue;
            }

            passed_sz += ab->b_size;
            if (passed_sz > headroom) {
                /*
                 * Searched too far.
                 */
                mutex_exit(hash_lock);
                break;
            }

            if (!l2arc_write_eligible(guid, ab)) {
                mutex_exit(hash_lock);
                continue;
            }

            if ((write_sz + ab->b_size) > target_sz) {
                full = B_TRUE;
                mutex_exit(hash_lock);
                break;
            }

            if (pio == NULL) {
                /*
                 * Insert a dummy header on the buflist so
                 * l2arc_write_done() can find where the
                 * write buffers begin without searching.
                 */
                list_insert_head(dev->l2ad_buflist, head);

                cb = kmem_alloc(sizeof (l2arc_write_callback_t),
                    KM_PUSHPAGE);
                cb->l2wcb_dev = dev;
                cb->l2wcb_head = head;
                pio = zio_root(spa, l2arc_write_done, cb,
                    ZIO_FLAG_CANFAIL);
            }

            /*
             * Create and add a new L2ARC header.
             */
            l2hdr = kmem_zalloc(sizeof (l2arc_buf_hdr_t),
                KM_PUSHPAGE);
            l2hdr->b_dev = dev;
            arc_space_consume(L2HDR_SIZE, ARC_SPACE_L2HDRS);

            ab->b_flags |= ARC_L2_WRITING;

            /*
             * Temporarily stash the data buffer in b_tmp_cdata.
             * The subsequent write step will pick it up from
             * there. This is because can't access ab->b_buf
             * without holding the hash_lock, which we in turn
             * can't access without holding the ARC list locks
             * (which we want to avoid during compression/writing)
             */
            l2hdr->b_compress = ZIO_COMPRESS_OFF;
            l2hdr->b_asize = ab->b_size;
            l2hdr->b_tmp_cdata = ab->b_buf->b_data;

            buf_sz = ab->b_size;
            ab->b_l2hdr = l2hdr;

            list_insert_head(dev->l2ad_buflist, ab);

            /*
             * Compute and store the buffer cksum before
             * writing.  On debug the cksum is verified first.
             */
            arc_cksum_verify(ab->b_buf);
            arc_cksum_compute(ab->b_buf, B_TRUE);

            mutex_exit(hash_lock);

            write_sz += buf_sz;
        }

        mutex_exit(list_lock);

        if (full == B_TRUE)
            break;
    }

    /* No buffers selected for writing? */
    if (pio == NULL) {
        mutex_exit(&l2arc_buflist_mtx);
        kmem_cache_free(hdr_cache, head);
        return (0);
    }

    /*
     * Now start writing the buffers. We're starting at the write head
     * and work backwards, retracing the course of the buffer selector
     * loop above.
     */
    for (ab = list_prev(dev->l2ad_buflist, head); ab;
        ab = list_prev(dev->l2ad_buflist, ab)) {
        l2arc_buf_hdr_t *l2hdr;
        uint64_t buf_sz;

        /*
         * We shouldn't need to lock the buffer here, since we flagged
         * it as ARC_L2_WRITING in the previous step, but we must take
         * care to only access its L2 cache parameters. In particular,
         * ab->b_buf may be invalid by now due to ARC eviction.
         */
        l2hdr = ab->b_l2hdr;
        l2hdr->b_daddr = dev->l2ad_hand;

        if (!l2arc_nocompress && (ab->b_flags & ARC_L2COMPRESS) &&
            l2hdr->b_asize >= buf_compress_minsz) {
            if (l2arc_compress_buf(l2hdr)) {
                /*
                 * If compression succeeded, enable headroom
                 * boost on the next scan cycle.
                 */
                *headroom_boost = B_TRUE;
            }
        }

        /*
         * Pick up the buffer data we had previously stashed away
         * (and now potentially also compressed).
         */
        buf_data = l2hdr->b_tmp_cdata;
        buf_sz = l2hdr->b_asize;

        /* Compression may have squashed the buffer to zero length. */
        if (buf_sz != 0) {
            uint64_t buf_p_sz;

            wzio = zio_write_phys(pio, dev->l2ad_vdev,
                dev->l2ad_hand, buf_sz, buf_data, ZIO_CHECKSUM_OFF,
                NULL, NULL, ZIO_PRIORITY_ASYNC_WRITE,
                ZIO_FLAG_CANFAIL, B_FALSE);

            DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev,
                zio_t *, wzio);
            (void) zio_nowait(wzio);

            write_asize += buf_sz;
            /*
             * Keep the clock hand suitably device-aligned.
             */
            buf_p_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);
            write_psize += buf_p_sz;
            dev->l2ad_hand += buf_p_sz;
        }
    }

    mutex_exit(&l2arc_buflist_mtx);

    ASSERT3U(write_asize, <=, target_sz);
    ARCSTAT_BUMP(arcstat_l2_writes_sent);
    ARCSTAT_INCR(arcstat_l2_write_bytes, write_asize);
    ARCSTAT_INCR(arcstat_l2_size, write_sz);
    ARCSTAT_INCR(arcstat_l2_asize, write_asize);
    vdev_space_update(dev->l2ad_vdev, write_psize, 0, 0);

    /*
     * Bump device hand to the device start if it is approaching the end.
     * l2arc_evict() will already have evicted ahead for this case.
     */
    if (dev->l2ad_hand >= (dev->l2ad_end - target_sz)) {
        vdev_space_update(dev->l2ad_vdev,
            dev->l2ad_end - dev->l2ad_hand, 0, 0);
        dev->l2ad_hand = dev->l2ad_start;
        dev->l2ad_evict = dev->l2ad_start;
        dev->l2ad_first = B_FALSE;
    }

    dev->l2ad_writing = B_TRUE;
    (void) zio_wait(pio);
    dev->l2ad_writing = B_FALSE;

    return (write_asize);
}
/*
 * Compresses an L2ARC buffer.
 * The data to be compressed must be prefilled in l2hdr->b_tmp_cdata and its
 * size in l2hdr->b_asize. This routine tries to compress the data and
 * depending on the compression result there are three possible outcomes:
 * *) The buffer was incompressible. The original l2hdr contents were left
 *    untouched and are ready for writing to an L2 device.
 * *) The buffer was all-zeros, so there is no need to write it to an L2
 *    device. To indicate this situation b_tmp_cdata is NULL'ed, b_asize is
 *    set to zero and b_compress is set to ZIO_COMPRESS_EMPTY.
 * *) Compression succeeded and b_tmp_cdata was replaced with a temporary
 *    data buffer which holds the compressed data to be written, and b_asize
 *    tells us how much data there is. b_compress is set to the appropriate
 *    compression algorithm. Once writing is done, invoke
 *    l2arc_release_cdata_buf on this l2hdr to free this temporary buffer.
 *
 * Returns B_TRUE if compression succeeded, or B_FALSE if it didn't (the
 * buffer was incompressible).
 */
static boolean_t
l2arc_compress_buf(l2arc_buf_hdr_t *l2hdr)
{
    void *cdata;
    size_t csize, len;

    ASSERT(l2hdr->b_compress == ZIO_COMPRESS_OFF);
    ASSERT(l2hdr->b_tmp_cdata != NULL);

    len = l2hdr->b_asize;
    cdata = zio_data_buf_alloc(len);
    csize = zio_compress_data(ZIO_COMPRESS_LZ4, l2hdr->b_tmp_cdata,
        cdata, l2hdr->b_asize);

    if (csize == 0) {
        /* zero block, indicate that there's nothing to write */
        zio_data_buf_free(cdata, len);
        l2hdr->b_compress = ZIO_COMPRESS_EMPTY;
        l2hdr->b_asize = 0;
        l2hdr->b_tmp_cdata = NULL;
        ARCSTAT_BUMP(arcstat_l2_compress_zeros);
        return (B_TRUE);
    } else if (csize > 0 && csize < len) {
        /*
         * Compression succeeded, we'll keep the cdata around for
         * writing and release it afterwards.
         */
        l2hdr->b_compress = ZIO_COMPRESS_LZ4;
        l2hdr->b_asize = csize;
        l2hdr->b_tmp_cdata = cdata;
        ARCSTAT_BUMP(arcstat_l2_compress_successes);
        return (B_TRUE);
    } else {
        /*
         * Compression failed, release the compressed buffer.
         * l2hdr will be left unmodified.
         */
        zio_data_buf_free(cdata, len);
        ARCSTAT_BUMP(arcstat_l2_compress_failures);
        return (B_FALSE);
    }
}
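
/*
 * For example, an 8 KiB buffer of zeros compresses to csize == 0 and is
 * marked ZIO_COMPRESS_EMPTY, so nothing at all is written for it; an 8 KiB
 * buffer that LZ4 shrinks to 3 KiB is written as 3 KiB with b_compress =
 * ZIO_COMPRESS_LZ4; and a buffer that does not shrink (csize >= len) is
 * written out unmodified.
 */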
/*
 * Decompresses a zio read back from an l2arc device. On success, the
 * underlying zio's io_data buffer is overwritten by the uncompressed
 * version. On decompression error (corrupt compressed stream), the
 * zio->io_error value is set to signal an I/O error.
 *
 * Please note that the compressed data stream is not checksummed, so
 * if the underlying device is experiencing data corruption, we may feed
 * corrupt data to the decompressor, so the decompressor needs to be
 * able to handle this situation (LZ4 does).
 */
static void
l2arc_decompress_zio(zio_t *zio, arc_buf_hdr_t *hdr, enum zio_compress c)
{
    uint64_t csize;
    void *cdata;

    ASSERT(L2ARC_IS_VALID_COMPRESS(c));

    if (zio->io_error != 0) {
        /*
         * An io error has occured, just restore the original io
         * size in preparation for a main pool read.
         */
        zio->io_orig_size = zio->io_size = hdr->b_size;
        return;
    }

    if (c == ZIO_COMPRESS_EMPTY) {
        /*
         * An empty buffer results in a null zio, which means we
         * need to fill its io_data after we're done restoring the
         * buffer's contents.
         */
        ASSERT(hdr->b_buf != NULL);
        bzero(hdr->b_buf->b_data, hdr->b_size);
        zio->io_data = zio->io_orig_data = hdr->b_buf->b_data;
    } else {
        ASSERT(zio->io_data != NULL);
        /*
         * We copy the compressed data from the start of the arc buffer
         * (the zio_read will have pulled in only what we need, the
         * rest is garbage which we will overwrite at decompression)
         * and then decompress back to the ARC data buffer. This way we
         * can minimize copying by simply decompressing back over the
         * original compressed data (rather than decompressing to an
         * aux buffer and then copying back the uncompressed buffer,
         * which is likely to be much larger).
         */
        csize = zio->io_size;
        cdata = zio_data_buf_alloc(csize);
        bcopy(zio->io_data, cdata, csize);
        if (zio_decompress_data(c, cdata, zio->io_data, csize,
            hdr->b_size) != 0)
            zio->io_error = SET_ERROR(EIO);
        zio_data_buf_free(cdata, csize);
    }

    /* Restore the expected uncompressed IO size. */
    zio->io_orig_size = zio->io_size = hdr->b_size;
}
/*
 * Releases the temporary b_tmp_cdata buffer in an l2arc header structure.
 * This buffer serves as a temporary holder of compressed data while
 * the buffer entry is being written to an l2arc device. Once that is
 * done, we can dispose of it.
 */
static void
l2arc_release_cdata_buf(arc_buf_hdr_t *ab)
{
    l2arc_buf_hdr_t *l2hdr = ab->b_l2hdr;

    if (l2hdr->b_compress == ZIO_COMPRESS_LZ4) {
        /*
         * If the data was compressed, then we've allocated a
         * temporary buffer for it, so now we need to release it.
         */
        ASSERT(l2hdr->b_tmp_cdata != NULL);
        zio_data_buf_free(l2hdr->b_tmp_cdata, ab->b_size);
    }
    l2hdr->b_tmp_cdata = NULL;
}
/*
 * This thread feeds the L2ARC at regular intervals.  This is the beating
 * heart of the L2ARC.
 */
static void
l2arc_feed_thread(void)
{
    callb_cpr_t cpr;
    l2arc_dev_t *dev;
    spa_t *spa;
    uint64_t size, wrote;
    clock_t begin, next = ddi_get_lbolt();
    boolean_t headroom_boost = B_FALSE;

    CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG);

    mutex_enter(&l2arc_feed_thr_lock);

    while (l2arc_thread_exit == 0) {
        CALLB_CPR_SAFE_BEGIN(&cpr);
        (void) cv_timedwait_interruptible(&l2arc_feed_thr_cv,
            &l2arc_feed_thr_lock, next);
        CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock);
        next = ddi_get_lbolt() + hz;

        /*
         * Quick check for L2ARC devices.
         */
        mutex_enter(&l2arc_dev_mtx);
        if (l2arc_ndev == 0) {
            mutex_exit(&l2arc_dev_mtx);
            continue;
        }
        mutex_exit(&l2arc_dev_mtx);
        begin = ddi_get_lbolt();

        /*
         * This selects the next l2arc device to write to, and in
         * doing so the next spa to feed from: dev->l2ad_spa.   This
         * will return NULL if there are now no l2arc devices or if
         * they are all faulted.
         *
         * If a device is returned, its spa's config lock is also
         * held to prevent device removal.  l2arc_dev_get_next()
         * will grab and release l2arc_dev_mtx.
         */
        if ((dev = l2arc_dev_get_next()) == NULL)
            continue;

        spa = dev->l2ad_spa;
        ASSERT(spa != NULL);

        /*
         * If the pool is read-only then force the feed thread to
         * sleep a little longer.
         */
        if (!spa_writeable(spa)) {
            next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz;
            spa_config_exit(spa, SCL_L2ARC, dev);
            continue;
        }

        /*
         * Avoid contributing to memory pressure.
         */
        if (arc_no_grow) {
            ARCSTAT_BUMP(arcstat_l2_abort_lowmem);
            spa_config_exit(spa, SCL_L2ARC, dev);
            continue;
        }

        ARCSTAT_BUMP(arcstat_l2_feeds);

        size = l2arc_write_size();

        /*
         * Evict L2ARC buffers that will be overwritten.
         */
        l2arc_evict(dev, size, B_FALSE);

        /*
         * Write ARC buffers.
         */
        wrote = l2arc_write_buffers(spa, dev, size, &headroom_boost);

        /*
         * Calculate interval between writes.
         */
        next = l2arc_write_interval(begin, size, wrote);
        spa_config_exit(spa, SCL_L2ARC, dev);
    }

    l2arc_thread_exit = 0;
    cv_broadcast(&l2arc_feed_thr_cv);
    CALLB_CPR_EXIT(&cpr);	/* drops l2arc_feed_thr_lock */
    thread_exit();
}
boolean_t
l2arc_vdev_present(vdev_t *vd)
{
    l2arc_dev_t *dev;

    mutex_enter(&l2arc_dev_mtx);
    for (dev = list_head(l2arc_dev_list); dev != NULL;
        dev = list_next(l2arc_dev_list, dev)) {
        if (dev->l2ad_vdev == vd)
            break;
    }
    mutex_exit(&l2arc_dev_mtx);

    return (dev != NULL);
}
/*
 * Add a vdev for use by the L2ARC.  By this point the spa has already
 * validated the vdev and opened it.
 */
void
l2arc_add_vdev(spa_t *spa, vdev_t *vd)
{
    l2arc_dev_t *adddev;

    ASSERT(!l2arc_vdev_present(vd));

    /*
     * Create a new l2arc device entry.
     */
    adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
    adddev->l2ad_spa = spa;
    adddev->l2ad_vdev = vd;
    adddev->l2ad_start = VDEV_LABEL_START_SIZE;
    adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
    adddev->l2ad_hand = adddev->l2ad_start;
    adddev->l2ad_evict = adddev->l2ad_start;
    adddev->l2ad_first = B_TRUE;
    adddev->l2ad_writing = B_FALSE;
    list_link_init(&adddev->l2ad_node);

    /*
     * This is a list of all ARC buffers that are still valid on the
     * device.
     */
    adddev->l2ad_buflist = kmem_zalloc(sizeof (list_t), KM_SLEEP);
    list_create(adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
        offsetof(arc_buf_hdr_t, b_l2node));

    vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);

    /*
     * Add device to global list
     */
    mutex_enter(&l2arc_dev_mtx);
    list_insert_head(l2arc_dev_list, adddev);
    atomic_inc_64(&l2arc_ndev);
    mutex_exit(&l2arc_dev_mtx);
}
/*
 * Remove a vdev from the L2ARC.
 */
void
l2arc_remove_vdev(vdev_t *vd)
{
    l2arc_dev_t *dev, *nextdev, *remdev = NULL;

    /*
     * Find the device by vdev
     */
    mutex_enter(&l2arc_dev_mtx);
    for (dev = list_head(l2arc_dev_list); dev; dev = nextdev) {
        nextdev = list_next(l2arc_dev_list, dev);
        if (vd == dev->l2ad_vdev) {
            remdev = dev;
            break;
        }
    }
    ASSERT(remdev != NULL);

    /*
     * Remove device from global list
     */
    list_remove(l2arc_dev_list, remdev);
    l2arc_dev_last = NULL;	/* may have been invalidated */
    atomic_dec_64(&l2arc_ndev);
    mutex_exit(&l2arc_dev_mtx);

    /*
     * Clear all buflists and ARC references.  L2ARC device flush.
     */
    l2arc_evict(remdev, 0, B_TRUE);
    list_destroy(remdev->l2ad_buflist);
    kmem_free(remdev->l2ad_buflist, sizeof (list_t));
    kmem_free(remdev, sizeof (l2arc_dev_t));
}
void
l2arc_init(void)
{
    l2arc_thread_exit = 0;
    l2arc_ndev = 0;
    l2arc_writes_sent = 0;
    l2arc_writes_done = 0;

    mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL);
    cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL);
    mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&l2arc_buflist_mtx, NULL, MUTEX_DEFAULT, NULL);
    mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL);

    l2arc_dev_list = &L2ARC_dev_list;
    l2arc_free_on_write = &L2ARC_free_on_write;
    list_create(l2arc_dev_list, sizeof (l2arc_dev_t),
        offsetof(l2arc_dev_t, l2ad_node));
    list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t),
        offsetof(l2arc_data_free_t, l2df_list_node));
}

void
l2arc_fini(void)
{
    /*
     * This is called from dmu_fini(), which is called from spa_fini();
     * Because of this, we can assume that all l2arc devices have
     * already been removed when the pools themselves were removed.
     */

    l2arc_do_free_on_write();

    mutex_destroy(&l2arc_feed_thr_lock);
    cv_destroy(&l2arc_feed_thr_cv);
    mutex_destroy(&l2arc_dev_mtx);
    mutex_destroy(&l2arc_buflist_mtx);
    mutex_destroy(&l2arc_free_on_write_mtx);

    list_destroy(l2arc_dev_list);
    list_destroy(l2arc_free_on_write);
}

void
l2arc_start(void)
{
    if (!(spa_mode_global & FWRITE))
        return;

    (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0,
        TS_RUN, minclsyspri);
}

void
l2arc_stop(void)
{
    if (!(spa_mode_global & FWRITE))
        return;

    mutex_enter(&l2arc_feed_thr_lock);
    cv_signal(&l2arc_feed_thr_cv);	/* kick thread out of startup */
    l2arc_thread_exit = 1;
    while (l2arc_thread_exit != 0)
        cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
    mutex_exit(&l2arc_feed_thr_lock);
}
#if defined(_KERNEL) && defined(HAVE_SPL)
EXPORT_SYMBOL(arc_read);
EXPORT_SYMBOL(arc_buf_remove_ref);
EXPORT_SYMBOL(arc_buf_info);
EXPORT_SYMBOL(arc_getbuf_func);
EXPORT_SYMBOL(arc_add_prune_callback);
EXPORT_SYMBOL(arc_remove_prune_callback);

module_param(zfs_arc_min, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_min, "Min arc size");

module_param(zfs_arc_max, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_max, "Max arc size");

module_param(zfs_arc_meta_limit, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_limit, "Meta limit for arc size");

module_param(zfs_arc_meta_prune, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_prune, "Bytes of meta data to prune");

module_param(zfs_arc_grow_retry, int, 0644);
MODULE_PARM_DESC(zfs_arc_grow_retry, "Seconds before growing arc size");

module_param(zfs_arc_shrink_shift, int, 0644);
MODULE_PARM_DESC(zfs_arc_shrink_shift, "log2(fraction of arc to reclaim)");

module_param(zfs_arc_p_min_shift, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_min_shift, "arc_c shift to calc min/max arc_p");

module_param(zfs_disable_dup_eviction, int, 0644);
MODULE_PARM_DESC(zfs_disable_dup_eviction, "disable duplicate buffer eviction");

module_param(zfs_arc_memory_throttle_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_memory_throttle_disable, "disable memory throttle");

module_param(zfs_arc_min_prefetch_lifespan, int, 0644);
MODULE_PARM_DESC(zfs_arc_min_prefetch_lifespan, "Min life of prefetch block");

module_param(l2arc_write_max, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_max, "Max write bytes per interval");

module_param(l2arc_write_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_boost, "Extra write bytes during device warmup");

module_param(l2arc_headroom, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom, "Number of max device writes to precache");

module_param(l2arc_headroom_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom_boost, "Compressed l2arc_headroom multiplier");

module_param(l2arc_feed_secs, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_secs, "Seconds between L2ARC writing");

module_param(l2arc_feed_min_ms, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_min_ms, "Min feed interval in milliseconds");

module_param(l2arc_noprefetch, int, 0644);
MODULE_PARM_DESC(l2arc_noprefetch, "Skip caching prefetched buffers");

module_param(l2arc_nocompress, int, 0644);
MODULE_PARM_DESC(l2arc_nocompress, "Skip compressing L2ARC buffers");

module_param(l2arc_feed_again, int, 0644);
MODULE_PARM_DESC(l2arc_feed_again, "Turbo L2ARC warmup");

module_param(l2arc_norw, int, 0644);
MODULE_PARM_DESC(l2arc_norw, "No reads during writes");
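
/*
 * All of the parameters above are registered with mode 0644, so (assuming
 * the usual Linux module parameter layout) they can be read and, where it
 * makes sense, adjusted at runtime through sysfs, for example:
 *
 *	cat /sys/module/zfs/parameters/l2arc_write_max
 *	echo 16777216 > /sys/module/zfs/parameters/l2arc_write_max
 *
 * or set at module load time, e.g. "modprobe zfs l2arc_noprefetch=0".
 */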