4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
23 * Copyright (c) 2011, 2014 by Delphix. All rights reserved.
24 * Copyright (c) 2013 by Saso Kiselkov. All rights reserved.
25 * Copyright 2014 Nexenta Systems, Inc. All rights reserved.
29 * DVA-based Adjustable Replacement Cache
31 * While much of the theory of operation used here is
32 * based on the self-tuning, low overhead replacement cache
33 * presented by Megiddo and Modha at FAST 2003, there are some
34 * significant differences:
36 * 1. The Megiddo and Modha model assumes any page is evictable.
37 * Pages in its cache cannot be "locked" into memory. This makes
38 * the eviction algorithm simple: evict the last page in the list.
39 * This also make the performance characteristics easy to reason
40 * about. Our cache is not so simple. At any given moment, some
41 * subset of the blocks in the cache are un-evictable because we
42 * have handed out a reference to them. Blocks are only evictable
43 * when there are no external references active. This makes
44 * eviction far more problematic: we choose to evict the evictable
45 * blocks that are the "lowest" in the list.
47 * There are times when it is not possible to evict the requested
48 * space. In these circumstances we are unable to adjust the cache
49 * size. To prevent the cache growing unbounded at these times we
50 * implement a "cache throttle" that slows the flow of new data
51 * into the cache until we can make space available.
53 * 2. The Megiddo and Modha model assumes a fixed cache size.
54 * Pages are evicted when the cache is full and there is a cache
55 * miss. Our model has a variable sized cache. It grows with
56 * high use, but also tries to react to memory pressure from the
57 * operating system: decreasing its size when system memory is
60 * 3. The Megiddo and Modha model assumes a fixed page size. All
61 * elements of the cache are therefore exactly the same size. So
62 * when adjusting the cache size following a cache miss, its simply
63 * a matter of choosing a single page to evict. In our model, we
64 * have variable sized cache blocks (rangeing from 512 bytes to
65 * 128K bytes). We therefore choose a set of blocks to evict to make
66 * space for a cache miss that approximates as closely as possible
67 * the space used by the new block.
69 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
70 * by N. Megiddo & D. Modha, FAST 2003
76 * A new reference to a cache buffer can be obtained in two
77 * ways: 1) via a hash table lookup using the DVA as a key,
78 * or 2) via one of the ARC lists. The arc_read() interface
79 * uses method 1, while the internal arc algorithms for
80 * adjusting the cache use method 2. We therefore provide two
81 * types of locks: 1) the hash table lock array, and 2) the
84 * Buffers do not have their own mutexes, rather they rely on the
85 * hash table mutexes for the bulk of their protection (i.e. most
86 * fields in the arc_buf_hdr_t are protected by these mutexes).
88 * buf_hash_find() returns the appropriate mutex (held) when it
89 * locates the requested buffer in the hash table. It returns
90 * NULL for the mutex if the buffer was not in the table.
92 * buf_hash_remove() expects the appropriate hash mutex to be
93 * already held before it is invoked.
95 * Each arc state also has a mutex which is used to protect the
96 * buffer list associated with the state. When attempting to
97 * obtain a hash table lock while holding an arc list lock you
98 * must use: mutex_tryenter() to avoid deadlock. Also note that
99 * the active state mutex must be held before the ghost state mutex.
101 * Arc buffers may have an associated eviction callback function.
102 * This function will be invoked prior to removing the buffer (e.g.
103 * in arc_do_user_evicts()). Note however that the data associated
104 * with the buffer may be evicted prior to the callback. The callback
105 * must be made with *no locks held* (to prevent deadlock). Additionally,
106 * the users of callbacks must ensure that their private data is
107 * protected from simultaneous callbacks from arc_clear_callback()
108 * and arc_do_user_evicts().
110 * It as also possible to register a callback which is run when the
111 * arc_meta_limit is reached and no buffers can be safely evicted. In
112 * this case the arc user should drop a reference on some arc buffers so
113 * they can be reclaimed and the arc_meta_limit honored. For example,
114 * when using the ZPL each dentry holds a references on a znode. These
115 * dentries must be pruned before the arc buffer holding the znode can
118 * Note that the majority of the performance stats are manipulated
119 * with atomic operations.
121 * The L2ARC uses the l2ad_mtx on each vdev for the following:
123 * - L2ARC buflist creation
124 * - L2ARC buflist eviction
125 * - L2ARC write completion, which walks L2ARC buflists
126 * - ARC header destruction, as it removes from L2ARC buflists
127 * - ARC header release, as it removes from L2ARC buflists
132 #include <sys/zio_compress.h>
133 #include <sys/zfs_context.h>
135 #include <sys/vdev.h>
136 #include <sys/vdev_impl.h>
137 #include <sys/dsl_pool.h>
138 #include <sys/multilist.h>
140 #include <sys/vmsystm.h>
142 #include <sys/fs/swapnode.h>
144 #include <linux/mm_compat.h>
146 #include <sys/callb.h>
147 #include <sys/kstat.h>
148 #include <sys/dmu_tx.h>
149 #include <zfs_fletcher.h>
150 #include <sys/arc_impl.h>
151 #include <sys/trace_arc.h>
154 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
155 boolean_t arc_watch
= B_FALSE
;
158 static kmutex_t arc_reclaim_lock
;
159 static kcondvar_t arc_reclaim_thread_cv
;
160 static boolean_t arc_reclaim_thread_exit
;
161 static kcondvar_t arc_reclaim_waiters_cv
;
163 static kmutex_t arc_user_evicts_lock
;
164 static kcondvar_t arc_user_evicts_cv
;
165 static boolean_t arc_user_evicts_thread_exit
;
167 /* number of objects to prune from caches when arc_meta_limit is reached */
168 int zfs_arc_meta_prune
= 10000;
170 /* The preferred strategy to employ when arc_meta_limit is reached */
171 int zfs_arc_meta_strategy
= ARC_STRATEGY_META_BALANCED
;
173 typedef enum arc_reclaim_strategy
{
174 ARC_RECLAIM_AGGR
, /* Aggressive reclaim strategy */
175 ARC_RECLAIM_CONS
/* Conservative reclaim strategy */
176 } arc_reclaim_strategy_t
;
179 * The number of headers to evict in arc_evict_state_impl() before
180 * dropping the sublist lock and evicting from another sublist. A lower
181 * value means we're more likely to evict the "correct" header (i.e. the
182 * oldest header in the arc state), but comes with higher overhead
183 * (i.e. more invocations of arc_evict_state_impl()).
185 int zfs_arc_evict_batch_limit
= 10;
188 * The number of sublists used for each of the arc state lists. If this
189 * is not set to a suitable value by the user, it will be configured to
190 * the number of CPUs on the system in arc_init().
192 int zfs_arc_num_sublists_per_state
= 0;
194 /* number of seconds before growing cache again */
195 int zfs_arc_grow_retry
= 5;
197 /* shift of arc_c for calculating overflow limit in arc_get_data_buf */
198 int zfs_arc_overflow_shift
= 8;
200 /* disable anon data aggressively growing arc_p */
201 int zfs_arc_p_aggressive_disable
= 1;
203 /* disable arc_p adapt dampener in arc_adapt */
204 int zfs_arc_p_dampener_disable
= 1;
206 /* log2(fraction of arc to reclaim) */
207 int zfs_arc_shrink_shift
= 5;
210 * minimum lifespan of a prefetch block in clock ticks
211 * (initialized in arc_init())
213 int zfs_arc_min_prefetch_lifespan
= HZ
;
215 /* disable arc proactive arc throttle due to low memory */
216 int zfs_arc_memory_throttle_disable
= 1;
218 /* disable duplicate buffer eviction */
219 int zfs_disable_dup_eviction
= 0;
221 /* average block used to size buf_hash_table */
222 int zfs_arc_average_blocksize
= 8 * 1024; /* 8KB */
225 * minimum lifespan of a prefetch block in clock ticks
226 * (initialized in arc_init())
228 static int arc_min_prefetch_lifespan
;
231 * If this percent of memory is free, don't throttle.
233 int arc_lotsfree_percent
= 10;
237 /* expiration time for arc_no_grow */
238 static clock_t arc_grow_time
= 0;
241 * The arc has filled available memory and has now warmed up.
243 static boolean_t arc_warm
;
246 * These tunables are for performance analysis.
248 unsigned long zfs_arc_max
= 0;
249 unsigned long zfs_arc_min
= 0;
250 unsigned long zfs_arc_meta_limit
= 0;
251 unsigned long zfs_arc_meta_min
= 0;
254 * Limit the number of restarts in arc_adjust_meta()
256 unsigned long zfs_arc_meta_adjust_restarts
= 4096;
259 static arc_state_t ARC_anon
;
260 static arc_state_t ARC_mru
;
261 static arc_state_t ARC_mru_ghost
;
262 static arc_state_t ARC_mfu
;
263 static arc_state_t ARC_mfu_ghost
;
264 static arc_state_t ARC_l2c_only
;
266 typedef struct arc_stats
{
267 kstat_named_t arcstat_hits
;
268 kstat_named_t arcstat_misses
;
269 kstat_named_t arcstat_demand_data_hits
;
270 kstat_named_t arcstat_demand_data_misses
;
271 kstat_named_t arcstat_demand_metadata_hits
;
272 kstat_named_t arcstat_demand_metadata_misses
;
273 kstat_named_t arcstat_prefetch_data_hits
;
274 kstat_named_t arcstat_prefetch_data_misses
;
275 kstat_named_t arcstat_prefetch_metadata_hits
;
276 kstat_named_t arcstat_prefetch_metadata_misses
;
277 kstat_named_t arcstat_mru_hits
;
278 kstat_named_t arcstat_mru_ghost_hits
;
279 kstat_named_t arcstat_mfu_hits
;
280 kstat_named_t arcstat_mfu_ghost_hits
;
281 kstat_named_t arcstat_deleted
;
283 * Number of buffers that could not be evicted because the hash lock
284 * was held by another thread. The lock may not necessarily be held
285 * by something using the same buffer, since hash locks are shared
286 * by multiple buffers.
288 kstat_named_t arcstat_mutex_miss
;
290 * Number of buffers skipped because they have I/O in progress, are
291 * indrect prefetch buffers that have not lived long enough, or are
292 * not from the spa we're trying to evict from.
294 kstat_named_t arcstat_evict_skip
;
296 * Number of times arc_evict_state() was unable to evict enough
297 * buffers to reach its target amount.
299 kstat_named_t arcstat_evict_not_enough
;
300 kstat_named_t arcstat_evict_l2_cached
;
301 kstat_named_t arcstat_evict_l2_eligible
;
302 kstat_named_t arcstat_evict_l2_ineligible
;
303 kstat_named_t arcstat_evict_l2_skip
;
304 kstat_named_t arcstat_hash_elements
;
305 kstat_named_t arcstat_hash_elements_max
;
306 kstat_named_t arcstat_hash_collisions
;
307 kstat_named_t arcstat_hash_chains
;
308 kstat_named_t arcstat_hash_chain_max
;
309 kstat_named_t arcstat_p
;
310 kstat_named_t arcstat_c
;
311 kstat_named_t arcstat_c_min
;
312 kstat_named_t arcstat_c_max
;
313 kstat_named_t arcstat_size
;
314 kstat_named_t arcstat_hdr_size
;
315 kstat_named_t arcstat_data_size
;
316 kstat_named_t arcstat_meta_size
;
317 kstat_named_t arcstat_other_size
;
318 kstat_named_t arcstat_anon_size
;
319 kstat_named_t arcstat_anon_evict_data
;
320 kstat_named_t arcstat_anon_evict_metadata
;
321 kstat_named_t arcstat_mru_size
;
322 kstat_named_t arcstat_mru_evict_data
;
323 kstat_named_t arcstat_mru_evict_metadata
;
324 kstat_named_t arcstat_mru_ghost_size
;
325 kstat_named_t arcstat_mru_ghost_evict_data
;
326 kstat_named_t arcstat_mru_ghost_evict_metadata
;
327 kstat_named_t arcstat_mfu_size
;
328 kstat_named_t arcstat_mfu_evict_data
;
329 kstat_named_t arcstat_mfu_evict_metadata
;
330 kstat_named_t arcstat_mfu_ghost_size
;
331 kstat_named_t arcstat_mfu_ghost_evict_data
;
332 kstat_named_t arcstat_mfu_ghost_evict_metadata
;
333 kstat_named_t arcstat_l2_hits
;
334 kstat_named_t arcstat_l2_misses
;
335 kstat_named_t arcstat_l2_feeds
;
336 kstat_named_t arcstat_l2_rw_clash
;
337 kstat_named_t arcstat_l2_read_bytes
;
338 kstat_named_t arcstat_l2_write_bytes
;
339 kstat_named_t arcstat_l2_writes_sent
;
340 kstat_named_t arcstat_l2_writes_done
;
341 kstat_named_t arcstat_l2_writes_error
;
342 kstat_named_t arcstat_l2_writes_lock_retry
;
343 kstat_named_t arcstat_l2_evict_lock_retry
;
344 kstat_named_t arcstat_l2_evict_reading
;
345 kstat_named_t arcstat_l2_evict_l1cached
;
346 kstat_named_t arcstat_l2_free_on_write
;
347 kstat_named_t arcstat_l2_cdata_free_on_write
;
348 kstat_named_t arcstat_l2_abort_lowmem
;
349 kstat_named_t arcstat_l2_cksum_bad
;
350 kstat_named_t arcstat_l2_io_error
;
351 kstat_named_t arcstat_l2_size
;
352 kstat_named_t arcstat_l2_asize
;
353 kstat_named_t arcstat_l2_hdr_size
;
354 kstat_named_t arcstat_l2_compress_successes
;
355 kstat_named_t arcstat_l2_compress_zeros
;
356 kstat_named_t arcstat_l2_compress_failures
;
357 kstat_named_t arcstat_memory_throttle_count
;
358 kstat_named_t arcstat_duplicate_buffers
;
359 kstat_named_t arcstat_duplicate_buffers_size
;
360 kstat_named_t arcstat_duplicate_reads
;
361 kstat_named_t arcstat_memory_direct_count
;
362 kstat_named_t arcstat_memory_indirect_count
;
363 kstat_named_t arcstat_no_grow
;
364 kstat_named_t arcstat_tempreserve
;
365 kstat_named_t arcstat_loaned_bytes
;
366 kstat_named_t arcstat_prune
;
367 kstat_named_t arcstat_meta_used
;
368 kstat_named_t arcstat_meta_limit
;
369 kstat_named_t arcstat_meta_max
;
370 kstat_named_t arcstat_meta_min
;
373 static arc_stats_t arc_stats
= {
374 { "hits", KSTAT_DATA_UINT64
},
375 { "misses", KSTAT_DATA_UINT64
},
376 { "demand_data_hits", KSTAT_DATA_UINT64
},
377 { "demand_data_misses", KSTAT_DATA_UINT64
},
378 { "demand_metadata_hits", KSTAT_DATA_UINT64
},
379 { "demand_metadata_misses", KSTAT_DATA_UINT64
},
380 { "prefetch_data_hits", KSTAT_DATA_UINT64
},
381 { "prefetch_data_misses", KSTAT_DATA_UINT64
},
382 { "prefetch_metadata_hits", KSTAT_DATA_UINT64
},
383 { "prefetch_metadata_misses", KSTAT_DATA_UINT64
},
384 { "mru_hits", KSTAT_DATA_UINT64
},
385 { "mru_ghost_hits", KSTAT_DATA_UINT64
},
386 { "mfu_hits", KSTAT_DATA_UINT64
},
387 { "mfu_ghost_hits", KSTAT_DATA_UINT64
},
388 { "deleted", KSTAT_DATA_UINT64
},
389 { "mutex_miss", KSTAT_DATA_UINT64
},
390 { "evict_skip", KSTAT_DATA_UINT64
},
391 { "evict_not_enough", KSTAT_DATA_UINT64
},
392 { "evict_l2_cached", KSTAT_DATA_UINT64
},
393 { "evict_l2_eligible", KSTAT_DATA_UINT64
},
394 { "evict_l2_ineligible", KSTAT_DATA_UINT64
},
395 { "evict_l2_skip", KSTAT_DATA_UINT64
},
396 { "hash_elements", KSTAT_DATA_UINT64
},
397 { "hash_elements_max", KSTAT_DATA_UINT64
},
398 { "hash_collisions", KSTAT_DATA_UINT64
},
399 { "hash_chains", KSTAT_DATA_UINT64
},
400 { "hash_chain_max", KSTAT_DATA_UINT64
},
401 { "p", KSTAT_DATA_UINT64
},
402 { "c", KSTAT_DATA_UINT64
},
403 { "c_min", KSTAT_DATA_UINT64
},
404 { "c_max", KSTAT_DATA_UINT64
},
405 { "size", KSTAT_DATA_UINT64
},
406 { "hdr_size", KSTAT_DATA_UINT64
},
407 { "data_size", KSTAT_DATA_UINT64
},
408 { "meta_size", KSTAT_DATA_UINT64
},
409 { "other_size", KSTAT_DATA_UINT64
},
410 { "anon_size", KSTAT_DATA_UINT64
},
411 { "anon_evict_data", KSTAT_DATA_UINT64
},
412 { "anon_evict_metadata", KSTAT_DATA_UINT64
},
413 { "mru_size", KSTAT_DATA_UINT64
},
414 { "mru_evict_data", KSTAT_DATA_UINT64
},
415 { "mru_evict_metadata", KSTAT_DATA_UINT64
},
416 { "mru_ghost_size", KSTAT_DATA_UINT64
},
417 { "mru_ghost_evict_data", KSTAT_DATA_UINT64
},
418 { "mru_ghost_evict_metadata", KSTAT_DATA_UINT64
},
419 { "mfu_size", KSTAT_DATA_UINT64
},
420 { "mfu_evict_data", KSTAT_DATA_UINT64
},
421 { "mfu_evict_metadata", KSTAT_DATA_UINT64
},
422 { "mfu_ghost_size", KSTAT_DATA_UINT64
},
423 { "mfu_ghost_evict_data", KSTAT_DATA_UINT64
},
424 { "mfu_ghost_evict_metadata", KSTAT_DATA_UINT64
},
425 { "l2_hits", KSTAT_DATA_UINT64
},
426 { "l2_misses", KSTAT_DATA_UINT64
},
427 { "l2_feeds", KSTAT_DATA_UINT64
},
428 { "l2_rw_clash", KSTAT_DATA_UINT64
},
429 { "l2_read_bytes", KSTAT_DATA_UINT64
},
430 { "l2_write_bytes", KSTAT_DATA_UINT64
},
431 { "l2_writes_sent", KSTAT_DATA_UINT64
},
432 { "l2_writes_done", KSTAT_DATA_UINT64
},
433 { "l2_writes_error", KSTAT_DATA_UINT64
},
434 { "l2_writes_lock_retry", KSTAT_DATA_UINT64
},
435 { "l2_evict_lock_retry", KSTAT_DATA_UINT64
},
436 { "l2_evict_reading", KSTAT_DATA_UINT64
},
437 { "l2_evict_l1cached", KSTAT_DATA_UINT64
},
438 { "l2_free_on_write", KSTAT_DATA_UINT64
},
439 { "l2_cdata_free_on_write", KSTAT_DATA_UINT64
},
440 { "l2_abort_lowmem", KSTAT_DATA_UINT64
},
441 { "l2_cksum_bad", KSTAT_DATA_UINT64
},
442 { "l2_io_error", KSTAT_DATA_UINT64
},
443 { "l2_size", KSTAT_DATA_UINT64
},
444 { "l2_asize", KSTAT_DATA_UINT64
},
445 { "l2_hdr_size", KSTAT_DATA_UINT64
},
446 { "l2_compress_successes", KSTAT_DATA_UINT64
},
447 { "l2_compress_zeros", KSTAT_DATA_UINT64
},
448 { "l2_compress_failures", KSTAT_DATA_UINT64
},
449 { "memory_throttle_count", KSTAT_DATA_UINT64
},
450 { "duplicate_buffers", KSTAT_DATA_UINT64
},
451 { "duplicate_buffers_size", KSTAT_DATA_UINT64
},
452 { "duplicate_reads", KSTAT_DATA_UINT64
},
453 { "memory_direct_count", KSTAT_DATA_UINT64
},
454 { "memory_indirect_count", KSTAT_DATA_UINT64
},
455 { "arc_no_grow", KSTAT_DATA_UINT64
},
456 { "arc_tempreserve", KSTAT_DATA_UINT64
},
457 { "arc_loaned_bytes", KSTAT_DATA_UINT64
},
458 { "arc_prune", KSTAT_DATA_UINT64
},
459 { "arc_meta_used", KSTAT_DATA_UINT64
},
460 { "arc_meta_limit", KSTAT_DATA_UINT64
},
461 { "arc_meta_max", KSTAT_DATA_UINT64
},
462 { "arc_meta_min", KSTAT_DATA_UINT64
},
465 #define ARCSTAT(stat) (arc_stats.stat.value.ui64)
467 #define ARCSTAT_INCR(stat, val) \
468 atomic_add_64(&arc_stats.stat.value.ui64, (val))
470 #define ARCSTAT_BUMP(stat) ARCSTAT_INCR(stat, 1)
471 #define ARCSTAT_BUMPDOWN(stat) ARCSTAT_INCR(stat, -1)
473 #define ARCSTAT_MAX(stat, val) { \
475 while ((val) > (m = arc_stats.stat.value.ui64) && \
476 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
480 #define ARCSTAT_MAXSTAT(stat) \
481 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
484 * We define a macro to allow ARC hits/misses to be easily broken down by
485 * two separate conditions, giving a total of four different subtypes for
486 * each of hits and misses (so eight statistics total).
488 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
491 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
493 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
497 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
499 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
504 static arc_state_t
*arc_anon
;
505 static arc_state_t
*arc_mru
;
506 static arc_state_t
*arc_mru_ghost
;
507 static arc_state_t
*arc_mfu
;
508 static arc_state_t
*arc_mfu_ghost
;
509 static arc_state_t
*arc_l2c_only
;
512 * There are several ARC variables that are critical to export as kstats --
513 * but we don't want to have to grovel around in the kstat whenever we wish to
514 * manipulate them. For these variables, we therefore define them to be in
515 * terms of the statistic variable. This assures that we are not introducing
516 * the possibility of inconsistency by having shadow copies of the variables,
517 * while still allowing the code to be readable.
519 #define arc_size ARCSTAT(arcstat_size) /* actual total arc size */
520 #define arc_p ARCSTAT(arcstat_p) /* target size of MRU */
521 #define arc_c ARCSTAT(arcstat_c) /* target size of cache */
522 #define arc_c_min ARCSTAT(arcstat_c_min) /* min target cache size */
523 #define arc_c_max ARCSTAT(arcstat_c_max) /* max target cache size */
524 #define arc_no_grow ARCSTAT(arcstat_no_grow)
525 #define arc_tempreserve ARCSTAT(arcstat_tempreserve)
526 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes)
527 #define arc_meta_limit ARCSTAT(arcstat_meta_limit) /* max size for metadata */
528 #define arc_meta_min ARCSTAT(arcstat_meta_min) /* min size for metadata */
529 #define arc_meta_used ARCSTAT(arcstat_meta_used) /* size of metadata */
530 #define arc_meta_max ARCSTAT(arcstat_meta_max) /* max size of metadata */
532 #define L2ARC_IS_VALID_COMPRESS(_c_) \
533 ((_c_) == ZIO_COMPRESS_LZ4 || (_c_) == ZIO_COMPRESS_EMPTY)
535 static list_t arc_prune_list
;
536 static kmutex_t arc_prune_mtx
;
537 static taskq_t
*arc_prune_taskq
;
538 static arc_buf_t
*arc_eviction_list
;
539 static arc_buf_hdr_t arc_eviction_hdr
;
541 #define GHOST_STATE(state) \
542 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
543 (state) == arc_l2c_only)
545 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
546 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
547 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR)
548 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH)
549 #define HDR_FREED_IN_READ(hdr) ((hdr)->b_flags & ARC_FLAG_FREED_IN_READ)
550 #define HDR_BUF_AVAILABLE(hdr) ((hdr)->b_flags & ARC_FLAG_BUF_AVAILABLE)
552 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE)
553 #define HDR_L2COMPRESS(hdr) ((hdr)->b_flags & ARC_FLAG_L2COMPRESS)
554 #define HDR_L2_READING(hdr) \
555 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
556 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
557 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING)
558 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
559 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
561 #define HDR_ISTYPE_METADATA(hdr) \
562 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
563 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr))
565 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
566 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
568 /* For storing compression mode in b_flags */
569 #define HDR_COMPRESS_OFFSET 24
570 #define HDR_COMPRESS_NBITS 7
572 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET(hdr->b_flags, \
573 HDR_COMPRESS_OFFSET, HDR_COMPRESS_NBITS))
574 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET(hdr->b_flags, \
575 HDR_COMPRESS_OFFSET, HDR_COMPRESS_NBITS, (cmp))
581 #define HDR_FULL_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
582 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
585 * Hash table routines
588 #define HT_LOCK_ALIGN 64
589 #define HT_LOCK_PAD (P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))
594 unsigned char pad
[HT_LOCK_PAD
];
598 #define BUF_LOCKS 8192
599 typedef struct buf_hash_table
{
601 arc_buf_hdr_t
**ht_table
;
602 struct ht_lock ht_locks
[BUF_LOCKS
];
605 static buf_hash_table_t buf_hash_table
;
607 #define BUF_HASH_INDEX(spa, dva, birth) \
608 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
609 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
610 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
611 #define HDR_LOCK(hdr) \
612 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
614 uint64_t zfs_crc64_table
[256];
620 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */
621 #define L2ARC_HEADROOM 2 /* num of writes */
623 * If we discover during ARC scan any buffers to be compressed, we boost
624 * our headroom for the next scanning cycle by this percentage multiple.
626 #define L2ARC_HEADROOM_BOOST 200
627 #define L2ARC_FEED_SECS 1 /* caching interval secs */
628 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */
631 * Used to distinguish headers that are being process by
632 * l2arc_write_buffers(), but have yet to be assigned to a l2arc disk
633 * address. This can happen when the header is added to the l2arc's list
634 * of buffers to write in the first stage of l2arc_write_buffers(), but
635 * has not yet been written out which happens in the second stage of
636 * l2arc_write_buffers().
638 #define L2ARC_ADDR_UNSET ((uint64_t)(-1))
640 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent)
641 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done)
643 /* L2ARC Performance Tunables */
644 unsigned long l2arc_write_max
= L2ARC_WRITE_SIZE
; /* def max write size */
645 unsigned long l2arc_write_boost
= L2ARC_WRITE_SIZE
; /* extra warmup write */
646 unsigned long l2arc_headroom
= L2ARC_HEADROOM
; /* # of dev writes */
647 unsigned long l2arc_headroom_boost
= L2ARC_HEADROOM_BOOST
;
648 unsigned long l2arc_feed_secs
= L2ARC_FEED_SECS
; /* interval seconds */
649 unsigned long l2arc_feed_min_ms
= L2ARC_FEED_MIN_MS
; /* min interval msecs */
650 int l2arc_noprefetch
= B_TRUE
; /* don't cache prefetch bufs */
651 int l2arc_nocompress
= B_FALSE
; /* don't compress bufs */
652 int l2arc_feed_again
= B_TRUE
; /* turbo warmup */
653 int l2arc_norw
= B_FALSE
; /* no reads during writes */
658 static list_t L2ARC_dev_list
; /* device list */
659 static list_t
*l2arc_dev_list
; /* device list pointer */
660 static kmutex_t l2arc_dev_mtx
; /* device list mutex */
661 static l2arc_dev_t
*l2arc_dev_last
; /* last device used */
662 static list_t L2ARC_free_on_write
; /* free after write buf list */
663 static list_t
*l2arc_free_on_write
; /* free after write list ptr */
664 static kmutex_t l2arc_free_on_write_mtx
; /* mutex for list */
665 static uint64_t l2arc_ndev
; /* number of devices */
667 typedef struct l2arc_read_callback
{
668 arc_buf_t
*l2rcb_buf
; /* read buffer */
669 spa_t
*l2rcb_spa
; /* spa */
670 blkptr_t l2rcb_bp
; /* original blkptr */
671 zbookmark_phys_t l2rcb_zb
; /* original bookmark */
672 int l2rcb_flags
; /* original flags */
673 enum zio_compress l2rcb_compress
; /* applied compress */
674 } l2arc_read_callback_t
;
676 typedef struct l2arc_data_free
{
677 /* protected by l2arc_free_on_write_mtx */
680 void (*l2df_func
)(void *, size_t);
681 list_node_t l2df_list_node
;
684 static kmutex_t l2arc_feed_thr_lock
;
685 static kcondvar_t l2arc_feed_thr_cv
;
686 static uint8_t l2arc_thread_exit
;
688 static void arc_get_data_buf(arc_buf_t
*);
689 static void arc_access(arc_buf_hdr_t
*, kmutex_t
*);
690 static boolean_t
arc_is_overflowing(void);
691 static void arc_buf_watch(arc_buf_t
*);
693 static arc_buf_contents_t
arc_buf_type(arc_buf_hdr_t
*);
694 static uint32_t arc_bufc_to_flags(arc_buf_contents_t
);
696 static boolean_t
l2arc_write_eligible(uint64_t, arc_buf_hdr_t
*);
697 static void l2arc_read_done(zio_t
*);
699 static boolean_t
l2arc_compress_buf(arc_buf_hdr_t
*);
700 static void l2arc_decompress_zio(zio_t
*, arc_buf_hdr_t
*, enum zio_compress
);
701 static void l2arc_release_cdata_buf(arc_buf_hdr_t
*);
704 buf_hash(uint64_t spa
, const dva_t
*dva
, uint64_t birth
)
706 uint8_t *vdva
= (uint8_t *)dva
;
707 uint64_t crc
= -1ULL;
710 ASSERT(zfs_crc64_table
[128] == ZFS_CRC64_POLY
);
712 for (i
= 0; i
< sizeof (dva_t
); i
++)
713 crc
= (crc
>> 8) ^ zfs_crc64_table
[(crc
^ vdva
[i
]) & 0xFF];
715 crc
^= (spa
>>8) ^ birth
;
720 #define BUF_EMPTY(buf) \
721 ((buf)->b_dva.dva_word[0] == 0 && \
722 (buf)->b_dva.dva_word[1] == 0)
724 #define BUF_EQUAL(spa, dva, birth, buf) \
725 ((buf)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
726 ((buf)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
727 ((buf)->b_birth == birth) && ((buf)->b_spa == spa)
730 buf_discard_identity(arc_buf_hdr_t
*hdr
)
732 hdr
->b_dva
.dva_word
[0] = 0;
733 hdr
->b_dva
.dva_word
[1] = 0;
737 static arc_buf_hdr_t
*
738 buf_hash_find(uint64_t spa
, const blkptr_t
*bp
, kmutex_t
**lockp
)
740 const dva_t
*dva
= BP_IDENTITY(bp
);
741 uint64_t birth
= BP_PHYSICAL_BIRTH(bp
);
742 uint64_t idx
= BUF_HASH_INDEX(spa
, dva
, birth
);
743 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
746 mutex_enter(hash_lock
);
747 for (hdr
= buf_hash_table
.ht_table
[idx
]; hdr
!= NULL
;
748 hdr
= hdr
->b_hash_next
) {
749 if (BUF_EQUAL(spa
, dva
, birth
, hdr
)) {
754 mutex_exit(hash_lock
);
760 * Insert an entry into the hash table. If there is already an element
761 * equal to elem in the hash table, then the already existing element
762 * will be returned and the new element will not be inserted.
763 * Otherwise returns NULL.
764 * If lockp == NULL, the caller is assumed to already hold the hash lock.
766 static arc_buf_hdr_t
*
767 buf_hash_insert(arc_buf_hdr_t
*hdr
, kmutex_t
**lockp
)
769 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
770 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
774 ASSERT(!DVA_IS_EMPTY(&hdr
->b_dva
));
775 ASSERT(hdr
->b_birth
!= 0);
776 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
780 mutex_enter(hash_lock
);
782 ASSERT(MUTEX_HELD(hash_lock
));
785 for (fhdr
= buf_hash_table
.ht_table
[idx
], i
= 0; fhdr
!= NULL
;
786 fhdr
= fhdr
->b_hash_next
, i
++) {
787 if (BUF_EQUAL(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
, fhdr
))
791 hdr
->b_hash_next
= buf_hash_table
.ht_table
[idx
];
792 buf_hash_table
.ht_table
[idx
] = hdr
;
793 hdr
->b_flags
|= ARC_FLAG_IN_HASH_TABLE
;
795 /* collect some hash table performance data */
797 ARCSTAT_BUMP(arcstat_hash_collisions
);
799 ARCSTAT_BUMP(arcstat_hash_chains
);
801 ARCSTAT_MAX(arcstat_hash_chain_max
, i
);
804 ARCSTAT_BUMP(arcstat_hash_elements
);
805 ARCSTAT_MAXSTAT(arcstat_hash_elements
);
811 buf_hash_remove(arc_buf_hdr_t
*hdr
)
813 arc_buf_hdr_t
*fhdr
, **hdrp
;
814 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
816 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx
)));
817 ASSERT(HDR_IN_HASH_TABLE(hdr
));
819 hdrp
= &buf_hash_table
.ht_table
[idx
];
820 while ((fhdr
= *hdrp
) != hdr
) {
821 ASSERT(fhdr
!= NULL
);
822 hdrp
= &fhdr
->b_hash_next
;
824 *hdrp
= hdr
->b_hash_next
;
825 hdr
->b_hash_next
= NULL
;
826 hdr
->b_flags
&= ~ARC_FLAG_IN_HASH_TABLE
;
828 /* collect some hash table performance data */
829 ARCSTAT_BUMPDOWN(arcstat_hash_elements
);
831 if (buf_hash_table
.ht_table
[idx
] &&
832 buf_hash_table
.ht_table
[idx
]->b_hash_next
== NULL
)
833 ARCSTAT_BUMPDOWN(arcstat_hash_chains
);
837 * Global data structures and functions for the buf kmem cache.
839 static kmem_cache_t
*hdr_full_cache
;
840 static kmem_cache_t
*hdr_l2only_cache
;
841 static kmem_cache_t
*buf_cache
;
848 #if defined(_KERNEL) && defined(HAVE_SPL)
850 * Large allocations which do not require contiguous pages
851 * should be using vmem_free() in the linux kernel\
853 vmem_free(buf_hash_table
.ht_table
,
854 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
856 kmem_free(buf_hash_table
.ht_table
,
857 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
859 for (i
= 0; i
< BUF_LOCKS
; i
++)
860 mutex_destroy(&buf_hash_table
.ht_locks
[i
].ht_lock
);
861 kmem_cache_destroy(hdr_full_cache
);
862 kmem_cache_destroy(hdr_l2only_cache
);
863 kmem_cache_destroy(buf_cache
);
867 * Constructor callback - called when the cache is empty
868 * and a new buf is requested.
872 hdr_full_cons(void *vbuf
, void *unused
, int kmflag
)
874 arc_buf_hdr_t
*hdr
= vbuf
;
876 bzero(hdr
, HDR_FULL_SIZE
);
877 cv_init(&hdr
->b_l1hdr
.b_cv
, NULL
, CV_DEFAULT
, NULL
);
878 refcount_create(&hdr
->b_l1hdr
.b_refcnt
);
879 mutex_init(&hdr
->b_l1hdr
.b_freeze_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
880 list_link_init(&hdr
->b_l1hdr
.b_arc_node
);
881 list_link_init(&hdr
->b_l2hdr
.b_l2node
);
882 multilist_link_init(&hdr
->b_l1hdr
.b_arc_node
);
883 arc_space_consume(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
890 hdr_l2only_cons(void *vbuf
, void *unused
, int kmflag
)
892 arc_buf_hdr_t
*hdr
= vbuf
;
894 bzero(hdr
, HDR_L2ONLY_SIZE
);
895 arc_space_consume(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
902 buf_cons(void *vbuf
, void *unused
, int kmflag
)
904 arc_buf_t
*buf
= vbuf
;
906 bzero(buf
, sizeof (arc_buf_t
));
907 mutex_init(&buf
->b_evict_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
908 arc_space_consume(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
914 * Destructor callback - called when a cached buf is
915 * no longer required.
919 hdr_full_dest(void *vbuf
, void *unused
)
921 arc_buf_hdr_t
*hdr
= vbuf
;
923 ASSERT(BUF_EMPTY(hdr
));
924 cv_destroy(&hdr
->b_l1hdr
.b_cv
);
925 refcount_destroy(&hdr
->b_l1hdr
.b_refcnt
);
926 mutex_destroy(&hdr
->b_l1hdr
.b_freeze_lock
);
927 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
928 arc_space_return(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
933 hdr_l2only_dest(void *vbuf
, void *unused
)
935 ASSERTV(arc_buf_hdr_t
*hdr
= vbuf
);
937 ASSERT(BUF_EMPTY(hdr
));
938 arc_space_return(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
943 buf_dest(void *vbuf
, void *unused
)
945 arc_buf_t
*buf
= vbuf
;
947 mutex_destroy(&buf
->b_evict_lock
);
948 arc_space_return(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
955 uint64_t hsize
= 1ULL << 12;
959 * The hash table is big enough to fill all of physical memory
960 * with an average block size of zfs_arc_average_blocksize (default 8K).
961 * By default, the table will take up
962 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
964 while (hsize
* zfs_arc_average_blocksize
< physmem
* PAGESIZE
)
967 buf_hash_table
.ht_mask
= hsize
- 1;
968 #if defined(_KERNEL) && defined(HAVE_SPL)
970 * Large allocations which do not require contiguous pages
971 * should be using vmem_alloc() in the linux kernel
973 buf_hash_table
.ht_table
=
974 vmem_zalloc(hsize
* sizeof (void*), KM_SLEEP
);
976 buf_hash_table
.ht_table
=
977 kmem_zalloc(hsize
* sizeof (void*), KM_NOSLEEP
);
979 if (buf_hash_table
.ht_table
== NULL
) {
980 ASSERT(hsize
> (1ULL << 8));
985 hdr_full_cache
= kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE
,
986 0, hdr_full_cons
, hdr_full_dest
, NULL
, NULL
, NULL
, 0);
987 hdr_l2only_cache
= kmem_cache_create("arc_buf_hdr_t_l2only",
988 HDR_L2ONLY_SIZE
, 0, hdr_l2only_cons
, hdr_l2only_dest
, NULL
,
990 buf_cache
= kmem_cache_create("arc_buf_t", sizeof (arc_buf_t
),
991 0, buf_cons
, buf_dest
, NULL
, NULL
, NULL
, 0);
993 for (i
= 0; i
< 256; i
++)
994 for (ct
= zfs_crc64_table
+ i
, *ct
= i
, j
= 8; j
> 0; j
--)
995 *ct
= (*ct
>> 1) ^ (-(*ct
& 1) & ZFS_CRC64_POLY
);
997 for (i
= 0; i
< BUF_LOCKS
; i
++) {
998 mutex_init(&buf_hash_table
.ht_locks
[i
].ht_lock
,
999 NULL
, MUTEX_DEFAULT
, NULL
);
1004 * Transition between the two allocation states for the arc_buf_hdr struct.
1005 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
1006 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
1007 * version is used when a cache buffer is only in the L2ARC in order to reduce
1010 static arc_buf_hdr_t
*
1011 arc_hdr_realloc(arc_buf_hdr_t
*hdr
, kmem_cache_t
*old
, kmem_cache_t
*new)
1013 arc_buf_hdr_t
*nhdr
;
1016 ASSERT(HDR_HAS_L2HDR(hdr
));
1017 ASSERT((old
== hdr_full_cache
&& new == hdr_l2only_cache
) ||
1018 (old
== hdr_l2only_cache
&& new == hdr_full_cache
));
1020 dev
= hdr
->b_l2hdr
.b_dev
;
1021 nhdr
= kmem_cache_alloc(new, KM_PUSHPAGE
);
1023 ASSERT(MUTEX_HELD(HDR_LOCK(hdr
)));
1024 buf_hash_remove(hdr
);
1026 bcopy(hdr
, nhdr
, HDR_L2ONLY_SIZE
);
1028 if (new == hdr_full_cache
) {
1029 nhdr
->b_flags
|= ARC_FLAG_HAS_L1HDR
;
1031 * arc_access and arc_change_state need to be aware that a
1032 * header has just come out of L2ARC, so we set its state to
1033 * l2c_only even though it's about to change.
1035 nhdr
->b_l1hdr
.b_state
= arc_l2c_only
;
1037 /* Verify previous threads set to NULL before freeing */
1038 ASSERT3P(nhdr
->b_l1hdr
.b_tmp_cdata
, ==, NULL
);
1040 ASSERT(hdr
->b_l1hdr
.b_buf
== NULL
);
1041 ASSERT0(hdr
->b_l1hdr
.b_datacnt
);
1044 * If we've reached here, We must have been called from
1045 * arc_evict_hdr(), as such we should have already been
1046 * removed from any ghost list we were previously on
1047 * (which protects us from racing with arc_evict_state),
1048 * thus no locking is needed during this check.
1050 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
1053 * A buffer must not be moved into the arc_l2c_only
1054 * state if it's not finished being written out to the
1055 * l2arc device. Otherwise, the b_l1hdr.b_tmp_cdata field
1056 * might try to be accessed, even though it was removed.
1058 VERIFY(!HDR_L2_WRITING(hdr
));
1059 VERIFY3P(hdr
->b_l1hdr
.b_tmp_cdata
, ==, NULL
);
1061 nhdr
->b_flags
&= ~ARC_FLAG_HAS_L1HDR
;
1064 * The header has been reallocated so we need to re-insert it into any
1067 (void) buf_hash_insert(nhdr
, NULL
);
1069 ASSERT(list_link_active(&hdr
->b_l2hdr
.b_l2node
));
1071 mutex_enter(&dev
->l2ad_mtx
);
1074 * We must place the realloc'ed header back into the list at
1075 * the same spot. Otherwise, if it's placed earlier in the list,
1076 * l2arc_write_buffers() could find it during the function's
1077 * write phase, and try to write it out to the l2arc.
1079 list_insert_after(&dev
->l2ad_buflist
, hdr
, nhdr
);
1080 list_remove(&dev
->l2ad_buflist
, hdr
);
1082 mutex_exit(&dev
->l2ad_mtx
);
1085 * Since we're using the pointer address as the tag when
1086 * incrementing and decrementing the l2ad_alloc refcount, we
1087 * must remove the old pointer (that we're about to destroy) and
1088 * add the new pointer to the refcount. Otherwise we'd remove
1089 * the wrong pointer address when calling arc_hdr_destroy() later.
1092 (void) refcount_remove_many(&dev
->l2ad_alloc
,
1093 hdr
->b_l2hdr
.b_asize
, hdr
);
1095 (void) refcount_add_many(&dev
->l2ad_alloc
,
1096 nhdr
->b_l2hdr
.b_asize
, nhdr
);
1098 buf_discard_identity(hdr
);
1099 hdr
->b_freeze_cksum
= NULL
;
1100 kmem_cache_free(old
, hdr
);
1106 #define ARC_MINTIME (hz>>4) /* 62 ms */
1109 arc_cksum_verify(arc_buf_t
*buf
)
1113 if (!(zfs_flags
& ZFS_DEBUG_MODIFY
))
1116 mutex_enter(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1117 if (buf
->b_hdr
->b_freeze_cksum
== NULL
|| HDR_IO_ERROR(buf
->b_hdr
)) {
1118 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1121 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
, &zc
);
1122 if (!ZIO_CHECKSUM_EQUAL(*buf
->b_hdr
->b_freeze_cksum
, zc
))
1123 panic("buffer modified while frozen!");
1124 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1128 arc_cksum_equal(arc_buf_t
*buf
)
1133 mutex_enter(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1134 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
, &zc
);
1135 equal
= ZIO_CHECKSUM_EQUAL(*buf
->b_hdr
->b_freeze_cksum
, zc
);
1136 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1142 arc_cksum_compute(arc_buf_t
*buf
, boolean_t force
)
1144 if (!force
&& !(zfs_flags
& ZFS_DEBUG_MODIFY
))
1147 mutex_enter(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1148 if (buf
->b_hdr
->b_freeze_cksum
!= NULL
) {
1149 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1152 buf
->b_hdr
->b_freeze_cksum
= kmem_alloc(sizeof (zio_cksum_t
),
1154 fletcher_2_native(buf
->b_data
, buf
->b_hdr
->b_size
,
1155 buf
->b_hdr
->b_freeze_cksum
);
1156 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1162 arc_buf_sigsegv(int sig
, siginfo_t
*si
, void *unused
)
1164 panic("Got SIGSEGV at address: 0x%lx\n", (long) si
->si_addr
);
1170 arc_buf_unwatch(arc_buf_t
*buf
)
1174 ASSERT0(mprotect(buf
->b_data
, buf
->b_hdr
->b_size
,
1175 PROT_READ
| PROT_WRITE
));
1182 arc_buf_watch(arc_buf_t
*buf
)
1186 ASSERT0(mprotect(buf
->b_data
, buf
->b_hdr
->b_size
, PROT_READ
));
1190 static arc_buf_contents_t
1191 arc_buf_type(arc_buf_hdr_t
*hdr
)
1193 if (HDR_ISTYPE_METADATA(hdr
)) {
1194 return (ARC_BUFC_METADATA
);
1196 return (ARC_BUFC_DATA
);
1201 arc_bufc_to_flags(arc_buf_contents_t type
)
1205 /* metadata field is 0 if buffer contains normal data */
1207 case ARC_BUFC_METADATA
:
1208 return (ARC_FLAG_BUFC_METADATA
);
1212 panic("undefined ARC buffer type!");
1213 return ((uint32_t)-1);
1217 arc_buf_thaw(arc_buf_t
*buf
)
1219 if (zfs_flags
& ZFS_DEBUG_MODIFY
) {
1220 if (buf
->b_hdr
->b_l1hdr
.b_state
!= arc_anon
)
1221 panic("modifying non-anon buffer!");
1222 if (HDR_IO_IN_PROGRESS(buf
->b_hdr
))
1223 panic("modifying buffer while i/o in progress!");
1224 arc_cksum_verify(buf
);
1227 mutex_enter(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1228 if (buf
->b_hdr
->b_freeze_cksum
!= NULL
) {
1229 kmem_free(buf
->b_hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
1230 buf
->b_hdr
->b_freeze_cksum
= NULL
;
1233 mutex_exit(&buf
->b_hdr
->b_l1hdr
.b_freeze_lock
);
1235 arc_buf_unwatch(buf
);
1239 arc_buf_freeze(arc_buf_t
*buf
)
1241 kmutex_t
*hash_lock
;
1243 if (!(zfs_flags
& ZFS_DEBUG_MODIFY
))
1246 hash_lock
= HDR_LOCK(buf
->b_hdr
);
1247 mutex_enter(hash_lock
);
1249 ASSERT(buf
->b_hdr
->b_freeze_cksum
!= NULL
||
1250 buf
->b_hdr
->b_l1hdr
.b_state
== arc_anon
);
1251 arc_cksum_compute(buf
, B_FALSE
);
1252 mutex_exit(hash_lock
);
1257 add_reference(arc_buf_hdr_t
*hdr
, kmutex_t
*hash_lock
, void *tag
)
1261 ASSERT(HDR_HAS_L1HDR(hdr
));
1262 ASSERT(MUTEX_HELD(hash_lock
));
1264 state
= hdr
->b_l1hdr
.b_state
;
1266 if ((refcount_add(&hdr
->b_l1hdr
.b_refcnt
, tag
) == 1) &&
1267 (state
!= arc_anon
)) {
1268 /* We don't use the L2-only state list. */
1269 if (state
!= arc_l2c_only
) {
1270 arc_buf_contents_t type
= arc_buf_type(hdr
);
1271 uint64_t delta
= hdr
->b_size
* hdr
->b_l1hdr
.b_datacnt
;
1272 multilist_t
*list
= &state
->arcs_list
[type
];
1273 uint64_t *size
= &state
->arcs_lsize
[type
];
1275 multilist_remove(list
, hdr
);
1277 if (GHOST_STATE(state
)) {
1278 ASSERT0(hdr
->b_l1hdr
.b_datacnt
);
1279 ASSERT3P(hdr
->b_l1hdr
.b_buf
, ==, NULL
);
1280 delta
= hdr
->b_size
;
1283 ASSERT3U(*size
, >=, delta
);
1284 atomic_add_64(size
, -delta
);
1286 /* remove the prefetch flag if we get a reference */
1287 hdr
->b_flags
&= ~ARC_FLAG_PREFETCH
;
1292 remove_reference(arc_buf_hdr_t
*hdr
, kmutex_t
*hash_lock
, void *tag
)
1295 arc_state_t
*state
= hdr
->b_l1hdr
.b_state
;
1297 ASSERT(HDR_HAS_L1HDR(hdr
));
1298 ASSERT(state
== arc_anon
|| MUTEX_HELD(hash_lock
));
1299 ASSERT(!GHOST_STATE(state
));
1302 * arc_l2c_only counts as a ghost state so we don't need to explicitly
1303 * check to prevent usage of the arc_l2c_only list.
1305 if (((cnt
= refcount_remove(&hdr
->b_l1hdr
.b_refcnt
, tag
)) == 0) &&
1306 (state
!= arc_anon
)) {
1307 arc_buf_contents_t type
= arc_buf_type(hdr
);
1308 multilist_t
*list
= &state
->arcs_list
[type
];
1309 uint64_t *size
= &state
->arcs_lsize
[type
];
1311 multilist_insert(list
, hdr
);
1313 ASSERT(hdr
->b_l1hdr
.b_datacnt
> 0);
1314 atomic_add_64(size
, hdr
->b_size
*
1315 hdr
->b_l1hdr
.b_datacnt
);
1321 * Returns detailed information about a specific arc buffer. When the
1322 * state_index argument is set the function will calculate the arc header
1323 * list position for its arc state. Since this requires a linear traversal
1324 * callers are strongly encourage not to do this. However, it can be helpful
1325 * for targeted analysis so the functionality is provided.
1328 arc_buf_info(arc_buf_t
*ab
, arc_buf_info_t
*abi
, int state_index
)
1330 arc_buf_hdr_t
*hdr
= ab
->b_hdr
;
1331 l1arc_buf_hdr_t
*l1hdr
= NULL
;
1332 l2arc_buf_hdr_t
*l2hdr
= NULL
;
1333 arc_state_t
*state
= NULL
;
1335 if (HDR_HAS_L1HDR(hdr
)) {
1336 l1hdr
= &hdr
->b_l1hdr
;
1337 state
= l1hdr
->b_state
;
1339 if (HDR_HAS_L2HDR(hdr
))
1340 l2hdr
= &hdr
->b_l2hdr
;
1342 memset(abi
, 0, sizeof (arc_buf_info_t
));
1343 abi
->abi_flags
= hdr
->b_flags
;
1346 abi
->abi_datacnt
= l1hdr
->b_datacnt
;
1347 abi
->abi_access
= l1hdr
->b_arc_access
;
1348 abi
->abi_mru_hits
= l1hdr
->b_mru_hits
;
1349 abi
->abi_mru_ghost_hits
= l1hdr
->b_mru_ghost_hits
;
1350 abi
->abi_mfu_hits
= l1hdr
->b_mfu_hits
;
1351 abi
->abi_mfu_ghost_hits
= l1hdr
->b_mfu_ghost_hits
;
1352 abi
->abi_holds
= refcount_count(&l1hdr
->b_refcnt
);
1356 abi
->abi_l2arc_dattr
= l2hdr
->b_daddr
;
1357 abi
->abi_l2arc_asize
= l2hdr
->b_asize
;
1358 abi
->abi_l2arc_compress
= HDR_GET_COMPRESS(hdr
);
1359 abi
->abi_l2arc_hits
= l2hdr
->b_hits
;
1362 abi
->abi_state_type
= state
? state
->arcs_state
: ARC_STATE_ANON
;
1363 abi
->abi_state_contents
= arc_buf_type(hdr
);
1364 abi
->abi_size
= hdr
->b_size
;
1368 * Move the supplied buffer to the indicated state. The hash lock
1369 * for the buffer must be held by the caller.
1372 arc_change_state(arc_state_t
*new_state
, arc_buf_hdr_t
*hdr
,
1373 kmutex_t
*hash_lock
)
1375 arc_state_t
*old_state
;
1378 uint64_t from_delta
, to_delta
;
1379 arc_buf_contents_t buftype
= arc_buf_type(hdr
);
1382 * We almost always have an L1 hdr here, since we call arc_hdr_realloc()
1383 * in arc_read() when bringing a buffer out of the L2ARC. However, the
1384 * L1 hdr doesn't always exist when we change state to arc_anon before
1385 * destroying a header, in which case reallocating to add the L1 hdr is
1388 if (HDR_HAS_L1HDR(hdr
)) {
1389 old_state
= hdr
->b_l1hdr
.b_state
;
1390 refcnt
= refcount_count(&hdr
->b_l1hdr
.b_refcnt
);
1391 datacnt
= hdr
->b_l1hdr
.b_datacnt
;
1393 old_state
= arc_l2c_only
;
1398 ASSERT(MUTEX_HELD(hash_lock
));
1399 ASSERT3P(new_state
, !=, old_state
);
1400 ASSERT(refcnt
== 0 || datacnt
> 0);
1401 ASSERT(!GHOST_STATE(new_state
) || datacnt
== 0);
1402 ASSERT(old_state
!= arc_anon
|| datacnt
<= 1);
1404 from_delta
= to_delta
= datacnt
* hdr
->b_size
;
1407 * If this buffer is evictable, transfer it from the
1408 * old state list to the new state list.
1411 if (old_state
!= arc_anon
&& old_state
!= arc_l2c_only
) {
1412 uint64_t *size
= &old_state
->arcs_lsize
[buftype
];
1414 ASSERT(HDR_HAS_L1HDR(hdr
));
1415 multilist_remove(&old_state
->arcs_list
[buftype
], hdr
);
1418 * If prefetching out of the ghost cache,
1419 * we will have a non-zero datacnt.
1421 if (GHOST_STATE(old_state
) && datacnt
== 0) {
1422 /* ghost elements have a ghost size */
1423 ASSERT(hdr
->b_l1hdr
.b_buf
== NULL
);
1424 from_delta
= hdr
->b_size
;
1426 ASSERT3U(*size
, >=, from_delta
);
1427 atomic_add_64(size
, -from_delta
);
1429 if (new_state
!= arc_anon
&& new_state
!= arc_l2c_only
) {
1430 uint64_t *size
= &new_state
->arcs_lsize
[buftype
];
1433 * An L1 header always exists here, since if we're
1434 * moving to some L1-cached state (i.e. not l2c_only or
1435 * anonymous), we realloc the header to add an L1hdr
1438 ASSERT(HDR_HAS_L1HDR(hdr
));
1439 multilist_insert(&new_state
->arcs_list
[buftype
], hdr
);
1441 /* ghost elements have a ghost size */
1442 if (GHOST_STATE(new_state
)) {
1444 ASSERT(hdr
->b_l1hdr
.b_buf
== NULL
);
1445 to_delta
= hdr
->b_size
;
1447 atomic_add_64(size
, to_delta
);
1451 ASSERT(!BUF_EMPTY(hdr
));
1452 if (new_state
== arc_anon
&& HDR_IN_HASH_TABLE(hdr
))
1453 buf_hash_remove(hdr
);
1455 /* adjust state sizes (ignore arc_l2c_only) */
1456 if (to_delta
&& new_state
!= arc_l2c_only
)
1457 atomic_add_64(&new_state
->arcs_size
, to_delta
);
1458 if (from_delta
&& old_state
!= arc_l2c_only
) {
1459 ASSERT3U(old_state
->arcs_size
, >=, from_delta
);
1460 atomic_add_64(&old_state
->arcs_size
, -from_delta
);
1462 if (HDR_HAS_L1HDR(hdr
))
1463 hdr
->b_l1hdr
.b_state
= new_state
;
1466 * L2 headers should never be on the L2 state list since they don't
1467 * have L1 headers allocated.
1469 ASSERT(multilist_is_empty(&arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
]) &&
1470 multilist_is_empty(&arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
]));
1474 arc_space_consume(uint64_t space
, arc_space_type_t type
)
1476 ASSERT(type
>= 0 && type
< ARC_SPACE_NUMTYPES
);
1481 case ARC_SPACE_DATA
:
1482 ARCSTAT_INCR(arcstat_data_size
, space
);
1484 case ARC_SPACE_META
:
1485 ARCSTAT_INCR(arcstat_meta_size
, space
);
1487 case ARC_SPACE_OTHER
:
1488 ARCSTAT_INCR(arcstat_other_size
, space
);
1490 case ARC_SPACE_HDRS
:
1491 ARCSTAT_INCR(arcstat_hdr_size
, space
);
1493 case ARC_SPACE_L2HDRS
:
1494 ARCSTAT_INCR(arcstat_l2_hdr_size
, space
);
1498 if (type
!= ARC_SPACE_DATA
) {
1499 ARCSTAT_INCR(arcstat_meta_used
, space
);
1500 if (arc_meta_max
< arc_meta_used
)
1501 arc_meta_max
= arc_meta_used
;
1504 atomic_add_64(&arc_size
, space
);
1508 arc_space_return(uint64_t space
, arc_space_type_t type
)
1510 ASSERT(type
>= 0 && type
< ARC_SPACE_NUMTYPES
);
1515 case ARC_SPACE_DATA
:
1516 ARCSTAT_INCR(arcstat_data_size
, -space
);
1518 case ARC_SPACE_META
:
1519 ARCSTAT_INCR(arcstat_meta_size
, -space
);
1521 case ARC_SPACE_OTHER
:
1522 ARCSTAT_INCR(arcstat_other_size
, -space
);
1524 case ARC_SPACE_HDRS
:
1525 ARCSTAT_INCR(arcstat_hdr_size
, -space
);
1527 case ARC_SPACE_L2HDRS
:
1528 ARCSTAT_INCR(arcstat_l2_hdr_size
, -space
);
1532 if (type
!= ARC_SPACE_DATA
) {
1533 ASSERT(arc_meta_used
>= space
);
1534 ARCSTAT_INCR(arcstat_meta_used
, -space
);
1537 ASSERT(arc_size
>= space
);
1538 atomic_add_64(&arc_size
, -space
);
1542 arc_buf_alloc(spa_t
*spa
, uint64_t size
, void *tag
, arc_buf_contents_t type
)
1547 VERIFY3U(size
, <=, spa_maxblocksize(spa
));
1548 hdr
= kmem_cache_alloc(hdr_full_cache
, KM_PUSHPAGE
);
1549 ASSERT(BUF_EMPTY(hdr
));
1550 ASSERT3P(hdr
->b_freeze_cksum
, ==, NULL
);
1552 hdr
->b_spa
= spa_load_guid(spa
);
1553 hdr
->b_l1hdr
.b_mru_hits
= 0;
1554 hdr
->b_l1hdr
.b_mru_ghost_hits
= 0;
1555 hdr
->b_l1hdr
.b_mfu_hits
= 0;
1556 hdr
->b_l1hdr
.b_mfu_ghost_hits
= 0;
1557 hdr
->b_l1hdr
.b_l2_hits
= 0;
1559 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
1562 buf
->b_efunc
= NULL
;
1563 buf
->b_private
= NULL
;
1566 hdr
->b_flags
= arc_bufc_to_flags(type
);
1567 hdr
->b_flags
|= ARC_FLAG_HAS_L1HDR
;
1569 hdr
->b_l1hdr
.b_buf
= buf
;
1570 hdr
->b_l1hdr
.b_state
= arc_anon
;
1571 hdr
->b_l1hdr
.b_arc_access
= 0;
1572 hdr
->b_l1hdr
.b_datacnt
= 1;
1573 hdr
->b_l1hdr
.b_tmp_cdata
= NULL
;
1575 arc_get_data_buf(buf
);
1577 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
1578 (void) refcount_add(&hdr
->b_l1hdr
.b_refcnt
, tag
);
1583 static char *arc_onloan_tag
= "onloan";
1586 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
1587 * flight data by arc_tempreserve_space() until they are "returned". Loaned
1588 * buffers must be returned to the arc before they can be used by the DMU or
1592 arc_loan_buf(spa_t
*spa
, uint64_t size
)
1596 buf
= arc_buf_alloc(spa
, size
, arc_onloan_tag
, ARC_BUFC_DATA
);
1598 atomic_add_64(&arc_loaned_bytes
, size
);
1603 * Return a loaned arc buffer to the arc.
1606 arc_return_buf(arc_buf_t
*buf
, void *tag
)
1608 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1610 ASSERT(buf
->b_data
!= NULL
);
1611 ASSERT(HDR_HAS_L1HDR(hdr
));
1612 (void) refcount_add(&hdr
->b_l1hdr
.b_refcnt
, tag
);
1613 (void) refcount_remove(&hdr
->b_l1hdr
.b_refcnt
, arc_onloan_tag
);
1615 atomic_add_64(&arc_loaned_bytes
, -hdr
->b_size
);
1618 /* Detach an arc_buf from a dbuf (tag) */
1620 arc_loan_inuse_buf(arc_buf_t
*buf
, void *tag
)
1622 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1624 ASSERT(buf
->b_data
!= NULL
);
1625 ASSERT(HDR_HAS_L1HDR(hdr
));
1626 (void) refcount_add(&hdr
->b_l1hdr
.b_refcnt
, arc_onloan_tag
);
1627 (void) refcount_remove(&hdr
->b_l1hdr
.b_refcnt
, tag
);
1628 buf
->b_efunc
= NULL
;
1629 buf
->b_private
= NULL
;
1631 atomic_add_64(&arc_loaned_bytes
, hdr
->b_size
);
1635 arc_buf_clone(arc_buf_t
*from
)
1638 arc_buf_hdr_t
*hdr
= from
->b_hdr
;
1639 uint64_t size
= hdr
->b_size
;
1641 ASSERT(HDR_HAS_L1HDR(hdr
));
1642 ASSERT(hdr
->b_l1hdr
.b_state
!= arc_anon
);
1644 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
1647 buf
->b_efunc
= NULL
;
1648 buf
->b_private
= NULL
;
1649 buf
->b_next
= hdr
->b_l1hdr
.b_buf
;
1650 hdr
->b_l1hdr
.b_buf
= buf
;
1651 arc_get_data_buf(buf
);
1652 bcopy(from
->b_data
, buf
->b_data
, size
);
1655 * This buffer already exists in the arc so create a duplicate
1656 * copy for the caller. If the buffer is associated with user data
1657 * then track the size and number of duplicates. These stats will be
1658 * updated as duplicate buffers are created and destroyed.
1660 if (HDR_ISTYPE_DATA(hdr
)) {
1661 ARCSTAT_BUMP(arcstat_duplicate_buffers
);
1662 ARCSTAT_INCR(arcstat_duplicate_buffers_size
, size
);
1664 hdr
->b_l1hdr
.b_datacnt
+= 1;
1669 arc_buf_add_ref(arc_buf_t
*buf
, void* tag
)
1672 kmutex_t
*hash_lock
;
1675 * Check to see if this buffer is evicted. Callers
1676 * must verify b_data != NULL to know if the add_ref
1679 mutex_enter(&buf
->b_evict_lock
);
1680 if (buf
->b_data
== NULL
) {
1681 mutex_exit(&buf
->b_evict_lock
);
1684 hash_lock
= HDR_LOCK(buf
->b_hdr
);
1685 mutex_enter(hash_lock
);
1687 ASSERT(HDR_HAS_L1HDR(hdr
));
1688 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
1689 mutex_exit(&buf
->b_evict_lock
);
1691 ASSERT(hdr
->b_l1hdr
.b_state
== arc_mru
||
1692 hdr
->b_l1hdr
.b_state
== arc_mfu
);
1694 add_reference(hdr
, hash_lock
, tag
);
1695 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
1696 arc_access(hdr
, hash_lock
);
1697 mutex_exit(hash_lock
);
1698 ARCSTAT_BUMP(arcstat_hits
);
1699 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
1700 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
1701 data
, metadata
, hits
);
1705 arc_buf_free_on_write(void *data
, size_t size
,
1706 void (*free_func
)(void *, size_t))
1708 l2arc_data_free_t
*df
;
1710 df
= kmem_alloc(sizeof (*df
), KM_SLEEP
);
1711 df
->l2df_data
= data
;
1712 df
->l2df_size
= size
;
1713 df
->l2df_func
= free_func
;
1714 mutex_enter(&l2arc_free_on_write_mtx
);
1715 list_insert_head(l2arc_free_on_write
, df
);
1716 mutex_exit(&l2arc_free_on_write_mtx
);
1720 * Free the arc data buffer. If it is an l2arc write in progress,
1721 * the buffer is placed on l2arc_free_on_write to be freed later.
1724 arc_buf_data_free(arc_buf_t
*buf
, void (*free_func
)(void *, size_t))
1726 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1728 if (HDR_L2_WRITING(hdr
)) {
1729 arc_buf_free_on_write(buf
->b_data
, hdr
->b_size
, free_func
);
1730 ARCSTAT_BUMP(arcstat_l2_free_on_write
);
1732 free_func(buf
->b_data
, hdr
->b_size
);
1737 arc_buf_l2_cdata_free(arc_buf_hdr_t
*hdr
)
1739 ASSERT(HDR_HAS_L2HDR(hdr
));
1740 ASSERT(MUTEX_HELD(&hdr
->b_l2hdr
.b_dev
->l2ad_mtx
));
1743 * The b_tmp_cdata field is linked off of the b_l1hdr, so if
1744 * that doesn't exist, the header is in the arc_l2c_only state,
1745 * and there isn't anything to free (it's already been freed).
1747 if (!HDR_HAS_L1HDR(hdr
))
1751 * The header isn't being written to the l2arc device, thus it
1752 * shouldn't have a b_tmp_cdata to free.
1754 if (!HDR_L2_WRITING(hdr
)) {
1755 ASSERT3P(hdr
->b_l1hdr
.b_tmp_cdata
, ==, NULL
);
1760 * The header does not have compression enabled. This can be due
1761 * to the buffer not being compressible, or because we're
1762 * freeing the buffer before the second phase of
1763 * l2arc_write_buffer() has started (which does the compression
1764 * step). In either case, b_tmp_cdata does not point to a
1765 * separately compressed buffer, so there's nothing to free (it
1766 * points to the same buffer as the arc_buf_t's b_data field).
1768 if (HDR_GET_COMPRESS(hdr
) == ZIO_COMPRESS_OFF
) {
1769 hdr
->b_l1hdr
.b_tmp_cdata
= NULL
;
1774 * There's nothing to free since the buffer was all zero's and
1775 * compressed to a zero length buffer.
1777 if (HDR_GET_COMPRESS(hdr
) == ZIO_COMPRESS_EMPTY
) {
1778 ASSERT3P(hdr
->b_l1hdr
.b_tmp_cdata
, ==, NULL
);
1782 ASSERT(L2ARC_IS_VALID_COMPRESS(HDR_GET_COMPRESS(hdr
)));
1784 arc_buf_free_on_write(hdr
->b_l1hdr
.b_tmp_cdata
,
1785 hdr
->b_size
, zio_data_buf_free
);
1787 ARCSTAT_BUMP(arcstat_l2_cdata_free_on_write
);
1788 hdr
->b_l1hdr
.b_tmp_cdata
= NULL
;
1792 * Free up buf->b_data and if 'remove' is set, then pull the
1793 * arc_buf_t off of the the arc_buf_hdr_t's list and free it.
1796 arc_buf_destroy(arc_buf_t
*buf
, boolean_t remove
)
1800 /* free up data associated with the buf */
1801 if (buf
->b_data
!= NULL
) {
1802 arc_state_t
*state
= buf
->b_hdr
->b_l1hdr
.b_state
;
1803 uint64_t size
= buf
->b_hdr
->b_size
;
1804 arc_buf_contents_t type
= arc_buf_type(buf
->b_hdr
);
1806 arc_cksum_verify(buf
);
1807 arc_buf_unwatch(buf
);
1809 if (type
== ARC_BUFC_METADATA
) {
1810 arc_buf_data_free(buf
, zio_buf_free
);
1811 arc_space_return(size
, ARC_SPACE_META
);
1813 ASSERT(type
== ARC_BUFC_DATA
);
1814 arc_buf_data_free(buf
, zio_data_buf_free
);
1815 arc_space_return(size
, ARC_SPACE_DATA
);
1818 /* protected by hash lock, if in the hash table */
1819 if (multilist_link_active(&buf
->b_hdr
->b_l1hdr
.b_arc_node
)) {
1820 uint64_t *cnt
= &state
->arcs_lsize
[type
];
1822 ASSERT(refcount_is_zero(
1823 &buf
->b_hdr
->b_l1hdr
.b_refcnt
));
1824 ASSERT(state
!= arc_anon
&& state
!= arc_l2c_only
);
1826 ASSERT3U(*cnt
, >=, size
);
1827 atomic_add_64(cnt
, -size
);
1829 ASSERT3U(state
->arcs_size
, >=, size
);
1830 atomic_add_64(&state
->arcs_size
, -size
);
1834 * If we're destroying a duplicate buffer make sure
1835 * that the appropriate statistics are updated.
1837 if (buf
->b_hdr
->b_l1hdr
.b_datacnt
> 1 &&
1838 HDR_ISTYPE_DATA(buf
->b_hdr
)) {
1839 ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers
);
1840 ARCSTAT_INCR(arcstat_duplicate_buffers_size
, -size
);
1842 ASSERT(buf
->b_hdr
->b_l1hdr
.b_datacnt
> 0);
1843 buf
->b_hdr
->b_l1hdr
.b_datacnt
-= 1;
1846 /* only remove the buf if requested */
1850 /* remove the buf from the hdr list */
1851 for (bufp
= &buf
->b_hdr
->b_l1hdr
.b_buf
; *bufp
!= buf
;
1852 bufp
= &(*bufp
)->b_next
)
1854 *bufp
= buf
->b_next
;
1857 ASSERT(buf
->b_efunc
== NULL
);
1859 /* clean up the buf */
1861 kmem_cache_free(buf_cache
, buf
);
1865 arc_hdr_l2hdr_destroy(arc_buf_hdr_t
*hdr
)
1867 l2arc_buf_hdr_t
*l2hdr
= &hdr
->b_l2hdr
;
1868 l2arc_dev_t
*dev
= l2hdr
->b_dev
;
1870 ASSERT(MUTEX_HELD(&dev
->l2ad_mtx
));
1871 ASSERT(HDR_HAS_L2HDR(hdr
));
1873 list_remove(&dev
->l2ad_buflist
, hdr
);
1875 arc_space_return(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
1878 * We don't want to leak the b_tmp_cdata buffer that was
1879 * allocated in l2arc_write_buffers()
1881 arc_buf_l2_cdata_free(hdr
);
1884 * If the l2hdr's b_daddr is equal to L2ARC_ADDR_UNSET, then
1885 * this header is being processed by l2arc_write_buffers() (i.e.
1886 * it's in the first stage of l2arc_write_buffers()).
1887 * Re-affirming that truth here, just to serve as a reminder. If
1888 * b_daddr does not equal L2ARC_ADDR_UNSET, then the header may or
1889 * may not have its HDR_L2_WRITING flag set. (the write may have
1890 * completed, in which case HDR_L2_WRITING will be false and the
1891 * b_daddr field will point to the address of the buffer on disk).
1893 IMPLY(l2hdr
->b_daddr
== L2ARC_ADDR_UNSET
, HDR_L2_WRITING(hdr
));
1896 * If b_daddr is equal to L2ARC_ADDR_UNSET, we're racing with
1897 * l2arc_write_buffers(). Since we've just removed this header
1898 * from the l2arc buffer list, this header will never reach the
1899 * second stage of l2arc_write_buffers(), which increments the
1900 * accounting stats for this header. Thus, we must be careful
1901 * not to decrement them for this header either.
1903 if (l2hdr
->b_daddr
!= L2ARC_ADDR_UNSET
) {
1904 ARCSTAT_INCR(arcstat_l2_asize
, -l2hdr
->b_asize
);
1905 ARCSTAT_INCR(arcstat_l2_size
, -hdr
->b_size
);
1907 vdev_space_update(dev
->l2ad_vdev
,
1908 -l2hdr
->b_asize
, 0, 0);
1910 (void) refcount_remove_many(&dev
->l2ad_alloc
,
1911 l2hdr
->b_asize
, hdr
);
1914 hdr
->b_flags
&= ~ARC_FLAG_HAS_L2HDR
;
1918 arc_hdr_destroy(arc_buf_hdr_t
*hdr
)
1920 if (HDR_HAS_L1HDR(hdr
)) {
1921 ASSERT(hdr
->b_l1hdr
.b_buf
== NULL
||
1922 hdr
->b_l1hdr
.b_datacnt
> 0);
1923 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
1924 ASSERT3P(hdr
->b_l1hdr
.b_state
, ==, arc_anon
);
1926 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
1927 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
1929 if (HDR_HAS_L2HDR(hdr
)) {
1930 l2arc_dev_t
*dev
= hdr
->b_l2hdr
.b_dev
;
1931 boolean_t buflist_held
= MUTEX_HELD(&dev
->l2ad_mtx
);
1934 mutex_enter(&dev
->l2ad_mtx
);
1937 * Even though we checked this conditional above, we
1938 * need to check this again now that we have the
1939 * l2ad_mtx. This is because we could be racing with
1940 * another thread calling l2arc_evict() which might have
1941 * destroyed this header's L2 portion as we were waiting
1942 * to acquire the l2ad_mtx. If that happens, we don't
1943 * want to re-destroy the header's L2 portion.
1945 if (HDR_HAS_L2HDR(hdr
))
1946 arc_hdr_l2hdr_destroy(hdr
);
1949 mutex_exit(&dev
->l2ad_mtx
);
1952 if (!BUF_EMPTY(hdr
))
1953 buf_discard_identity(hdr
);
1955 if (hdr
->b_freeze_cksum
!= NULL
) {
1956 kmem_free(hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
1957 hdr
->b_freeze_cksum
= NULL
;
1960 if (HDR_HAS_L1HDR(hdr
)) {
1961 while (hdr
->b_l1hdr
.b_buf
) {
1962 arc_buf_t
*buf
= hdr
->b_l1hdr
.b_buf
;
1964 if (buf
->b_efunc
!= NULL
) {
1965 mutex_enter(&arc_user_evicts_lock
);
1966 mutex_enter(&buf
->b_evict_lock
);
1967 ASSERT(buf
->b_hdr
!= NULL
);
1968 arc_buf_destroy(hdr
->b_l1hdr
.b_buf
, FALSE
);
1969 hdr
->b_l1hdr
.b_buf
= buf
->b_next
;
1970 buf
->b_hdr
= &arc_eviction_hdr
;
1971 buf
->b_next
= arc_eviction_list
;
1972 arc_eviction_list
= buf
;
1973 mutex_exit(&buf
->b_evict_lock
);
1974 cv_signal(&arc_user_evicts_cv
);
1975 mutex_exit(&arc_user_evicts_lock
);
1977 arc_buf_destroy(hdr
->b_l1hdr
.b_buf
, TRUE
);
1982 ASSERT3P(hdr
->b_hash_next
, ==, NULL
);
1983 if (HDR_HAS_L1HDR(hdr
)) {
1984 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
1985 ASSERT3P(hdr
->b_l1hdr
.b_acb
, ==, NULL
);
1986 kmem_cache_free(hdr_full_cache
, hdr
);
1988 kmem_cache_free(hdr_l2only_cache
, hdr
);
1993 arc_buf_free(arc_buf_t
*buf
, void *tag
)
1995 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1996 int hashed
= hdr
->b_l1hdr
.b_state
!= arc_anon
;
1998 ASSERT(buf
->b_efunc
== NULL
);
1999 ASSERT(buf
->b_data
!= NULL
);
2002 kmutex_t
*hash_lock
= HDR_LOCK(hdr
);
2004 mutex_enter(hash_lock
);
2006 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
2008 (void) remove_reference(hdr
, hash_lock
, tag
);
2009 if (hdr
->b_l1hdr
.b_datacnt
> 1) {
2010 arc_buf_destroy(buf
, TRUE
);
2012 ASSERT(buf
== hdr
->b_l1hdr
.b_buf
);
2013 ASSERT(buf
->b_efunc
== NULL
);
2014 hdr
->b_flags
|= ARC_FLAG_BUF_AVAILABLE
;
2016 mutex_exit(hash_lock
);
2017 } else if (HDR_IO_IN_PROGRESS(hdr
)) {
2020 * We are in the middle of an async write. Don't destroy
2021 * this buffer unless the write completes before we finish
2022 * decrementing the reference count.
2024 mutex_enter(&arc_user_evicts_lock
);
2025 (void) remove_reference(hdr
, NULL
, tag
);
2026 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
2027 destroy_hdr
= !HDR_IO_IN_PROGRESS(hdr
);
2028 mutex_exit(&arc_user_evicts_lock
);
2030 arc_hdr_destroy(hdr
);
2032 if (remove_reference(hdr
, NULL
, tag
) > 0)
2033 arc_buf_destroy(buf
, TRUE
);
2035 arc_hdr_destroy(hdr
);
2040 arc_buf_remove_ref(arc_buf_t
*buf
, void* tag
)
2042 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
2043 kmutex_t
*hash_lock
= NULL
;
2044 boolean_t no_callback
= (buf
->b_efunc
== NULL
);
2046 if (hdr
->b_l1hdr
.b_state
== arc_anon
) {
2047 ASSERT(hdr
->b_l1hdr
.b_datacnt
== 1);
2048 arc_buf_free(buf
, tag
);
2049 return (no_callback
);
2052 hash_lock
= HDR_LOCK(hdr
);
2053 mutex_enter(hash_lock
);
2055 ASSERT(hdr
->b_l1hdr
.b_datacnt
> 0);
2056 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
2057 ASSERT(hdr
->b_l1hdr
.b_state
!= arc_anon
);
2058 ASSERT(buf
->b_data
!= NULL
);
2060 (void) remove_reference(hdr
, hash_lock
, tag
);
2061 if (hdr
->b_l1hdr
.b_datacnt
> 1) {
2063 arc_buf_destroy(buf
, TRUE
);
2064 } else if (no_callback
) {
2065 ASSERT(hdr
->b_l1hdr
.b_buf
== buf
&& buf
->b_next
== NULL
);
2066 ASSERT(buf
->b_efunc
== NULL
);
2067 hdr
->b_flags
|= ARC_FLAG_BUF_AVAILABLE
;
2069 ASSERT(no_callback
|| hdr
->b_l1hdr
.b_datacnt
> 1 ||
2070 refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
2071 mutex_exit(hash_lock
);
2072 return (no_callback
);
2076 arc_buf_size(arc_buf_t
*buf
)
2078 return (buf
->b_hdr
->b_size
);
2082 * Called from the DMU to determine if the current buffer should be
2083 * evicted. In order to ensure proper locking, the eviction must be initiated
2084 * from the DMU. Return true if the buffer is associated with user data and
2085 * duplicate buffers still exist.
2088 arc_buf_eviction_needed(arc_buf_t
*buf
)
2091 boolean_t evict_needed
= B_FALSE
;
2093 if (zfs_disable_dup_eviction
)
2096 mutex_enter(&buf
->b_evict_lock
);
2100 * We are in arc_do_user_evicts(); let that function
2101 * perform the eviction.
2103 ASSERT(buf
->b_data
== NULL
);
2104 mutex_exit(&buf
->b_evict_lock
);
2106 } else if (buf
->b_data
== NULL
) {
2108 * We have already been added to the arc eviction list;
2109 * recommend eviction.
2111 ASSERT3P(hdr
, ==, &arc_eviction_hdr
);
2112 mutex_exit(&buf
->b_evict_lock
);
2116 if (hdr
->b_l1hdr
.b_datacnt
> 1 && HDR_ISTYPE_DATA(hdr
))
2117 evict_needed
= B_TRUE
;
2119 mutex_exit(&buf
->b_evict_lock
);
2120 return (evict_needed
);
2124 * Evict the arc_buf_hdr that is provided as a parameter. The resultant
2125 * state of the header is dependent on its state prior to entering this
2126 * function. The following transitions are possible:
2128 * - arc_mru -> arc_mru_ghost
2129 * - arc_mfu -> arc_mfu_ghost
2130 * - arc_mru_ghost -> arc_l2c_only
2131 * - arc_mru_ghost -> deleted
2132 * - arc_mfu_ghost -> arc_l2c_only
2133 * - arc_mfu_ghost -> deleted
2136 arc_evict_hdr(arc_buf_hdr_t
*hdr
, kmutex_t
*hash_lock
)
2138 arc_state_t
*evicted_state
, *state
;
2139 int64_t bytes_evicted
= 0;
2141 ASSERT(MUTEX_HELD(hash_lock
));
2142 ASSERT(HDR_HAS_L1HDR(hdr
));
2144 state
= hdr
->b_l1hdr
.b_state
;
2145 if (GHOST_STATE(state
)) {
2146 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
2147 ASSERT(hdr
->b_l1hdr
.b_buf
== NULL
);
2150 * l2arc_write_buffers() relies on a header's L1 portion
2151 * (i.e. its b_tmp_cdata field) during its write phase.
2152 * Thus, we cannot push a header onto the arc_l2c_only
2153 * state (removing its L1 piece) until the header is
2154 * done being written to the l2arc.
2156 if (HDR_HAS_L2HDR(hdr
) && HDR_L2_WRITING(hdr
)) {
2157 ARCSTAT_BUMP(arcstat_evict_l2_skip
);
2158 return (bytes_evicted
);
2161 ARCSTAT_BUMP(arcstat_deleted
);
2162 bytes_evicted
+= hdr
->b_size
;
2164 DTRACE_PROBE1(arc__delete
, arc_buf_hdr_t
*, hdr
);
2166 if (HDR_HAS_L2HDR(hdr
)) {
2168 * This buffer is cached on the 2nd Level ARC;
2169 * don't destroy the header.
2171 arc_change_state(arc_l2c_only
, hdr
, hash_lock
);
2173 * dropping from L1+L2 cached to L2-only,
2174 * realloc to remove the L1 header.
2176 hdr
= arc_hdr_realloc(hdr
, hdr_full_cache
,
2179 arc_change_state(arc_anon
, hdr
, hash_lock
);
2180 arc_hdr_destroy(hdr
);
2182 return (bytes_evicted
);
2185 ASSERT(state
== arc_mru
|| state
== arc_mfu
);
2186 evicted_state
= (state
== arc_mru
) ? arc_mru_ghost
: arc_mfu_ghost
;
2188 /* prefetch buffers have a minimum lifespan */
2189 if (HDR_IO_IN_PROGRESS(hdr
) ||
2190 ((hdr
->b_flags
& (ARC_FLAG_PREFETCH
| ARC_FLAG_INDIRECT
)) &&
2191 ddi_get_lbolt() - hdr
->b_l1hdr
.b_arc_access
<
2192 arc_min_prefetch_lifespan
)) {
2193 ARCSTAT_BUMP(arcstat_evict_skip
);
2194 return (bytes_evicted
);
2197 ASSERT0(refcount_count(&hdr
->b_l1hdr
.b_refcnt
));
2198 ASSERT3U(hdr
->b_l1hdr
.b_datacnt
, >, 0);
2199 while (hdr
->b_l1hdr
.b_buf
) {
2200 arc_buf_t
*buf
= hdr
->b_l1hdr
.b_buf
;
2201 if (!mutex_tryenter(&buf
->b_evict_lock
)) {
2202 ARCSTAT_BUMP(arcstat_mutex_miss
);
2205 if (buf
->b_data
!= NULL
)
2206 bytes_evicted
+= hdr
->b_size
;
2207 if (buf
->b_efunc
!= NULL
) {
2208 mutex_enter(&arc_user_evicts_lock
);
2209 arc_buf_destroy(buf
, FALSE
);
2210 hdr
->b_l1hdr
.b_buf
= buf
->b_next
;
2211 buf
->b_hdr
= &arc_eviction_hdr
;
2212 buf
->b_next
= arc_eviction_list
;
2213 arc_eviction_list
= buf
;
2214 cv_signal(&arc_user_evicts_cv
);
2215 mutex_exit(&arc_user_evicts_lock
);
2216 mutex_exit(&buf
->b_evict_lock
);
2218 mutex_exit(&buf
->b_evict_lock
);
2219 arc_buf_destroy(buf
, TRUE
);
2223 if (HDR_HAS_L2HDR(hdr
)) {
2224 ARCSTAT_INCR(arcstat_evict_l2_cached
, hdr
->b_size
);
2226 if (l2arc_write_eligible(hdr
->b_spa
, hdr
))
2227 ARCSTAT_INCR(arcstat_evict_l2_eligible
, hdr
->b_size
);
2229 ARCSTAT_INCR(arcstat_evict_l2_ineligible
, hdr
->b_size
);
2232 if (hdr
->b_l1hdr
.b_datacnt
== 0) {
2233 arc_change_state(evicted_state
, hdr
, hash_lock
);
2234 ASSERT(HDR_IN_HASH_TABLE(hdr
));
2235 hdr
->b_flags
|= ARC_FLAG_IN_HASH_TABLE
;
2236 hdr
->b_flags
&= ~ARC_FLAG_BUF_AVAILABLE
;
2237 DTRACE_PROBE1(arc__evict
, arc_buf_hdr_t
*, hdr
);
2240 return (bytes_evicted
);
2244 arc_evict_state_impl(multilist_t
*ml
, int idx
, arc_buf_hdr_t
*marker
,
2245 uint64_t spa
, int64_t bytes
)
2247 multilist_sublist_t
*mls
;
2248 uint64_t bytes_evicted
= 0;
2250 kmutex_t
*hash_lock
;
2251 int evict_count
= 0;
2253 ASSERT3P(marker
, !=, NULL
);
2254 ASSERTV(if (bytes
< 0) ASSERT(bytes
== ARC_EVICT_ALL
));
2256 mls
= multilist_sublist_lock(ml
, idx
);
2258 for (hdr
= multilist_sublist_prev(mls
, marker
); hdr
!= NULL
;
2259 hdr
= multilist_sublist_prev(mls
, marker
)) {
2260 if ((bytes
!= ARC_EVICT_ALL
&& bytes_evicted
>= bytes
) ||
2261 (evict_count
>= zfs_arc_evict_batch_limit
))
2265 * To keep our iteration location, move the marker
2266 * forward. Since we're not holding hdr's hash lock, we
2267 * must be very careful and not remove 'hdr' from the
2268 * sublist. Otherwise, other consumers might mistake the
2269 * 'hdr' as not being on a sublist when they call the
2270 * multilist_link_active() function (they all rely on
2271 * the hash lock protecting concurrent insertions and
2272 * removals). multilist_sublist_move_forward() was
2273 * specifically implemented to ensure this is the case
2274 * (only 'marker' will be removed and re-inserted).
2276 multilist_sublist_move_forward(mls
, marker
);
2279 * The only case where the b_spa field should ever be
2280 * zero, is the marker headers inserted by
2281 * arc_evict_state(). It's possible for multiple threads
2282 * to be calling arc_evict_state() concurrently (e.g.
2283 * dsl_pool_close() and zio_inject_fault()), so we must
2284 * skip any markers we see from these other threads.
2286 if (hdr
->b_spa
== 0)
2289 /* we're only interested in evicting buffers of a certain spa */
2290 if (spa
!= 0 && hdr
->b_spa
!= spa
) {
2291 ARCSTAT_BUMP(arcstat_evict_skip
);
2295 hash_lock
= HDR_LOCK(hdr
);
2298 * We aren't calling this function from any code path
2299 * that would already be holding a hash lock, so we're
2300 * asserting on this assumption to be defensive in case
2301 * this ever changes. Without this check, it would be
2302 * possible to incorrectly increment arcstat_mutex_miss
2303 * below (e.g. if the code changed such that we called
2304 * this function with a hash lock held).
2306 ASSERT(!MUTEX_HELD(hash_lock
));
2308 if (mutex_tryenter(hash_lock
)) {
2309 uint64_t evicted
= arc_evict_hdr(hdr
, hash_lock
);
2310 mutex_exit(hash_lock
);
2312 bytes_evicted
+= evicted
;
2315 * If evicted is zero, arc_evict_hdr() must have
2316 * decided to skip this header, don't increment
2317 * evict_count in this case.
2323 * If arc_size isn't overflowing, signal any
2324 * threads that might happen to be waiting.
2326 * For each header evicted, we wake up a single
2327 * thread. If we used cv_broadcast, we could
2328 * wake up "too many" threads causing arc_size
2329 * to significantly overflow arc_c; since
2330 * arc_get_data_buf() doesn't check for overflow
2331 * when it's woken up (it doesn't because it's
2332 * possible for the ARC to be overflowing while
2333 * full of un-evictable buffers, and the
2334 * function should proceed in this case).
2336 * If threads are left sleeping, due to not
2337 * using cv_broadcast, they will be woken up
2338 * just before arc_reclaim_thread() sleeps.
2340 mutex_enter(&arc_reclaim_lock
);
2341 if (!arc_is_overflowing())
2342 cv_signal(&arc_reclaim_waiters_cv
);
2343 mutex_exit(&arc_reclaim_lock
);
2345 ARCSTAT_BUMP(arcstat_mutex_miss
);
2349 multilist_sublist_unlock(mls
);
2351 return (bytes_evicted
);
2355 * Evict buffers from the given arc state, until we've removed the
2356 * specified number of bytes. Move the removed buffers to the
2357 * appropriate evict state.
2359 * This function makes a "best effort". It skips over any buffers
2360 * it can't get a hash_lock on, and so, may not catch all candidates.
2361 * It may also return without evicting as much space as requested.
2363 * If bytes is specified using the special value ARC_EVICT_ALL, this
2364 * will evict all available (i.e. unlocked and evictable) buffers from
2365 * the given arc state; which is used by arc_flush().
2368 arc_evict_state(arc_state_t
*state
, uint64_t spa
, int64_t bytes
,
2369 arc_buf_contents_t type
)
2371 uint64_t total_evicted
= 0;
2372 multilist_t
*ml
= &state
->arcs_list
[type
];
2374 arc_buf_hdr_t
**markers
;
2377 ASSERTV(if (bytes
< 0) ASSERT(bytes
== ARC_EVICT_ALL
));
2379 num_sublists
= multilist_get_num_sublists(ml
);
2382 * If we've tried to evict from each sublist, made some
2383 * progress, but still have not hit the target number of bytes
2384 * to evict, we want to keep trying. The markers allow us to
2385 * pick up where we left off for each individual sublist, rather
2386 * than starting from the tail each time.
2388 markers
= kmem_zalloc(sizeof (*markers
) * num_sublists
, KM_SLEEP
);
2389 for (i
= 0; i
< num_sublists
; i
++) {
2390 multilist_sublist_t
*mls
;
2392 markers
[i
] = kmem_cache_alloc(hdr_full_cache
, KM_SLEEP
);
2395 * A b_spa of 0 is used to indicate that this header is
2396 * a marker. This fact is used in arc_adjust_type() and
2397 * arc_evict_state_impl().
2399 markers
[i
]->b_spa
= 0;
2401 mls
= multilist_sublist_lock(ml
, i
);
2402 multilist_sublist_insert_tail(mls
, markers
[i
]);
2403 multilist_sublist_unlock(mls
);
2407 * While we haven't hit our target number of bytes to evict, or
2408 * we're evicting all available buffers.
2410 while (total_evicted
< bytes
|| bytes
== ARC_EVICT_ALL
) {
2412 * Start eviction using a randomly selected sublist,
2413 * this is to try and evenly balance eviction across all
2414 * sublists. Always starting at the same sublist
2415 * (e.g. index 0) would cause evictions to favor certain
2416 * sublists over others.
2418 int sublist_idx
= multilist_get_random_index(ml
);
2419 uint64_t scan_evicted
= 0;
2421 for (i
= 0; i
< num_sublists
; i
++) {
2422 uint64_t bytes_remaining
;
2423 uint64_t bytes_evicted
;
2425 if (bytes
== ARC_EVICT_ALL
)
2426 bytes_remaining
= ARC_EVICT_ALL
;
2427 else if (total_evicted
< bytes
)
2428 bytes_remaining
= bytes
- total_evicted
;
2432 bytes_evicted
= arc_evict_state_impl(ml
, sublist_idx
,
2433 markers
[sublist_idx
], spa
, bytes_remaining
);
2435 scan_evicted
+= bytes_evicted
;
2436 total_evicted
+= bytes_evicted
;
2438 /* we've reached the end, wrap to the beginning */
2439 if (++sublist_idx
>= num_sublists
)
2444 * If we didn't evict anything during this scan, we have
2445 * no reason to believe we'll evict more during another
2446 * scan, so break the loop.
2448 if (scan_evicted
== 0) {
2449 /* This isn't possible, let's make that obvious */
2450 ASSERT3S(bytes
, !=, 0);
2453 * When bytes is ARC_EVICT_ALL, the only way to
2454 * break the loop is when scan_evicted is zero.
2455 * In that case, we actually have evicted enough,
2456 * so we don't want to increment the kstat.
2458 if (bytes
!= ARC_EVICT_ALL
) {
2459 ASSERT3S(total_evicted
, <, bytes
);
2460 ARCSTAT_BUMP(arcstat_evict_not_enough
);
2467 for (i
= 0; i
< num_sublists
; i
++) {
2468 multilist_sublist_t
*mls
= multilist_sublist_lock(ml
, i
);
2469 multilist_sublist_remove(mls
, markers
[i
]);
2470 multilist_sublist_unlock(mls
);
2472 kmem_cache_free(hdr_full_cache
, markers
[i
]);
2474 kmem_free(markers
, sizeof (*markers
) * num_sublists
);
2476 return (total_evicted
);
2480 * Flush all "evictable" data of the given type from the arc state
2481 * specified. This will not evict any "active" buffers (i.e. referenced).
2483 * When 'retry' is set to FALSE, the function will make a single pass
2484 * over the state and evict any buffers that it can. Since it doesn't
2485 * continually retry the eviction, it might end up leaving some buffers
2486 * in the ARC due to lock misses.
2488 * When 'retry' is set to TRUE, the function will continually retry the
2489 * eviction until *all* evictable buffers have been removed from the
2490 * state. As a result, if concurrent insertions into the state are
2491 * allowed (e.g. if the ARC isn't shutting down), this function might
2492 * wind up in an infinite loop, continually trying to evict buffers.
2495 arc_flush_state(arc_state_t
*state
, uint64_t spa
, arc_buf_contents_t type
,
2498 uint64_t evicted
= 0;
2500 while (state
->arcs_lsize
[type
] != 0) {
2501 evicted
+= arc_evict_state(state
, spa
, ARC_EVICT_ALL
, type
);
2511 * Helper function for arc_prune() it is responsible for safely handling
2512 * the execution of a registered arc_prune_func_t.
2515 arc_prune_task(void *ptr
)
2517 arc_prune_t
*ap
= (arc_prune_t
*)ptr
;
2518 arc_prune_func_t
*func
= ap
->p_pfunc
;
2521 func(ap
->p_adjust
, ap
->p_private
);
2523 /* Callback unregistered concurrently with execution */
2524 if (refcount_remove(&ap
->p_refcnt
, func
) == 0) {
2525 ASSERT(!list_link_active(&ap
->p_node
));
2526 refcount_destroy(&ap
->p_refcnt
);
2527 kmem_free(ap
, sizeof (*ap
));
2532 * Notify registered consumers they must drop holds on a portion of the ARC
2533 * buffered they reference. This provides a mechanism to ensure the ARC can
2534 * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
2535 * is analogous to dnlc_reduce_cache() but more generic.
2537 * This operation is performed asyncronously so it may be safely called
2538 * in the context of the arc_adapt_thread(). A reference is taken here
2539 * for each registered arc_prune_t and the arc_prune_task() is responsible
2540 * for releasing it once the registered arc_prune_func_t has completed.
2543 arc_prune_async(int64_t adjust
)
2547 mutex_enter(&arc_prune_mtx
);
2548 for (ap
= list_head(&arc_prune_list
); ap
!= NULL
;
2549 ap
= list_next(&arc_prune_list
, ap
)) {
2551 if (refcount_count(&ap
->p_refcnt
) >= 2)
2554 refcount_add(&ap
->p_refcnt
, ap
->p_pfunc
);
2555 ap
->p_adjust
= adjust
;
2556 taskq_dispatch(arc_prune_taskq
, arc_prune_task
, ap
, TQ_SLEEP
);
2557 ARCSTAT_BUMP(arcstat_prune
);
2559 mutex_exit(&arc_prune_mtx
);
2563 arc_prune(int64_t adjust
)
2565 arc_prune_async(adjust
);
2566 taskq_wait_outstanding(arc_prune_taskq
, 0);
2570 * Evict the specified number of bytes from the state specified,
2571 * restricting eviction to the spa and type given. This function
2572 * prevents us from trying to evict more from a state's list than
2573 * is "evictable", and to skip evicting altogether when passed a
2574 * negative value for "bytes". In contrast, arc_evict_state() will
2575 * evict everything it can, when passed a negative value for "bytes".
2578 arc_adjust_impl(arc_state_t
*state
, uint64_t spa
, int64_t bytes
,
2579 arc_buf_contents_t type
)
2583 if (bytes
> 0 && state
->arcs_lsize
[type
] > 0) {
2584 delta
= MIN(state
->arcs_lsize
[type
], bytes
);
2585 return (arc_evict_state(state
, spa
, delta
, type
));
2592 * The goal of this function is to evict enough meta data buffers from the
2593 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly
2594 * more complicated than it appears because it is common for data buffers
2595 * to have holds on meta data buffers. In addition, dnode meta data buffers
2596 * will be held by the dnodes in the block preventing them from being freed.
2597 * This means we can't simply traverse the ARC and expect to always find
2598 * enough unheld meta data buffer to release.
2600 * Therefore, this function has been updated to make alternating passes
2601 * over the ARC releasing data buffers and then newly unheld meta data
2602 * buffers. This ensures forward progress is maintained and arc_meta_used
2603 * will decrease. Normally this is sufficient, but if required the ARC
2604 * will call the registered prune callbacks causing dentry and inodes to
2605 * be dropped from the VFS cache. This will make dnode meta data buffers
2606 * available for reclaim.
2609 arc_adjust_meta_balanced(void)
2611 int64_t adjustmnt
, delta
, prune
= 0;
2612 uint64_t total_evicted
= 0;
2613 arc_buf_contents_t type
= ARC_BUFC_DATA
;
2614 unsigned long restarts
= zfs_arc_meta_adjust_restarts
;
2618 * This slightly differs than the way we evict from the mru in
2619 * arc_adjust because we don't have a "target" value (i.e. no
2620 * "meta" arc_p). As a result, I think we can completely
2621 * cannibalize the metadata in the MRU before we evict the
2622 * metadata from the MFU. I think we probably need to implement a
2623 * "metadata arc_p" value to do this properly.
2625 adjustmnt
= arc_meta_used
- arc_meta_limit
;
2627 if (adjustmnt
> 0 && arc_mru
->arcs_lsize
[type
] > 0) {
2628 delta
= MIN(arc_mru
->arcs_lsize
[type
], adjustmnt
);
2629 total_evicted
+= arc_adjust_impl(arc_mru
, 0, delta
, type
);
2634 * We can't afford to recalculate adjustmnt here. If we do,
2635 * new metadata buffers can sneak into the MRU or ANON lists,
2636 * thus penalize the MFU metadata. Although the fudge factor is
2637 * small, it has been empirically shown to be significant for
2638 * certain workloads (e.g. creating many empty directories). As
2639 * such, we use the original calculation for adjustmnt, and
2640 * simply decrement the amount of data evicted from the MRU.
2643 if (adjustmnt
> 0 && arc_mfu
->arcs_lsize
[type
] > 0) {
2644 delta
= MIN(arc_mfu
->arcs_lsize
[type
], adjustmnt
);
2645 total_evicted
+= arc_adjust_impl(arc_mfu
, 0, delta
, type
);
2648 adjustmnt
= arc_meta_used
- arc_meta_limit
;
2650 if (adjustmnt
> 0 && arc_mru_ghost
->arcs_lsize
[type
] > 0) {
2651 delta
= MIN(adjustmnt
,
2652 arc_mru_ghost
->arcs_lsize
[type
]);
2653 total_evicted
+= arc_adjust_impl(arc_mru_ghost
, 0, delta
, type
);
2657 if (adjustmnt
> 0 && arc_mfu_ghost
->arcs_lsize
[type
] > 0) {
2658 delta
= MIN(adjustmnt
,
2659 arc_mfu_ghost
->arcs_lsize
[type
]);
2660 total_evicted
+= arc_adjust_impl(arc_mfu_ghost
, 0, delta
, type
);
2664 * If after attempting to make the requested adjustment to the ARC
2665 * the meta limit is still being exceeded then request that the
2666 * higher layers drop some cached objects which have holds on ARC
2667 * meta buffers. Requests to the upper layers will be made with
2668 * increasingly large scan sizes until the ARC is below the limit.
2670 if (arc_meta_used
> arc_meta_limit
) {
2671 if (type
== ARC_BUFC_DATA
) {
2672 type
= ARC_BUFC_METADATA
;
2674 type
= ARC_BUFC_DATA
;
2676 if (zfs_arc_meta_prune
) {
2677 prune
+= zfs_arc_meta_prune
;
2678 arc_prune_async(prune
);
2687 return (total_evicted
);
2691 * Evict metadata buffers from the cache, such that arc_meta_used is
2692 * capped by the arc_meta_limit tunable.
2695 arc_adjust_meta_only(void)
2697 uint64_t total_evicted
= 0;
2701 * If we're over the meta limit, we want to evict enough
2702 * metadata to get back under the meta limit. We don't want to
2703 * evict so much that we drop the MRU below arc_p, though. If
2704 * we're over the meta limit more than we're over arc_p, we
2705 * evict some from the MRU here, and some from the MFU below.
2707 target
= MIN((int64_t)(arc_meta_used
- arc_meta_limit
),
2708 (int64_t)(arc_anon
->arcs_size
+ arc_mru
->arcs_size
- arc_p
));
2710 total_evicted
+= arc_adjust_impl(arc_mru
, 0, target
, ARC_BUFC_METADATA
);
2713 * Similar to the above, we want to evict enough bytes to get us
2714 * below the meta limit, but not so much as to drop us below the
2715 * space alloted to the MFU (which is defined as arc_c - arc_p).
2717 target
= MIN((int64_t)(arc_meta_used
- arc_meta_limit
),
2718 (int64_t)(arc_mfu
->arcs_size
- (arc_c
- arc_p
)));
2720 total_evicted
+= arc_adjust_impl(arc_mfu
, 0, target
, ARC_BUFC_METADATA
);
2722 return (total_evicted
);
2726 arc_adjust_meta(void)
2728 if (zfs_arc_meta_strategy
== ARC_STRATEGY_META_ONLY
)
2729 return (arc_adjust_meta_only());
2731 return (arc_adjust_meta_balanced());
2735 * Return the type of the oldest buffer in the given arc state
2737 * This function will select a random sublist of type ARC_BUFC_DATA and
2738 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist
2739 * is compared, and the type which contains the "older" buffer will be
2742 static arc_buf_contents_t
2743 arc_adjust_type(arc_state_t
*state
)
2745 multilist_t
*data_ml
= &state
->arcs_list
[ARC_BUFC_DATA
];
2746 multilist_t
*meta_ml
= &state
->arcs_list
[ARC_BUFC_METADATA
];
2747 int data_idx
= multilist_get_random_index(data_ml
);
2748 int meta_idx
= multilist_get_random_index(meta_ml
);
2749 multilist_sublist_t
*data_mls
;
2750 multilist_sublist_t
*meta_mls
;
2751 arc_buf_contents_t type
;
2752 arc_buf_hdr_t
*data_hdr
;
2753 arc_buf_hdr_t
*meta_hdr
;
2756 * We keep the sublist lock until we're finished, to prevent
2757 * the headers from being destroyed via arc_evict_state().
2759 data_mls
= multilist_sublist_lock(data_ml
, data_idx
);
2760 meta_mls
= multilist_sublist_lock(meta_ml
, meta_idx
);
2763 * These two loops are to ensure we skip any markers that
2764 * might be at the tail of the lists due to arc_evict_state().
2767 for (data_hdr
= multilist_sublist_tail(data_mls
); data_hdr
!= NULL
;
2768 data_hdr
= multilist_sublist_prev(data_mls
, data_hdr
)) {
2769 if (data_hdr
->b_spa
!= 0)
2773 for (meta_hdr
= multilist_sublist_tail(meta_mls
); meta_hdr
!= NULL
;
2774 meta_hdr
= multilist_sublist_prev(meta_mls
, meta_hdr
)) {
2775 if (meta_hdr
->b_spa
!= 0)
2779 if (data_hdr
== NULL
&& meta_hdr
== NULL
) {
2780 type
= ARC_BUFC_DATA
;
2781 } else if (data_hdr
== NULL
) {
2782 ASSERT3P(meta_hdr
, !=, NULL
);
2783 type
= ARC_BUFC_METADATA
;
2784 } else if (meta_hdr
== NULL
) {
2785 ASSERT3P(data_hdr
, !=, NULL
);
2786 type
= ARC_BUFC_DATA
;
2788 ASSERT3P(data_hdr
, !=, NULL
);
2789 ASSERT3P(meta_hdr
, !=, NULL
);
2791 /* The headers can't be on the sublist without an L1 header */
2792 ASSERT(HDR_HAS_L1HDR(data_hdr
));
2793 ASSERT(HDR_HAS_L1HDR(meta_hdr
));
2795 if (data_hdr
->b_l1hdr
.b_arc_access
<
2796 meta_hdr
->b_l1hdr
.b_arc_access
) {
2797 type
= ARC_BUFC_DATA
;
2799 type
= ARC_BUFC_METADATA
;
2803 multilist_sublist_unlock(meta_mls
);
2804 multilist_sublist_unlock(data_mls
);
2810 * Evict buffers from the cache, such that arc_size is capped by arc_c.
2815 uint64_t total_evicted
= 0;
2820 * If we're over arc_meta_limit, we want to correct that before
2821 * potentially evicting data buffers below.
2823 total_evicted
+= arc_adjust_meta();
2828 * If we're over the target cache size, we want to evict enough
2829 * from the list to get back to our target size. We don't want
2830 * to evict too much from the MRU, such that it drops below
2831 * arc_p. So, if we're over our target cache size more than
2832 * the MRU is over arc_p, we'll evict enough to get back to
2833 * arc_p here, and then evict more from the MFU below.
2835 target
= MIN((int64_t)(arc_size
- arc_c
),
2836 (int64_t)(arc_anon
->arcs_size
+ arc_mru
->arcs_size
+ arc_meta_used
-
2840 * If we're below arc_meta_min, always prefer to evict data.
2841 * Otherwise, try to satisfy the requested number of bytes to
2842 * evict from the type which contains older buffers; in an
2843 * effort to keep newer buffers in the cache regardless of their
2844 * type. If we cannot satisfy the number of bytes from this
2845 * type, spill over into the next type.
2847 if (arc_adjust_type(arc_mru
) == ARC_BUFC_METADATA
&&
2848 arc_meta_used
> arc_meta_min
) {
2849 bytes
= arc_adjust_impl(arc_mru
, 0, target
, ARC_BUFC_METADATA
);
2850 total_evicted
+= bytes
;
2853 * If we couldn't evict our target number of bytes from
2854 * metadata, we try to get the rest from data.
2859 arc_adjust_impl(arc_mru
, 0, target
, ARC_BUFC_DATA
);
2861 bytes
= arc_adjust_impl(arc_mru
, 0, target
, ARC_BUFC_DATA
);
2862 total_evicted
+= bytes
;
2865 * If we couldn't evict our target number of bytes from
2866 * data, we try to get the rest from metadata.
2871 arc_adjust_impl(arc_mru
, 0, target
, ARC_BUFC_METADATA
);
2877 * Now that we've tried to evict enough from the MRU to get its
2878 * size back to arc_p, if we're still above the target cache
2879 * size, we evict the rest from the MFU.
2881 target
= arc_size
- arc_c
;
2883 if (arc_adjust_type(arc_mru
) == ARC_BUFC_METADATA
&&
2884 arc_meta_used
> arc_meta_min
) {
2885 bytes
= arc_adjust_impl(arc_mfu
, 0, target
, ARC_BUFC_METADATA
);
2886 total_evicted
+= bytes
;
2889 * If we couldn't evict our target number of bytes from
2890 * metadata, we try to get the rest from data.
2895 arc_adjust_impl(arc_mfu
, 0, target
, ARC_BUFC_DATA
);
2897 bytes
= arc_adjust_impl(arc_mfu
, 0, target
, ARC_BUFC_DATA
);
2898 total_evicted
+= bytes
;
2901 * If we couldn't evict our target number of bytes from
2902 * data, we try to get the rest from data.
2907 arc_adjust_impl(arc_mfu
, 0, target
, ARC_BUFC_METADATA
);
2911 * Adjust ghost lists
2913 * In addition to the above, the ARC also defines target values
2914 * for the ghost lists. The sum of the mru list and mru ghost
2915 * list should never exceed the target size of the cache, and
2916 * the sum of the mru list, mfu list, mru ghost list, and mfu
2917 * ghost list should never exceed twice the target size of the
2918 * cache. The following logic enforces these limits on the ghost
2919 * caches, and evicts from them as needed.
2921 target
= arc_mru
->arcs_size
+ arc_mru_ghost
->arcs_size
- arc_c
;
2923 bytes
= arc_adjust_impl(arc_mru_ghost
, 0, target
, ARC_BUFC_DATA
);
2924 total_evicted
+= bytes
;
2929 arc_adjust_impl(arc_mru_ghost
, 0, target
, ARC_BUFC_METADATA
);
2932 * We assume the sum of the mru list and mfu list is less than
2933 * or equal to arc_c (we enforced this above), which means we
2934 * can use the simpler of the two equations below:
2936 * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
2937 * mru ghost + mfu ghost <= arc_c
2939 target
= arc_mru_ghost
->arcs_size
+ arc_mfu_ghost
->arcs_size
- arc_c
;
2941 bytes
= arc_adjust_impl(arc_mfu_ghost
, 0, target
, ARC_BUFC_DATA
);
2942 total_evicted
+= bytes
;
2947 arc_adjust_impl(arc_mfu_ghost
, 0, target
, ARC_BUFC_METADATA
);
2949 return (total_evicted
);
2953 arc_do_user_evicts(void)
2955 mutex_enter(&arc_user_evicts_lock
);
2956 while (arc_eviction_list
!= NULL
) {
2957 arc_buf_t
*buf
= arc_eviction_list
;
2958 arc_eviction_list
= buf
->b_next
;
2959 mutex_enter(&buf
->b_evict_lock
);
2961 mutex_exit(&buf
->b_evict_lock
);
2962 mutex_exit(&arc_user_evicts_lock
);
2964 if (buf
->b_efunc
!= NULL
)
2965 VERIFY0(buf
->b_efunc(buf
->b_private
));
2967 buf
->b_efunc
= NULL
;
2968 buf
->b_private
= NULL
;
2969 kmem_cache_free(buf_cache
, buf
);
2970 mutex_enter(&arc_user_evicts_lock
);
2972 mutex_exit(&arc_user_evicts_lock
);
2976 arc_flush(spa_t
*spa
, boolean_t retry
)
2981 * If retry is TRUE, a spa must not be specified since we have
2982 * no good way to determine if all of a spa's buffers have been
2983 * evicted from an arc state.
2985 ASSERT(!retry
|| spa
== 0);
2988 guid
= spa_load_guid(spa
);
2990 (void) arc_flush_state(arc_mru
, guid
, ARC_BUFC_DATA
, retry
);
2991 (void) arc_flush_state(arc_mru
, guid
, ARC_BUFC_METADATA
, retry
);
2993 (void) arc_flush_state(arc_mfu
, guid
, ARC_BUFC_DATA
, retry
);
2994 (void) arc_flush_state(arc_mfu
, guid
, ARC_BUFC_METADATA
, retry
);
2996 (void) arc_flush_state(arc_mru_ghost
, guid
, ARC_BUFC_DATA
, retry
);
2997 (void) arc_flush_state(arc_mru_ghost
, guid
, ARC_BUFC_METADATA
, retry
);
2999 (void) arc_flush_state(arc_mfu_ghost
, guid
, ARC_BUFC_DATA
, retry
);
3000 (void) arc_flush_state(arc_mfu_ghost
, guid
, ARC_BUFC_METADATA
, retry
);
3002 arc_do_user_evicts();
3003 ASSERT(spa
|| arc_eviction_list
== NULL
);
3007 arc_shrink(uint64_t bytes
)
3009 if (arc_c
> arc_c_min
) {
3012 to_free
= bytes
? bytes
: arc_c
>> zfs_arc_shrink_shift
;
3014 if (arc_c
> arc_c_min
+ to_free
)
3015 atomic_add_64(&arc_c
, -to_free
);
3019 to_free
= bytes
? bytes
: arc_p
>> zfs_arc_shrink_shift
;
3021 if (arc_p
> to_free
)
3022 atomic_add_64(&arc_p
, -to_free
);
3026 if (arc_c
> arc_size
)
3027 arc_c
= MAX(arc_size
, arc_c_min
);
3029 arc_p
= (arc_c
>> 1);
3030 ASSERT(arc_c
>= arc_c_min
);
3031 ASSERT((int64_t)arc_p
>= 0);
3034 if (arc_size
> arc_c
)
3035 (void) arc_adjust();
3039 arc_kmem_reap_now(arc_reclaim_strategy_t strat
, uint64_t bytes
)
3042 kmem_cache_t
*prev_cache
= NULL
;
3043 kmem_cache_t
*prev_data_cache
= NULL
;
3044 extern kmem_cache_t
*zio_buf_cache
[];
3045 extern kmem_cache_t
*zio_data_buf_cache
[];
3046 extern kmem_cache_t
*range_seg_cache
;
3048 if ((arc_meta_used
>= arc_meta_limit
) && zfs_arc_meta_prune
) {
3050 * We are exceeding our meta-data cache limit.
3051 * Prune some entries to release holds on meta-data.
3053 arc_prune(zfs_arc_meta_prune
);
3057 * An aggressive reclamation will shrink the cache size as well as
3058 * reap free buffers from the arc kmem caches.
3060 if (strat
== ARC_RECLAIM_AGGR
)
3063 for (i
= 0; i
< SPA_MAXBLOCKSIZE
>> SPA_MINBLOCKSHIFT
; i
++) {
3064 if (zio_buf_cache
[i
] != prev_cache
) {
3065 prev_cache
= zio_buf_cache
[i
];
3066 kmem_cache_reap_now(zio_buf_cache
[i
]);
3068 if (zio_data_buf_cache
[i
] != prev_data_cache
) {
3069 prev_data_cache
= zio_data_buf_cache
[i
];
3070 kmem_cache_reap_now(zio_data_buf_cache
[i
]);
3074 kmem_cache_reap_now(buf_cache
);
3075 kmem_cache_reap_now(hdr_full_cache
);
3076 kmem_cache_reap_now(hdr_l2only_cache
);
3077 kmem_cache_reap_now(range_seg_cache
);
3081 * Threads can block in arc_get_data_buf() waiting for this thread to evict
3082 * enough data and signal them to proceed. When this happens, the threads in
3083 * arc_get_data_buf() are sleeping while holding the hash lock for their
3084 * particular arc header. Thus, we must be careful to never sleep on a
3085 * hash lock in this thread. This is to prevent the following deadlock:
3087 * - Thread A sleeps on CV in arc_get_data_buf() holding hash lock "L",
3088 * waiting for the reclaim thread to signal it.
3090 * - arc_reclaim_thread() tries to acquire hash lock "L" using mutex_enter,
3091 * fails, and goes to sleep forever.
3093 * This possible deadlock is avoided by always acquiring a hash lock
3094 * using mutex_tryenter() from arc_reclaim_thread().
3097 arc_adapt_thread(void)
3100 fstrans_cookie_t cookie
;
3101 uint64_t arc_evicted
;
3103 CALLB_CPR_INIT(&cpr
, &arc_reclaim_lock
, callb_generic_cpr
, FTAG
);
3105 cookie
= spl_fstrans_mark();
3106 mutex_enter(&arc_reclaim_lock
);
3107 while (arc_reclaim_thread_exit
== 0) {
3109 arc_reclaim_strategy_t last_reclaim
= ARC_RECLAIM_CONS
;
3111 mutex_exit(&arc_reclaim_lock
);
3112 if (spa_get_random(100) == 0) {
3115 if (last_reclaim
== ARC_RECLAIM_CONS
) {
3116 last_reclaim
= ARC_RECLAIM_AGGR
;
3118 last_reclaim
= ARC_RECLAIM_CONS
;
3122 last_reclaim
= ARC_RECLAIM_AGGR
;
3126 /* reset the growth delay for every reclaim */
3127 arc_grow_time
= ddi_get_lbolt() +
3128 (zfs_arc_grow_retry
* hz
);
3130 arc_kmem_reap_now(last_reclaim
, 0);
3134 mutex_exit(&arc_reclaim_lock
);
3135 #endif /* !_KERNEL */
3137 /* No recent memory pressure allow the ARC to grow. */
3139 ddi_time_after_eq(ddi_get_lbolt(), arc_grow_time
))
3140 arc_no_grow
= FALSE
;
3142 arc_evicted
= arc_adjust();
3145 * We're either no longer overflowing, or we
3146 * can't evict anything more, so we should wake
3147 * up any threads before we go to sleep.
3149 if (arc_size
<= arc_c
|| arc_evicted
== 0)
3150 cv_broadcast(&arc_reclaim_waiters_cv
);
3152 mutex_enter(&arc_reclaim_lock
);
3154 /* block until needed, or one second, whichever is shorter */
3155 CALLB_CPR_SAFE_BEGIN(&cpr
);
3156 (void) cv_timedwait_sig(&arc_reclaim_thread_cv
,
3157 &arc_reclaim_lock
, (ddi_get_lbolt() + hz
));
3158 CALLB_CPR_SAFE_END(&cpr
, &arc_reclaim_lock
);
3161 /* Allow the module options to be changed */
3162 if (zfs_arc_max
> 64 << 20 &&
3163 zfs_arc_max
< physmem
* PAGESIZE
&&
3164 zfs_arc_max
!= arc_c_max
)
3165 arc_c_max
= zfs_arc_max
;
3167 if (zfs_arc_min
>= 2ULL << SPA_MAXBLOCKSHIFT
&&
3168 zfs_arc_min
<= arc_c_max
&&
3169 zfs_arc_min
!= arc_c_min
)
3170 arc_c_min
= zfs_arc_min
;
3172 if (zfs_arc_meta_limit
> 0 &&
3173 zfs_arc_meta_limit
<= arc_c_max
&&
3174 zfs_arc_meta_limit
!= arc_meta_limit
)
3175 arc_meta_limit
= zfs_arc_meta_limit
;
3178 arc_reclaim_thread_exit
= 0;
3179 cv_broadcast(&arc_reclaim_thread_cv
);
3180 CALLB_CPR_EXIT(&cpr
); /* drops arc_reclaim_lock */
3181 spl_fstrans_unmark(cookie
);
3186 arc_user_evicts_thread(void)
3189 fstrans_cookie_t cookie
;
3191 CALLB_CPR_INIT(&cpr
, &arc_user_evicts_lock
, callb_generic_cpr
, FTAG
);
3193 cookie
= spl_fstrans_mark();
3194 mutex_enter(&arc_user_evicts_lock
);
3195 while (!arc_user_evicts_thread_exit
) {
3196 mutex_exit(&arc_user_evicts_lock
);
3198 arc_do_user_evicts();
3201 * This is necessary in order for the mdb ::arc dcmd to
3202 * show up to date information. Since the ::arc command
3203 * does not call the kstat's update function, without
3204 * this call, the command may show stale stats for the
3205 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
3206 * with this change, the data might be up to 1 second
3207 * out of date; but that should suffice. The arc_state_t
3208 * structures can be queried directly if more accurate
3209 * information is needed.
3211 if (arc_ksp
!= NULL
)
3212 arc_ksp
->ks_update(arc_ksp
, KSTAT_READ
);
3214 mutex_enter(&arc_user_evicts_lock
);
3217 * Block until signaled, or after one second (we need to
3218 * call the arc's kstat update function regularly).
3220 CALLB_CPR_SAFE_BEGIN(&cpr
);
3221 (void) cv_timedwait_sig(&arc_user_evicts_cv
,
3222 &arc_user_evicts_lock
, ddi_get_lbolt() + hz
);
3223 CALLB_CPR_SAFE_END(&cpr
, &arc_user_evicts_lock
);
3226 arc_user_evicts_thread_exit
= FALSE
;
3227 cv_broadcast(&arc_user_evicts_cv
);
3228 CALLB_CPR_EXIT(&cpr
); /* drops arc_user_evicts_lock */
3229 spl_fstrans_unmark(cookie
);
3235 * Determine the amount of memory eligible for eviction contained in the
3236 * ARC. All clean data reported by the ghost lists can always be safely
3237 * evicted. Due to arc_c_min, the same does not hold for all clean data
3238 * contained by the regular mru and mfu lists.
3240 * In the case of the regular mru and mfu lists, we need to report as
3241 * much clean data as possible, such that evicting that same reported
3242 * data will not bring arc_size below arc_c_min. Thus, in certain
3243 * circumstances, the total amount of clean data in the mru and mfu
3244 * lists might not actually be evictable.
3246 * The following two distinct cases are accounted for:
3248 * 1. The sum of the amount of dirty data contained by both the mru and
3249 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
3250 * is greater than or equal to arc_c_min.
3251 * (i.e. amount of dirty data >= arc_c_min)
3253 * This is the easy case; all clean data contained by the mru and mfu
3254 * lists is evictable. Evicting all clean data can only drop arc_size
3255 * to the amount of dirty data, which is greater than arc_c_min.
3257 * 2. The sum of the amount of dirty data contained by both the mru and
3258 * mfu lists, plus the ARC's other accounting (e.g. the anon list),
3259 * is less than arc_c_min.
3260 * (i.e. arc_c_min > amount of dirty data)
3262 * 2.1. arc_size is greater than or equal arc_c_min.
3263 * (i.e. arc_size >= arc_c_min > amount of dirty data)
3265 * In this case, not all clean data from the regular mru and mfu
3266 * lists is actually evictable; we must leave enough clean data
3267 * to keep arc_size above arc_c_min. Thus, the maximum amount of
3268 * evictable data from the two lists combined, is exactly the
3269 * difference between arc_size and arc_c_min.
3271 * 2.2. arc_size is less than arc_c_min
3272 * (i.e. arc_c_min > arc_size > amount of dirty data)
3274 * In this case, none of the data contained in the mru and mfu
3275 * lists is evictable, even if it's clean. Since arc_size is
3276 * already below arc_c_min, evicting any more would only
3277 * increase this negative difference.
3280 arc_evictable_memory(void) {
3281 uint64_t arc_clean
=
3282 arc_mru
->arcs_lsize
[ARC_BUFC_DATA
] +
3283 arc_mru
->arcs_lsize
[ARC_BUFC_METADATA
] +
3284 arc_mfu
->arcs_lsize
[ARC_BUFC_DATA
] +
3285 arc_mfu
->arcs_lsize
[ARC_BUFC_METADATA
];
3286 uint64_t ghost_clean
=
3287 arc_mru_ghost
->arcs_lsize
[ARC_BUFC_DATA
] +
3288 arc_mru_ghost
->arcs_lsize
[ARC_BUFC_METADATA
] +
3289 arc_mfu_ghost
->arcs_lsize
[ARC_BUFC_DATA
] +
3290 arc_mfu_ghost
->arcs_lsize
[ARC_BUFC_METADATA
];
3291 uint64_t arc_dirty
= MAX((int64_t)arc_size
- (int64_t)arc_clean
, 0);
3293 if (arc_dirty
>= arc_c_min
)
3294 return (ghost_clean
+ arc_clean
);
3296 return (ghost_clean
+ MAX((int64_t)arc_size
- (int64_t)arc_c_min
, 0));
3300 * If sc->nr_to_scan is zero, the caller is requesting a query of the
3301 * number of objects which can potentially be freed. If it is nonzero,
3302 * the request is to free that many objects.
3304 * Linux kernels >= 3.12 have the count_objects and scan_objects callbacks
3305 * in struct shrinker and also require the shrinker to return the number
3308 * Older kernels require the shrinker to return the number of freeable
3309 * objects following the freeing of nr_to_free.
3311 static spl_shrinker_t
3312 __arc_shrinker_func(struct shrinker
*shrink
, struct shrink_control
*sc
)
3316 /* The arc is considered warm once reclaim has occurred */
3317 if (unlikely(arc_warm
== B_FALSE
))
3320 /* Return the potential number of reclaimable pages */
3321 pages
= btop((int64_t)arc_evictable_memory());
3322 if (sc
->nr_to_scan
== 0)
3325 /* Not allowed to perform filesystem reclaim */
3326 if (!(sc
->gfp_mask
& __GFP_FS
))
3327 return (SHRINK_STOP
);
3329 /* Reclaim in progress */
3330 if (mutex_tryenter(&arc_reclaim_lock
) == 0)
3331 return (SHRINK_STOP
);
3333 mutex_exit(&arc_reclaim_lock
);
3336 * Evict the requested number of pages by shrinking arc_c the
3337 * requested amount. If there is nothing left to evict just
3338 * reap whatever we can from the various arc slabs.
3341 arc_kmem_reap_now(ARC_RECLAIM_AGGR
, ptob(sc
->nr_to_scan
));
3343 #ifdef HAVE_SPLIT_SHRINKER_CALLBACK
3344 pages
= MAX(pages
- btop(arc_evictable_memory()), 0);
3346 pages
= btop(arc_evictable_memory());
3349 arc_kmem_reap_now(ARC_RECLAIM_CONS
, ptob(sc
->nr_to_scan
));
3350 pages
= SHRINK_STOP
;
3354 * We've reaped what we can, wake up threads.
3356 cv_broadcast(&arc_reclaim_waiters_cv
);
3359 * When direct reclaim is observed it usually indicates a rapid
3360 * increase in memory pressure. This occurs because the kswapd
3361 * threads were unable to asynchronously keep enough free memory
3362 * available. In this case set arc_no_grow to briefly pause arc
3363 * growth to avoid compounding the memory pressure.
3365 if (current_is_kswapd()) {
3366 ARCSTAT_BUMP(arcstat_memory_indirect_count
);
3368 arc_no_grow
= B_TRUE
;
3369 arc_grow_time
= ddi_get_lbolt() + (zfs_arc_grow_retry
* hz
);
3370 ARCSTAT_BUMP(arcstat_memory_direct_count
);
3375 SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func
);
3377 SPL_SHRINKER_DECLARE(arc_shrinker
, arc_shrinker_func
, DEFAULT_SEEKS
);
3378 #endif /* _KERNEL */
3381 * Adapt arc info given the number of bytes we are trying to add and
3382 * the state that we are comming from. This function is only called
3383 * when we are adding new content to the cache.
3386 arc_adapt(int bytes
, arc_state_t
*state
)
3390 if (state
== arc_l2c_only
)
3395 * Adapt the target size of the MRU list:
3396 * - if we just hit in the MRU ghost list, then increase
3397 * the target size of the MRU list.
3398 * - if we just hit in the MFU ghost list, then increase
3399 * the target size of the MFU list by decreasing the
3400 * target size of the MRU list.
3402 if (state
== arc_mru_ghost
) {
3403 mult
= ((arc_mru_ghost
->arcs_size
>= arc_mfu_ghost
->arcs_size
) ?
3404 1 : (arc_mfu_ghost
->arcs_size
/arc_mru_ghost
->arcs_size
));
3406 if (!zfs_arc_p_dampener_disable
)
3407 mult
= MIN(mult
, 10); /* avoid wild arc_p adjustment */
3409 arc_p
= MIN(arc_c
, arc_p
+ bytes
* mult
);
3410 } else if (state
== arc_mfu_ghost
) {
3413 mult
= ((arc_mfu_ghost
->arcs_size
>= arc_mru_ghost
->arcs_size
) ?
3414 1 : (arc_mru_ghost
->arcs_size
/arc_mfu_ghost
->arcs_size
));
3416 if (!zfs_arc_p_dampener_disable
)
3417 mult
= MIN(mult
, 10);
3419 delta
= MIN(bytes
* mult
, arc_p
);
3420 arc_p
= MAX(0, arc_p
- delta
);
3422 ASSERT((int64_t)arc_p
>= 0);
3427 if (arc_c
>= arc_c_max
)
3431 * If we're within (2 * maxblocksize) bytes of the target
3432 * cache size, increment the target cache size
3434 VERIFY3U(arc_c
, >=, 2ULL << SPA_MAXBLOCKSHIFT
);
3435 if (arc_size
>= arc_c
- (2ULL << SPA_MAXBLOCKSHIFT
)) {
3436 atomic_add_64(&arc_c
, (int64_t)bytes
);
3437 if (arc_c
> arc_c_max
)
3439 else if (state
== arc_anon
)
3440 atomic_add_64(&arc_p
, (int64_t)bytes
);
3444 ASSERT((int64_t)arc_p
>= 0);
3448 * Check if arc_size has grown past our upper threshold, determined by
3449 * zfs_arc_overflow_shift.
3452 arc_is_overflowing(void)
3454 /* Always allow at least one block of overflow */
3455 uint64_t overflow
= MAX(SPA_MAXBLOCKSIZE
,
3456 arc_c
>> zfs_arc_overflow_shift
);
3458 return (arc_size
>= arc_c
+ overflow
);
3462 * The buffer, supplied as the first argument, needs a data block. If we
3463 * are hitting the hard limit for the cache size, we must sleep, waiting
3464 * for the eviction thread to catch up. If we're past the target size
3465 * but below the hard limit, we'll only signal the reclaim thread and
3469 arc_get_data_buf(arc_buf_t
*buf
)
3471 arc_state_t
*state
= buf
->b_hdr
->b_l1hdr
.b_state
;
3472 uint64_t size
= buf
->b_hdr
->b_size
;
3473 arc_buf_contents_t type
= arc_buf_type(buf
->b_hdr
);
3475 arc_adapt(size
, state
);
3478 * If arc_size is currently overflowing, and has grown past our
3479 * upper limit, we must be adding data faster than the evict
3480 * thread can evict. Thus, to ensure we don't compound the
3481 * problem by adding more data and forcing arc_size to grow even
3482 * further past it's target size, we halt and wait for the
3483 * eviction thread to catch up.
3485 * It's also possible that the reclaim thread is unable to evict
3486 * enough buffers to get arc_size below the overflow limit (e.g.
3487 * due to buffers being un-evictable, or hash lock collisions).
3488 * In this case, we want to proceed regardless if we're
3489 * overflowing; thus we don't use a while loop here.
3491 if (arc_is_overflowing()) {
3492 mutex_enter(&arc_reclaim_lock
);
3495 * Now that we've acquired the lock, we may no longer be
3496 * over the overflow limit, lets check.
3498 * We're ignoring the case of spurious wake ups. If that
3499 * were to happen, it'd let this thread consume an ARC
3500 * buffer before it should have (i.e. before we're under
3501 * the overflow limit and were signalled by the reclaim
3502 * thread). As long as that is a rare occurrence, it
3503 * shouldn't cause any harm.
3505 if (arc_is_overflowing()) {
3506 cv_signal(&arc_reclaim_thread_cv
);
3507 cv_wait(&arc_reclaim_waiters_cv
, &arc_reclaim_lock
);
3510 mutex_exit(&arc_reclaim_lock
);
3513 if (type
== ARC_BUFC_METADATA
) {
3514 buf
->b_data
= zio_buf_alloc(size
);
3515 arc_space_consume(size
, ARC_SPACE_META
);
3517 ASSERT(type
== ARC_BUFC_DATA
);
3518 buf
->b_data
= zio_data_buf_alloc(size
);
3519 arc_space_consume(size
, ARC_SPACE_DATA
);
3523 * Update the state size. Note that ghost states have a
3524 * "ghost size" and so don't need to be updated.
3526 if (!GHOST_STATE(buf
->b_hdr
->b_l1hdr
.b_state
)) {
3527 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
3529 atomic_add_64(&hdr
->b_l1hdr
.b_state
->arcs_size
, size
);
3532 * If this is reached via arc_read, the link is
3533 * protected by the hash lock. If reached via
3534 * arc_buf_alloc, the header should not be accessed by
3535 * any other thread. And, if reached via arc_read_done,
3536 * the hash lock will protect it if it's found in the
3537 * hash table; otherwise no other thread should be
3538 * trying to [add|remove]_reference it.
3540 if (multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
)) {
3541 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
3542 atomic_add_64(&hdr
->b_l1hdr
.b_state
->arcs_lsize
[type
],
3546 * If we are growing the cache, and we are adding anonymous
3547 * data, and we have outgrown arc_p, update arc_p
3549 if (arc_size
< arc_c
&& hdr
->b_l1hdr
.b_state
== arc_anon
&&
3550 arc_anon
->arcs_size
+ arc_mru
->arcs_size
> arc_p
)
3551 arc_p
= MIN(arc_c
, arc_p
+ size
);
3556 * This routine is called whenever a buffer is accessed.
3557 * NOTE: the hash lock is dropped in this function.
3560 arc_access(arc_buf_hdr_t
*hdr
, kmutex_t
*hash_lock
)
3564 ASSERT(MUTEX_HELD(hash_lock
));
3565 ASSERT(HDR_HAS_L1HDR(hdr
));
3567 if (hdr
->b_l1hdr
.b_state
== arc_anon
) {
3569 * This buffer is not in the cache, and does not
3570 * appear in our "ghost" list. Add the new buffer
3574 ASSERT0(hdr
->b_l1hdr
.b_arc_access
);
3575 hdr
->b_l1hdr
.b_arc_access
= ddi_get_lbolt();
3576 DTRACE_PROBE1(new_state__mru
, arc_buf_hdr_t
*, hdr
);
3577 arc_change_state(arc_mru
, hdr
, hash_lock
);
3579 } else if (hdr
->b_l1hdr
.b_state
== arc_mru
) {
3580 now
= ddi_get_lbolt();
3583 * If this buffer is here because of a prefetch, then either:
3584 * - clear the flag if this is a "referencing" read
3585 * (any subsequent access will bump this into the MFU state).
3587 * - move the buffer to the head of the list if this is
3588 * another prefetch (to make it less likely to be evicted).
3590 if (HDR_PREFETCH(hdr
)) {
3591 if (refcount_count(&hdr
->b_l1hdr
.b_refcnt
) == 0) {
3592 /* link protected by hash lock */
3593 ASSERT(multilist_link_active(
3594 &hdr
->b_l1hdr
.b_arc_node
));
3596 hdr
->b_flags
&= ~ARC_FLAG_PREFETCH
;
3597 atomic_inc_32(&hdr
->b_l1hdr
.b_mru_hits
);
3598 ARCSTAT_BUMP(arcstat_mru_hits
);
3600 hdr
->b_l1hdr
.b_arc_access
= now
;
3605 * This buffer has been "accessed" only once so far,
3606 * but it is still in the cache. Move it to the MFU
3609 if (ddi_time_after(now
, hdr
->b_l1hdr
.b_arc_access
+
3612 * More than 125ms have passed since we
3613 * instantiated this buffer. Move it to the
3614 * most frequently used state.
3616 hdr
->b_l1hdr
.b_arc_access
= now
;
3617 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, hdr
);
3618 arc_change_state(arc_mfu
, hdr
, hash_lock
);
3620 atomic_inc_32(&hdr
->b_l1hdr
.b_mru_hits
);
3621 ARCSTAT_BUMP(arcstat_mru_hits
);
3622 } else if (hdr
->b_l1hdr
.b_state
== arc_mru_ghost
) {
3623 arc_state_t
*new_state
;
3625 * This buffer has been "accessed" recently, but
3626 * was evicted from the cache. Move it to the
3630 if (HDR_PREFETCH(hdr
)) {
3631 new_state
= arc_mru
;
3632 if (refcount_count(&hdr
->b_l1hdr
.b_refcnt
) > 0)
3633 hdr
->b_flags
&= ~ARC_FLAG_PREFETCH
;
3634 DTRACE_PROBE1(new_state__mru
, arc_buf_hdr_t
*, hdr
);
3636 new_state
= arc_mfu
;
3637 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, hdr
);
3640 hdr
->b_l1hdr
.b_arc_access
= ddi_get_lbolt();
3641 arc_change_state(new_state
, hdr
, hash_lock
);
3643 atomic_inc_32(&hdr
->b_l1hdr
.b_mru_ghost_hits
);
3644 ARCSTAT_BUMP(arcstat_mru_ghost_hits
);
3645 } else if (hdr
->b_l1hdr
.b_state
== arc_mfu
) {
3647 * This buffer has been accessed more than once and is
3648 * still in the cache. Keep it in the MFU state.
3650 * NOTE: an add_reference() that occurred when we did
3651 * the arc_read() will have kicked this off the list.
3652 * If it was a prefetch, we will explicitly move it to
3653 * the head of the list now.
3655 if ((HDR_PREFETCH(hdr
)) != 0) {
3656 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
3657 /* link protected by hash_lock */
3658 ASSERT(multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
3660 atomic_inc_32(&hdr
->b_l1hdr
.b_mfu_hits
);
3661 ARCSTAT_BUMP(arcstat_mfu_hits
);
3662 hdr
->b_l1hdr
.b_arc_access
= ddi_get_lbolt();
3663 } else if (hdr
->b_l1hdr
.b_state
== arc_mfu_ghost
) {
3664 arc_state_t
*new_state
= arc_mfu
;
3666 * This buffer has been accessed more than once but has
3667 * been evicted from the cache. Move it back to the
3671 if (HDR_PREFETCH(hdr
)) {
3673 * This is a prefetch access...
3674 * move this block back to the MRU state.
3676 ASSERT0(refcount_count(&hdr
->b_l1hdr
.b_refcnt
));
3677 new_state
= arc_mru
;
3680 hdr
->b_l1hdr
.b_arc_access
= ddi_get_lbolt();
3681 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, hdr
);
3682 arc_change_state(new_state
, hdr
, hash_lock
);
3684 atomic_inc_32(&hdr
->b_l1hdr
.b_mfu_ghost_hits
);
3685 ARCSTAT_BUMP(arcstat_mfu_ghost_hits
);
3686 } else if (hdr
->b_l1hdr
.b_state
== arc_l2c_only
) {
3688 * This buffer is on the 2nd Level ARC.
3691 hdr
->b_l1hdr
.b_arc_access
= ddi_get_lbolt();
3692 DTRACE_PROBE1(new_state__mfu
, arc_buf_hdr_t
*, hdr
);
3693 arc_change_state(arc_mfu
, hdr
, hash_lock
);
3695 cmn_err(CE_PANIC
, "invalid arc state 0x%p",
3696 hdr
->b_l1hdr
.b_state
);
3700 /* a generic arc_done_func_t which you can use */
3703 arc_bcopy_func(zio_t
*zio
, arc_buf_t
*buf
, void *arg
)
3705 if (zio
== NULL
|| zio
->io_error
== 0)
3706 bcopy(buf
->b_data
, arg
, buf
->b_hdr
->b_size
);
3707 VERIFY(arc_buf_remove_ref(buf
, arg
));
3710 /* a generic arc_done_func_t */
3712 arc_getbuf_func(zio_t
*zio
, arc_buf_t
*buf
, void *arg
)
3714 arc_buf_t
**bufp
= arg
;
3715 if (zio
&& zio
->io_error
) {
3716 VERIFY(arc_buf_remove_ref(buf
, arg
));
3720 ASSERT(buf
->b_data
);
3725 arc_read_done(zio_t
*zio
)
3729 arc_buf_t
*abuf
; /* buffer we're assigning to callback */
3730 kmutex_t
*hash_lock
= NULL
;
3731 arc_callback_t
*callback_list
, *acb
;
3732 int freeable
= FALSE
;
3734 buf
= zio
->io_private
;
3738 * The hdr was inserted into hash-table and removed from lists
3739 * prior to starting I/O. We should find this header, since
3740 * it's in the hash table, and it should be legit since it's
3741 * not possible to evict it during the I/O. The only possible
3742 * reason for it not to be found is if we were freed during the
3745 if (HDR_IN_HASH_TABLE(hdr
)) {
3746 arc_buf_hdr_t
*found
;
3748 ASSERT3U(hdr
->b_birth
, ==, BP_PHYSICAL_BIRTH(zio
->io_bp
));
3749 ASSERT3U(hdr
->b_dva
.dva_word
[0], ==,
3750 BP_IDENTITY(zio
->io_bp
)->dva_word
[0]);
3751 ASSERT3U(hdr
->b_dva
.dva_word
[1], ==,
3752 BP_IDENTITY(zio
->io_bp
)->dva_word
[1]);
3754 found
= buf_hash_find(hdr
->b_spa
, zio
->io_bp
,
3757 ASSERT((found
== NULL
&& HDR_FREED_IN_READ(hdr
) &&
3758 hash_lock
== NULL
) ||
3760 DVA_EQUAL(&hdr
->b_dva
, BP_IDENTITY(zio
->io_bp
))) ||
3761 (found
== hdr
&& HDR_L2_READING(hdr
)));
3764 hdr
->b_flags
&= ~ARC_FLAG_L2_EVICTED
;
3765 if (l2arc_noprefetch
&& HDR_PREFETCH(hdr
))
3766 hdr
->b_flags
&= ~ARC_FLAG_L2CACHE
;
3768 /* byteswap if necessary */
3769 callback_list
= hdr
->b_l1hdr
.b_acb
;
3770 ASSERT(callback_list
!= NULL
);
3771 if (BP_SHOULD_BYTESWAP(zio
->io_bp
) && zio
->io_error
== 0) {
3772 dmu_object_byteswap_t bswap
=
3773 DMU_OT_BYTESWAP(BP_GET_TYPE(zio
->io_bp
));
3774 if (BP_GET_LEVEL(zio
->io_bp
) > 0)
3775 byteswap_uint64_array(buf
->b_data
, hdr
->b_size
);
3777 dmu_ot_byteswap
[bswap
].ob_func(buf
->b_data
, hdr
->b_size
);
3780 arc_cksum_compute(buf
, B_FALSE
);
	if (hash_lock && zio->io_error == 0 &&
	    hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * Only call arc_access on anonymous buffers.  This is because
		 * if we've issued an I/O for an evicted buffer, we've already
		 * called arc_access (to prevent any simultaneous readers from
		 * getting confused).
		 */
		arc_access(hdr, hash_lock);
	}
3794 /* create copies of the data buffer for the callers */
3796 for (acb
= callback_list
; acb
; acb
= acb
->acb_next
) {
3797 if (acb
->acb_done
) {
3799 ARCSTAT_BUMP(arcstat_duplicate_reads
);
3800 abuf
= arc_buf_clone(buf
);
3802 acb
->acb_buf
= abuf
;
3806 hdr
->b_l1hdr
.b_acb
= NULL
;
3807 hdr
->b_flags
&= ~ARC_FLAG_IO_IN_PROGRESS
;
3808 ASSERT(!HDR_BUF_AVAILABLE(hdr
));
3810 ASSERT(buf
->b_efunc
== NULL
);
3811 ASSERT(hdr
->b_l1hdr
.b_datacnt
== 1);
3812 hdr
->b_flags
|= ARC_FLAG_BUF_AVAILABLE
;
3815 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
) ||
3816 callback_list
!= NULL
);
3818 if (zio
->io_error
!= 0) {
3819 hdr
->b_flags
|= ARC_FLAG_IO_ERROR
;
3820 if (hdr
->b_l1hdr
.b_state
!= arc_anon
)
3821 arc_change_state(arc_anon
, hdr
, hash_lock
);
3822 if (HDR_IN_HASH_TABLE(hdr
))
3823 buf_hash_remove(hdr
);
3824 freeable
= refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
);
	/*
	 * Broadcast before we drop the hash_lock to avoid the possibility
	 * that the hdr (and hence the cv) might be freed before we get to
	 * the cv_broadcast().
	 */
	cv_broadcast(&hdr->b_l1hdr.b_cv);
	if (hash_lock != NULL) {
		mutex_exit(hash_lock);
	} else {
		/*
		 * This block was freed while we waited for the read to
		 * complete.  It has been removed from the hash table and
		 * moved to the anonymous state (so that it won't show up
		 * in the cache).
		 */
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
		freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}
3847 /* execute each callback and free its structure */
3848 while ((acb
= callback_list
) != NULL
) {
3850 acb
->acb_done(zio
, acb
->acb_buf
, acb
->acb_private
);
3852 if (acb
->acb_zio_dummy
!= NULL
) {
3853 acb
->acb_zio_dummy
->io_error
= zio
->io_error
;
3854 zio_nowait(acb
->acb_zio_dummy
);
3857 callback_list
= acb
->acb_next
;
3858 kmem_free(acb
, sizeof (arc_callback_t
));
3862 arc_hdr_destroy(hdr
);
3866 * "Read" the block at the specified DVA (in bp) via the
3867 * cache. If the block is found in the cache, invoke the provided
3868 * callback immediately and return. Note that the `zio' parameter
3869 * in the callback will be NULL in this case, since no IO was
3870 * required. If the block is not in the cache pass the read request
3871 * on to the spa with a substitute callback function, so that the
3872 * requested block will be added to the cache.
3874 * If a read request arrives for a block that has a read in-progress,
3875 * either wait for the in-progress read to complete (and return the
3876 * results); or, if this is a read with a "done" func, add a record
3877 * to the read to invoke the "done" func when the read completes,
3878 * and return; or just return.
3880 * arc_read_done() will invoke all the requested "done" functions
3881 * for readers of this block.
int
arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, arc_done_func_t *done,
    void *private, zio_priority_t priority, int zio_flags,
    arc_flags_t *arc_flags, const zbookmark_phys_t *zb)
{
	arc_buf_hdr_t *hdr = NULL;
	arc_buf_t *buf = NULL;
	kmutex_t *hash_lock = NULL;
	int rc = 0;
	uint64_t guid = spa_load_guid(spa);
3895 ASSERT(!BP_IS_EMBEDDED(bp
) ||
3896 BPE_GET_ETYPE(bp
) == BP_EMBEDDED_TYPE_DATA
);
top:
	if (!BP_IS_EMBEDDED(bp)) {
		/*
		 * Embedded BP's have no DVA and require no I/O to "read".
		 * Create an anonymous arc buf to back it.
		 */
		hdr = buf_hash_find(guid, bp, &hash_lock);
	}
3907 if (hdr
!= NULL
&& HDR_HAS_L1HDR(hdr
) && hdr
->b_l1hdr
.b_datacnt
> 0) {
3909 *arc_flags
|= ARC_FLAG_CACHED
;
3911 if (HDR_IO_IN_PROGRESS(hdr
)) {
3913 if (*arc_flags
& ARC_FLAG_WAIT
) {
3914 cv_wait(&hdr
->b_l1hdr
.b_cv
, hash_lock
);
3915 mutex_exit(hash_lock
);
3918 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
3921 arc_callback_t
*acb
= NULL
;
3923 acb
= kmem_zalloc(sizeof (arc_callback_t
),
3925 acb
->acb_done
= done
;
3926 acb
->acb_private
= private;
3928 acb
->acb_zio_dummy
= zio_null(pio
,
3929 spa
, NULL
, NULL
, NULL
, zio_flags
);
3931 ASSERT(acb
->acb_done
!= NULL
);
3932 acb
->acb_next
= hdr
->b_l1hdr
.b_acb
;
3933 hdr
->b_l1hdr
.b_acb
= acb
;
3934 add_reference(hdr
, hash_lock
, private);
3935 mutex_exit(hash_lock
);
3938 mutex_exit(hash_lock
);
3942 ASSERT(hdr
->b_l1hdr
.b_state
== arc_mru
||
3943 hdr
->b_l1hdr
.b_state
== arc_mfu
);
3946 add_reference(hdr
, hash_lock
, private);
			/*
			 * If this block is already in use, create a new
			 * copy of the data so that we will be guaranteed
			 * that arc_release() will always succeed.
			 */
			buf = hdr->b_l1hdr.b_buf;
			ASSERT(buf->b_data);
3955 if (HDR_BUF_AVAILABLE(hdr
)) {
3956 ASSERT(buf
->b_efunc
== NULL
);
3957 hdr
->b_flags
&= ~ARC_FLAG_BUF_AVAILABLE
;
3959 buf
= arc_buf_clone(buf
);
3962 } else if (*arc_flags
& ARC_FLAG_PREFETCH
&&
3963 refcount_count(&hdr
->b_l1hdr
.b_refcnt
) == 0) {
3964 hdr
->b_flags
|= ARC_FLAG_PREFETCH
;
3966 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
3967 arc_access(hdr
, hash_lock
);
3968 if (*arc_flags
& ARC_FLAG_L2CACHE
)
3969 hdr
->b_flags
|= ARC_FLAG_L2CACHE
;
3970 if (*arc_flags
& ARC_FLAG_L2COMPRESS
)
3971 hdr
->b_flags
|= ARC_FLAG_L2COMPRESS
;
3972 mutex_exit(hash_lock
);
3973 ARCSTAT_BUMP(arcstat_hits
);
3974 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
3975 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
3976 data
, metadata
, hits
);
3979 done(NULL
, buf
, private);
3981 uint64_t size
= BP_GET_LSIZE(bp
);
3982 arc_callback_t
*acb
;
3985 boolean_t devw
= B_FALSE
;
3986 enum zio_compress b_compress
= ZIO_COMPRESS_OFF
;
3987 int32_t b_asize
= 0;
		/*
		 * Gracefully handle a damaged logical block size as a
		 * checksum error by passing a dummy zio to the done callback.
		 */
		if (size > spa_maxblocksize(spa)) {
3995 rzio
= zio_null(pio
, spa
, NULL
,
3996 NULL
, NULL
, zio_flags
);
3997 rzio
->io_error
= ECKSUM
;
3998 done(rzio
, buf
, private);
4006 /* this block is not in the cache */
4007 arc_buf_hdr_t
*exists
= NULL
;
4008 arc_buf_contents_t type
= BP_GET_BUFC_TYPE(bp
);
4009 buf
= arc_buf_alloc(spa
, size
, private, type
);
4011 if (!BP_IS_EMBEDDED(bp
)) {
4012 hdr
->b_dva
= *BP_IDENTITY(bp
);
4013 hdr
->b_birth
= BP_PHYSICAL_BIRTH(bp
);
4014 exists
= buf_hash_insert(hdr
, &hash_lock
);
4016 if (exists
!= NULL
) {
4017 /* somebody beat us to the hash insert */
4018 mutex_exit(hash_lock
);
4019 buf_discard_identity(hdr
);
4020 (void) arc_buf_remove_ref(buf
, private);
4021 goto top
; /* restart the IO request */
4024 /* if this is a prefetch, we don't have a reference */
4025 if (*arc_flags
& ARC_FLAG_PREFETCH
) {
4026 (void) remove_reference(hdr
, hash_lock
,
4028 hdr
->b_flags
|= ARC_FLAG_PREFETCH
;
4030 if (*arc_flags
& ARC_FLAG_L2CACHE
)
4031 hdr
->b_flags
|= ARC_FLAG_L2CACHE
;
4032 if (*arc_flags
& ARC_FLAG_L2COMPRESS
)
4033 hdr
->b_flags
|= ARC_FLAG_L2COMPRESS
;
4034 if (BP_GET_LEVEL(bp
) > 0)
4035 hdr
->b_flags
|= ARC_FLAG_INDIRECT
;
			/*
			 * This block is in the ghost cache.  If it was L2-only
			 * (and thus didn't have an L1 hdr), we realloc the
			 * header to add an L1 hdr.
			 */
4042 if (!HDR_HAS_L1HDR(hdr
)) {
4043 hdr
= arc_hdr_realloc(hdr
, hdr_l2only_cache
,
4047 ASSERT(GHOST_STATE(hdr
->b_l1hdr
.b_state
));
4048 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
4049 ASSERT(refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
4050 ASSERT3P(hdr
->b_l1hdr
.b_buf
, ==, NULL
);
4052 /* if this is a prefetch, we don't have a reference */
4053 if (*arc_flags
& ARC_FLAG_PREFETCH
)
4054 hdr
->b_flags
|= ARC_FLAG_PREFETCH
;
4056 add_reference(hdr
, hash_lock
, private);
4057 if (*arc_flags
& ARC_FLAG_L2CACHE
)
4058 hdr
->b_flags
|= ARC_FLAG_L2CACHE
;
4059 if (*arc_flags
& ARC_FLAG_L2COMPRESS
)
4060 hdr
->b_flags
|= ARC_FLAG_L2COMPRESS
;
4061 buf
= kmem_cache_alloc(buf_cache
, KM_PUSHPAGE
);
4064 buf
->b_efunc
= NULL
;
4065 buf
->b_private
= NULL
;
4067 hdr
->b_l1hdr
.b_buf
= buf
;
4068 ASSERT0(hdr
->b_l1hdr
.b_datacnt
);
4069 hdr
->b_l1hdr
.b_datacnt
= 1;
4070 arc_get_data_buf(buf
);
4071 arc_access(hdr
, hash_lock
);
4074 ASSERT(!GHOST_STATE(hdr
->b_l1hdr
.b_state
));
4076 acb
= kmem_zalloc(sizeof (arc_callback_t
), KM_SLEEP
);
4077 acb
->acb_done
= done
;
4078 acb
->acb_private
= private;
4080 ASSERT(hdr
->b_l1hdr
.b_acb
== NULL
);
4081 hdr
->b_l1hdr
.b_acb
= acb
;
4082 hdr
->b_flags
|= ARC_FLAG_IO_IN_PROGRESS
;
4084 if (HDR_HAS_L2HDR(hdr
) &&
4085 (vd
= hdr
->b_l2hdr
.b_dev
->l2ad_vdev
) != NULL
) {
4086 devw
= hdr
->b_l2hdr
.b_dev
->l2ad_writing
;
4087 addr
= hdr
->b_l2hdr
.b_daddr
;
4088 b_compress
= HDR_GET_COMPRESS(hdr
);
4089 b_asize
= hdr
->b_l2hdr
.b_asize
;
4091 * Lock out device removal.
4093 if (vdev_is_dead(vd
) ||
4094 !spa_config_tryenter(spa
, SCL_L2ARC
, vd
, RW_READER
))
4098 if (hash_lock
!= NULL
)
4099 mutex_exit(hash_lock
);
		/*
		 * At this point, we have a level 1 cache miss.  Try again in
		 * L2ARC if possible.
		 */
4105 ASSERT3U(hdr
->b_size
, ==, size
);
4106 DTRACE_PROBE4(arc__miss
, arc_buf_hdr_t
*, hdr
, blkptr_t
*, bp
,
4107 uint64_t, size
, zbookmark_phys_t
*, zb
);
4108 ARCSTAT_BUMP(arcstat_misses
);
4109 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
4110 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
4111 data
, metadata
, misses
);
4113 if (vd
!= NULL
&& l2arc_ndev
!= 0 && !(l2arc_norw
&& devw
)) {
			/*
			 * Read from the L2ARC if the following are true:
			 * 1. The L2ARC vdev was previously cached.
			 * 2. This buffer still has L2ARC metadata.
			 * 3. This buffer isn't currently writing to the L2ARC.
			 * 4. The L2ARC entry wasn't evicted, which may
			 *    also have invalidated the vdev.
			 * 5. This isn't prefetch and l2arc_noprefetch is set.
			 */
4123 if (HDR_HAS_L2HDR(hdr
) &&
4124 !HDR_L2_WRITING(hdr
) && !HDR_L2_EVICTED(hdr
) &&
4125 !(l2arc_noprefetch
&& HDR_PREFETCH(hdr
))) {
4126 l2arc_read_callback_t
*cb
;
4128 DTRACE_PROBE1(l2arc__hit
, arc_buf_hdr_t
*, hdr
);
4129 ARCSTAT_BUMP(arcstat_l2_hits
);
4130 atomic_inc_32(&hdr
->b_l2hdr
.b_hits
);
4132 cb
= kmem_zalloc(sizeof (l2arc_read_callback_t
),
4134 cb
->l2rcb_buf
= buf
;
4135 cb
->l2rcb_spa
= spa
;
4138 cb
->l2rcb_flags
= zio_flags
;
4139 cb
->l2rcb_compress
= b_compress
;
4141 ASSERT(addr
>= VDEV_LABEL_START_SIZE
&&
4142 addr
+ size
< vd
->vdev_psize
-
4143 VDEV_LABEL_END_SIZE
);
				/*
				 * l2arc read.  The SCL_L2ARC lock will be
				 * released by l2arc_read_done().
				 * Issue a null zio if the underlying buffer
				 * was squashed to zero size by compression.
				 */
4151 if (b_compress
== ZIO_COMPRESS_EMPTY
) {
4152 rzio
= zio_null(pio
, spa
, vd
,
4153 l2arc_read_done
, cb
,
4154 zio_flags
| ZIO_FLAG_DONT_CACHE
|
4156 ZIO_FLAG_DONT_PROPAGATE
|
4157 ZIO_FLAG_DONT_RETRY
);
4159 rzio
= zio_read_phys(pio
, vd
, addr
,
4160 b_asize
, buf
->b_data
,
4162 l2arc_read_done
, cb
, priority
,
4163 zio_flags
| ZIO_FLAG_DONT_CACHE
|
4165 ZIO_FLAG_DONT_PROPAGATE
|
4166 ZIO_FLAG_DONT_RETRY
, B_FALSE
);
4168 DTRACE_PROBE2(l2arc__read
, vdev_t
*, vd
,
4170 ARCSTAT_INCR(arcstat_l2_read_bytes
, b_asize
);
4172 if (*arc_flags
& ARC_FLAG_NOWAIT
) {
4177 ASSERT(*arc_flags
& ARC_FLAG_WAIT
);
4178 if (zio_wait(rzio
) == 0)
4181 /* l2arc read error; goto zio_read() */
4183 DTRACE_PROBE1(l2arc__miss
,
4184 arc_buf_hdr_t
*, hdr
);
4185 ARCSTAT_BUMP(arcstat_l2_misses
);
4186 if (HDR_L2_WRITING(hdr
))
4187 ARCSTAT_BUMP(arcstat_l2_rw_clash
);
4188 spa_config_exit(spa
, SCL_L2ARC
, vd
);
4192 spa_config_exit(spa
, SCL_L2ARC
, vd
);
4193 if (l2arc_ndev
!= 0) {
4194 DTRACE_PROBE1(l2arc__miss
,
4195 arc_buf_hdr_t
*, hdr
);
4196 ARCSTAT_BUMP(arcstat_l2_misses
);
4200 rzio
= zio_read(pio
, spa
, bp
, buf
->b_data
, size
,
4201 arc_read_done
, buf
, priority
, zio_flags
, zb
);
4203 if (*arc_flags
& ARC_FLAG_WAIT
) {
4204 rc
= zio_wait(rzio
);
4208 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
4213 spa_read_history_add(spa
, zb
, *arc_flags
);
4218 arc_add_prune_callback(arc_prune_func_t
*func
, void *private)
4222 p
= kmem_alloc(sizeof (*p
), KM_SLEEP
);
4224 p
->p_private
= private;
4225 list_link_init(&p
->p_node
);
4226 refcount_create(&p
->p_refcnt
);
4228 mutex_enter(&arc_prune_mtx
);
4229 refcount_add(&p
->p_refcnt
, &arc_prune_list
);
4230 list_insert_head(&arc_prune_list
, p
);
4231 mutex_exit(&arc_prune_mtx
);
4237 arc_remove_prune_callback(arc_prune_t
*p
)
4239 mutex_enter(&arc_prune_mtx
);
4240 list_remove(&arc_prune_list
, p
);
4241 if (refcount_remove(&p
->p_refcnt
, &arc_prune_list
) == 0) {
4242 refcount_destroy(&p
->p_refcnt
);
4243 kmem_free(p
, sizeof (*p
));
4245 mutex_exit(&arc_prune_mtx
);
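
/*
 * Example usage (illustrative sketch; the real consumer of this interface
 * is the ZPL, which prunes its dentry/znode caches in response to ARC
 * metadata pressure): a filesystem layer would typically register a
 * callback once at setup time and remove it on teardown:
 *
 *	static void
 *	my_prune_cb(int64_t nr_to_scan, void *arg)
 *	{
 *		... shrink a metadata cache by roughly nr_to_scan objects ...
 *	}
 *
 *	arc_prune_t *ap = arc_add_prune_callback(my_prune_cb, my_state);
 *	...
 *	arc_remove_prune_callback(ap);
 *
 * The my_prune_cb()/my_state names above are hypothetical; the callback
 * signature follows arc_prune_func_t as used in this file.
 */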
4249 arc_set_callback(arc_buf_t
*buf
, arc_evict_func_t
*func
, void *private)
4251 ASSERT(buf
->b_hdr
!= NULL
);
4252 ASSERT(buf
->b_hdr
->b_l1hdr
.b_state
!= arc_anon
);
4253 ASSERT(!refcount_is_zero(&buf
->b_hdr
->b_l1hdr
.b_refcnt
) ||
4255 ASSERT(buf
->b_efunc
== NULL
);
4256 ASSERT(!HDR_BUF_AVAILABLE(buf
->b_hdr
));
4258 buf
->b_efunc
= func
;
4259 buf
->b_private
= private;
4263 * Notify the arc that a block was freed, and thus will never be used again.
4266 arc_freed(spa_t
*spa
, const blkptr_t
*bp
)
4269 kmutex_t
*hash_lock
;
4270 uint64_t guid
= spa_load_guid(spa
);
4272 ASSERT(!BP_IS_EMBEDDED(bp
));
4274 hdr
= buf_hash_find(guid
, bp
, &hash_lock
);
4277 if (HDR_BUF_AVAILABLE(hdr
)) {
4278 arc_buf_t
*buf
= hdr
->b_l1hdr
.b_buf
;
4279 add_reference(hdr
, hash_lock
, FTAG
);
4280 hdr
->b_flags
&= ~ARC_FLAG_BUF_AVAILABLE
;
4281 mutex_exit(hash_lock
);
4283 arc_release(buf
, FTAG
);
4284 (void) arc_buf_remove_ref(buf
, FTAG
);
4286 mutex_exit(hash_lock
);
/*
 * Clear the user eviction callback set by arc_set_callback(), first calling
 * it if it exists.  Because the presence of a callback keeps an arc_buf cached,
 * clearing the callback may result in the arc_buf being destroyed.  However,
 * it will not result in the *last* arc_buf being destroyed, hence the data
 * will remain cached in the ARC.  We make a copy of the arc buffer here so
 * that we can process the callback without holding any locks.
 *
 * It's possible that the callback is already in the process of being cleared
 * by another thread.  In this case we can not clear the callback.
 *
 * Returns B_TRUE if the callback was successfully called and cleared.
 */
4305 arc_clear_callback(arc_buf_t
*buf
)
4308 kmutex_t
*hash_lock
;
4309 arc_evict_func_t
*efunc
= buf
->b_efunc
;
4310 void *private = buf
->b_private
;
4312 mutex_enter(&buf
->b_evict_lock
);
4316 * We are in arc_do_user_evicts().
4318 ASSERT(buf
->b_data
== NULL
);
4319 mutex_exit(&buf
->b_evict_lock
);
4321 } else if (buf
->b_data
== NULL
) {
4323 * We are on the eviction list; process this buffer now
4324 * but let arc_do_user_evicts() do the reaping.
4326 buf
->b_efunc
= NULL
;
4327 mutex_exit(&buf
->b_evict_lock
);
4328 VERIFY0(efunc(private));
4331 hash_lock
= HDR_LOCK(hdr
);
4332 mutex_enter(hash_lock
);
4334 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
4336 ASSERT3U(refcount_count(&hdr
->b_l1hdr
.b_refcnt
), <,
4337 hdr
->b_l1hdr
.b_datacnt
);
4338 ASSERT(hdr
->b_l1hdr
.b_state
== arc_mru
||
4339 hdr
->b_l1hdr
.b_state
== arc_mfu
);
4341 buf
->b_efunc
= NULL
;
4342 buf
->b_private
= NULL
;
4344 if (hdr
->b_l1hdr
.b_datacnt
> 1) {
4345 mutex_exit(&buf
->b_evict_lock
);
4346 arc_buf_destroy(buf
, TRUE
);
4348 ASSERT(buf
== hdr
->b_l1hdr
.b_buf
);
4349 hdr
->b_flags
|= ARC_FLAG_BUF_AVAILABLE
;
4350 mutex_exit(&buf
->b_evict_lock
);
4353 mutex_exit(hash_lock
);
4354 VERIFY0(efunc(private));
/*
 * Release this buffer from the cache, making it an anonymous buffer.  This
 * must be done after a read and prior to modifying the buffer contents.
 * If the buffer has more than one reference, we must make
 * a new hdr for the buffer.
 */
4365 arc_release(arc_buf_t
*buf
, void *tag
)
4367 kmutex_t
*hash_lock
;
4369 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
	/*
	 * It would be nice to assert that if its DMU metadata (level >
	 * 0 || it's the dnode file), then it must be syncing context.
	 * But we don't know that information at this level.
	 */
4377 mutex_enter(&buf
->b_evict_lock
);
4379 ASSERT(HDR_HAS_L1HDR(hdr
));
	/*
	 * We don't grab the hash lock prior to this check, because if
	 * the buffer's header is in the arc_anon state, it won't be
	 * linked into the hash table.
	 */
4386 if (hdr
->b_l1hdr
.b_state
== arc_anon
) {
4387 mutex_exit(&buf
->b_evict_lock
);
4388 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
4389 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
4390 ASSERT(!HDR_HAS_L2HDR(hdr
));
4391 ASSERT(BUF_EMPTY(hdr
));
4393 ASSERT3U(hdr
->b_l1hdr
.b_datacnt
, ==, 1);
4394 ASSERT3S(refcount_count(&hdr
->b_l1hdr
.b_refcnt
), ==, 1);
4395 ASSERT(!list_link_active(&hdr
->b_l1hdr
.b_arc_node
));
4397 ASSERT3P(buf
->b_efunc
, ==, NULL
);
4398 ASSERT3P(buf
->b_private
, ==, NULL
);
4400 hdr
->b_l1hdr
.b_arc_access
= 0;
4406 hash_lock
= HDR_LOCK(hdr
);
4407 mutex_enter(hash_lock
);
4410 * This assignment is only valid as long as the hash_lock is
4411 * held, we must be careful not to reference state or the
4412 * b_state field after dropping the lock.
4414 state
= hdr
->b_l1hdr
.b_state
;
4415 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
4416 ASSERT3P(state
, !=, arc_anon
);
4418 /* this buffer is not on any list */
4419 ASSERT(refcount_count(&hdr
->b_l1hdr
.b_refcnt
) > 0);
4421 if (HDR_HAS_L2HDR(hdr
)) {
4422 mutex_enter(&hdr
->b_l2hdr
.b_dev
->l2ad_mtx
);
		/*
		 * We have to recheck this conditional again now that
		 * we're holding the l2ad_mtx to prevent a race with
		 * another thread which might be concurrently calling
		 * l2arc_evict().  In that case, l2arc_evict() might have
		 * destroyed the header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx.
		 */
4432 if (HDR_HAS_L2HDR(hdr
))
4433 arc_hdr_l2hdr_destroy(hdr
);
4435 mutex_exit(&hdr
->b_l2hdr
.b_dev
->l2ad_mtx
);
4439 * Do we have more than one buf?
4441 if (hdr
->b_l1hdr
.b_datacnt
> 1) {
4442 arc_buf_hdr_t
*nhdr
;
4444 uint64_t blksz
= hdr
->b_size
;
4445 uint64_t spa
= hdr
->b_spa
;
4446 arc_buf_contents_t type
= arc_buf_type(hdr
);
4447 uint32_t flags
= hdr
->b_flags
;
4449 ASSERT(hdr
->b_l1hdr
.b_buf
!= buf
|| buf
->b_next
!= NULL
);
4451 * Pull the data off of this hdr and attach it to
4452 * a new anonymous hdr.
4454 (void) remove_reference(hdr
, hash_lock
, tag
);
4455 bufp
= &hdr
->b_l1hdr
.b_buf
;
4456 while (*bufp
!= buf
)
4457 bufp
= &(*bufp
)->b_next
;
4458 *bufp
= buf
->b_next
;
4461 ASSERT3P(state
, !=, arc_l2c_only
);
4462 ASSERT3U(state
->arcs_size
, >=, hdr
->b_size
);
4463 atomic_add_64(&state
->arcs_size
, -hdr
->b_size
);
4464 if (refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
)) {
4467 ASSERT3P(state
, !=, arc_l2c_only
);
4468 size
= &state
->arcs_lsize
[type
];
4469 ASSERT3U(*size
, >=, hdr
->b_size
);
4470 atomic_add_64(size
, -hdr
->b_size
);
4474 * We're releasing a duplicate user data buffer, update
4475 * our statistics accordingly.
4477 if (HDR_ISTYPE_DATA(hdr
)) {
4478 ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers
);
4479 ARCSTAT_INCR(arcstat_duplicate_buffers_size
,
4482 hdr
->b_l1hdr
.b_datacnt
-= 1;
4483 arc_cksum_verify(buf
);
4484 arc_buf_unwatch(buf
);
4486 mutex_exit(hash_lock
);
4488 nhdr
= kmem_cache_alloc(hdr_full_cache
, KM_PUSHPAGE
);
4489 nhdr
->b_size
= blksz
;
4492 nhdr
->b_l1hdr
.b_mru_hits
= 0;
4493 nhdr
->b_l1hdr
.b_mru_ghost_hits
= 0;
4494 nhdr
->b_l1hdr
.b_mfu_hits
= 0;
4495 nhdr
->b_l1hdr
.b_mfu_ghost_hits
= 0;
4496 nhdr
->b_l1hdr
.b_l2_hits
= 0;
4497 nhdr
->b_flags
= flags
& ARC_FLAG_L2_WRITING
;
4498 nhdr
->b_flags
|= arc_bufc_to_flags(type
);
4499 nhdr
->b_flags
|= ARC_FLAG_HAS_L1HDR
;
4501 nhdr
->b_l1hdr
.b_buf
= buf
;
4502 nhdr
->b_l1hdr
.b_datacnt
= 1;
4503 nhdr
->b_l1hdr
.b_state
= arc_anon
;
4504 nhdr
->b_l1hdr
.b_arc_access
= 0;
4505 nhdr
->b_l1hdr
.b_tmp_cdata
= NULL
;
4506 nhdr
->b_freeze_cksum
= NULL
;
4508 (void) refcount_add(&nhdr
->b_l1hdr
.b_refcnt
, tag
);
4510 mutex_exit(&buf
->b_evict_lock
);
4511 atomic_add_64(&arc_anon
->arcs_size
, blksz
);
4513 mutex_exit(&buf
->b_evict_lock
);
4514 ASSERT(refcount_count(&hdr
->b_l1hdr
.b_refcnt
) == 1);
4515 /* protected by hash lock, or hdr is on arc_anon */
4516 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
4517 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
4518 hdr
->b_l1hdr
.b_mru_hits
= 0;
4519 hdr
->b_l1hdr
.b_mru_ghost_hits
= 0;
4520 hdr
->b_l1hdr
.b_mfu_hits
= 0;
4521 hdr
->b_l1hdr
.b_mfu_ghost_hits
= 0;
4522 hdr
->b_l1hdr
.b_l2_hits
= 0;
4523 arc_change_state(arc_anon
, hdr
, hash_lock
);
4524 hdr
->b_l1hdr
.b_arc_access
= 0;
4525 mutex_exit(hash_lock
);
4527 buf_discard_identity(hdr
);
4530 buf
->b_efunc
= NULL
;
4531 buf
->b_private
= NULL
;
int
arc_released(arc_buf_t *buf)
{
	int released;

	mutex_enter(&buf->b_evict_lock);
	released = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_state == arc_anon);
	mutex_exit(&buf->b_evict_lock);
	return (released);
}
int
arc_referenced(arc_buf_t *buf)
{
	int referenced;

	mutex_enter(&buf->b_evict_lock);
	referenced = (refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
	mutex_exit(&buf->b_evict_lock);
	return (referenced);
}
4560 arc_write_ready(zio_t
*zio
)
4562 arc_write_callback_t
*callback
= zio
->io_private
;
4563 arc_buf_t
*buf
= callback
->awcb_buf
;
4564 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
4566 ASSERT(HDR_HAS_L1HDR(hdr
));
4567 ASSERT(!refcount_is_zero(&buf
->b_hdr
->b_l1hdr
.b_refcnt
));
4568 ASSERT(hdr
->b_l1hdr
.b_datacnt
> 0);
4569 callback
->awcb_ready(zio
, buf
, callback
->awcb_private
);
	/*
	 * If the IO is already in progress, then this is a re-write
	 * attempt, so we need to thaw and re-compute the cksum.
	 * It is the responsibility of the callback to handle the
	 * accounting for any re-write attempt.
	 */
4577 if (HDR_IO_IN_PROGRESS(hdr
)) {
4578 mutex_enter(&hdr
->b_l1hdr
.b_freeze_lock
);
4579 if (hdr
->b_freeze_cksum
!= NULL
) {
4580 kmem_free(hdr
->b_freeze_cksum
, sizeof (zio_cksum_t
));
4581 hdr
->b_freeze_cksum
= NULL
;
4583 mutex_exit(&hdr
->b_l1hdr
.b_freeze_lock
);
4585 arc_cksum_compute(buf
, B_FALSE
);
4586 hdr
->b_flags
|= ARC_FLAG_IO_IN_PROGRESS
;
/*
 * The SPA calls this callback for each physical write that happens on behalf
 * of a logical write.  See the comment in dbuf_write_physdone() for details.
 */
static void
arc_write_physdone(zio_t *zio)
{
	arc_write_callback_t *cb = zio->io_private;
	if (cb->awcb_physdone != NULL)
		cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private);
}
4602 arc_write_done(zio_t
*zio
)
4604 arc_write_callback_t
*callback
= zio
->io_private
;
4605 arc_buf_t
*buf
= callback
->awcb_buf
;
4606 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
4608 ASSERT(hdr
->b_l1hdr
.b_acb
== NULL
);
4610 if (zio
->io_error
== 0) {
4611 if (BP_IS_HOLE(zio
->io_bp
) || BP_IS_EMBEDDED(zio
->io_bp
)) {
4612 buf_discard_identity(hdr
);
4614 hdr
->b_dva
= *BP_IDENTITY(zio
->io_bp
);
4615 hdr
->b_birth
= BP_PHYSICAL_BIRTH(zio
->io_bp
);
4618 ASSERT(BUF_EMPTY(hdr
));
	/*
	 * If the block to be written was all-zero or compressed enough to be
	 * embedded in the BP, no write was performed so there will be no
	 * dva/birth/checksum.  The buffer must therefore remain anonymous
	 * (and uncached).
	 */
4627 if (!BUF_EMPTY(hdr
)) {
4628 arc_buf_hdr_t
*exists
;
4629 kmutex_t
*hash_lock
;
4631 ASSERT(zio
->io_error
== 0);
4633 arc_cksum_verify(buf
);
4635 exists
= buf_hash_insert(hdr
, &hash_lock
);
4636 if (exists
!= NULL
) {
			/*
			 * This can only happen if we overwrite for
			 * sync-to-convergence, because we remove
			 * buffers from the hash table when we arc_free().
			 */
4642 if (zio
->io_flags
& ZIO_FLAG_IO_REWRITE
) {
4643 if (!BP_EQUAL(&zio
->io_bp_orig
, zio
->io_bp
))
4644 panic("bad overwrite, hdr=%p exists=%p",
4645 (void *)hdr
, (void *)exists
);
4646 ASSERT(refcount_is_zero(
4647 &exists
->b_l1hdr
.b_refcnt
));
4648 arc_change_state(arc_anon
, exists
, hash_lock
);
4649 mutex_exit(hash_lock
);
4650 arc_hdr_destroy(exists
);
4651 exists
= buf_hash_insert(hdr
, &hash_lock
);
4652 ASSERT3P(exists
, ==, NULL
);
4653 } else if (zio
->io_flags
& ZIO_FLAG_NOPWRITE
) {
4655 ASSERT(zio
->io_prop
.zp_nopwrite
);
4656 if (!BP_EQUAL(&zio
->io_bp_orig
, zio
->io_bp
))
4657 panic("bad nopwrite, hdr=%p exists=%p",
4658 (void *)hdr
, (void *)exists
);
4661 ASSERT(hdr
->b_l1hdr
.b_datacnt
== 1);
4662 ASSERT(hdr
->b_l1hdr
.b_state
== arc_anon
);
4663 ASSERT(BP_GET_DEDUP(zio
->io_bp
));
4664 ASSERT(BP_GET_LEVEL(zio
->io_bp
) == 0);
4667 hdr
->b_flags
&= ~ARC_FLAG_IO_IN_PROGRESS
;
4668 /* if it's not anon, we are doing a scrub */
4669 if (exists
== NULL
&& hdr
->b_l1hdr
.b_state
== arc_anon
)
4670 arc_access(hdr
, hash_lock
);
4671 mutex_exit(hash_lock
);
4673 hdr
->b_flags
&= ~ARC_FLAG_IO_IN_PROGRESS
;
4676 ASSERT(!refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
));
4677 callback
->awcb_done(zio
, buf
, callback
->awcb_private
);
4679 kmem_free(callback
, sizeof (arc_write_callback_t
));
4683 arc_write(zio_t
*pio
, spa_t
*spa
, uint64_t txg
,
4684 blkptr_t
*bp
, arc_buf_t
*buf
, boolean_t l2arc
, boolean_t l2arc_compress
,
4685 const zio_prop_t
*zp
, arc_done_func_t
*ready
, arc_done_func_t
*physdone
,
4686 arc_done_func_t
*done
, void *private, zio_priority_t priority
,
4687 int zio_flags
, const zbookmark_phys_t
*zb
)
4689 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
4690 arc_write_callback_t
*callback
;
4693 ASSERT(ready
!= NULL
);
4694 ASSERT(done
!= NULL
);
4695 ASSERT(!HDR_IO_ERROR(hdr
));
4696 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
4697 ASSERT(hdr
->b_l1hdr
.b_acb
== NULL
);
4698 ASSERT(hdr
->b_l1hdr
.b_datacnt
> 0);
4700 hdr
->b_flags
|= ARC_FLAG_L2CACHE
;
4702 hdr
->b_flags
|= ARC_FLAG_L2COMPRESS
;
4703 callback
= kmem_zalloc(sizeof (arc_write_callback_t
), KM_SLEEP
);
4704 callback
->awcb_ready
= ready
;
4705 callback
->awcb_physdone
= physdone
;
4706 callback
->awcb_done
= done
;
4707 callback
->awcb_private
= private;
4708 callback
->awcb_buf
= buf
;
4710 zio
= zio_write(pio
, spa
, txg
, bp
, buf
->b_data
, hdr
->b_size
, zp
,
4711 arc_write_ready
, arc_write_physdone
, arc_write_done
, callback
,
4712 priority
, zio_flags
, zb
);
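
/*
 * Example usage (illustrative sketch only; the canonical caller is
 * dbuf_write() in dbuf.c): a writer typically releases the buffer first so
 * it is anonymous, then issues arc_write() with its own ready/done
 * callbacks and chains or waits on the returned zio:
 *
 *	arc_release(db->db_buf, db);
 *	zio = arc_write(pio, spa, txg, bp, db->db_buf,
 *	    B_TRUE, B_FALSE,		(l2arc, l2arc_compress)
 *	    &zp, my_write_ready, my_write_physdone, my_write_done, db,
 *	    ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_MUSTSUCCEED, &zb);
 *
 * The my_* callbacks and the db argument are stand-ins for whatever
 * bookkeeping the caller needs done in arc_write_ready()/arc_write_done().
 */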
static int
arc_memory_throttle(uint64_t reserve, uint64_t txg)
{
	if (zfs_arc_memory_throttle_disable)
		return (0);

	if (freemem <= physmem * arc_lotsfree_percent / 100) {
		ARCSTAT_INCR(arcstat_memory_throttle_count, 1);
		DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
		return (SET_ERROR(EAGAIN));
	}

	return (0);
}
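
/*
 * Worked example (illustrative only; the default of arc_lotsfree_percent = 10
 * is an assumption): on a machine with 16 GiB of physical memory,
 * arc_memory_throttle() starts returning EAGAIN once less than roughly
 * 1.6 GiB is free, which delays new dirty data until memory reclaim has
 * caught up.
 */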
void
arc_tempreserve_clear(uint64_t reserve)
{
	atomic_add_64(&arc_tempreserve, -reserve);
	ASSERT((int64_t)arc_tempreserve >= 0);
}
int
arc_tempreserve_space(uint64_t reserve, uint64_t txg)
{
	int error;
	uint64_t anon_size;

	if (reserve > arc_c/4 && !arc_no_grow)
		arc_c = MIN(arc_c_max, reserve * 4);

	/*
	 * Throttle when the calculated memory footprint for the TXG
	 * exceeds the target ARC size.
	 */
	if (reserve > arc_c) {
		DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
		return (SET_ERROR(ERESTART));
	}
	/*
	 * Don't count loaned bufs as in flight dirty data to prevent long
	 * network delays from blocking transactions that are ready to be
	 * assigned to a txg.
	 */
	anon_size = MAX((int64_t)(arc_anon->arcs_size - arc_loaned_bytes), 0);

	/*
	 * Writes will, almost always, require additional memory allocations
	 * in order to compress/encrypt/etc the data.  We therefore need to
	 * make sure that there is sufficient available memory for this.
	 */
	error = arc_memory_throttle(reserve, txg);
	if (error != 0)
		return (error);
	/*
	 * Throttle writes when the amount of dirty data in the cache
	 * gets too large.  We try to keep the cache less than half full
	 * of dirty blocks so that our sync times don't grow too large.
	 * Note: if two requests come in concurrently, we might let them
	 * both succeed, when one of them should fail.  Not a huge deal.
	 */
	if (reserve + arc_tempreserve + anon_size > arc_c / 2 &&
	    anon_size > arc_c / 4) {
		dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
		    "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
		    arc_tempreserve >> 10,
		    arc_anon->arcs_lsize[ARC_BUFC_METADATA] >> 10,
		    arc_anon->arcs_lsize[ARC_BUFC_DATA] >> 10,
		    reserve >> 10, arc_c >> 10);
		DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
		return (SET_ERROR(ERESTART));
	}
	atomic_add_64(&arc_tempreserve, reserve);
	return (0);
}
4798 arc_kstat_update_state(arc_state_t
*state
, kstat_named_t
*size
,
4799 kstat_named_t
*evict_data
, kstat_named_t
*evict_metadata
)
4801 size
->value
.ui64
= state
->arcs_size
;
4802 evict_data
->value
.ui64
= state
->arcs_lsize
[ARC_BUFC_DATA
];
4803 evict_metadata
->value
.ui64
= state
->arcs_lsize
[ARC_BUFC_METADATA
];
4807 arc_kstat_update(kstat_t
*ksp
, int rw
)
4809 arc_stats_t
*as
= ksp
->ks_data
;
4811 if (rw
== KSTAT_WRITE
) {
4812 return (SET_ERROR(EACCES
));
4814 arc_kstat_update_state(arc_anon
,
4815 &as
->arcstat_anon_size
,
4816 &as
->arcstat_anon_evict_data
,
4817 &as
->arcstat_anon_evict_metadata
);
4818 arc_kstat_update_state(arc_mru
,
4819 &as
->arcstat_mru_size
,
4820 &as
->arcstat_mru_evict_data
,
4821 &as
->arcstat_mru_evict_metadata
);
4822 arc_kstat_update_state(arc_mru_ghost
,
4823 &as
->arcstat_mru_ghost_size
,
4824 &as
->arcstat_mru_ghost_evict_data
,
4825 &as
->arcstat_mru_ghost_evict_metadata
);
4826 arc_kstat_update_state(arc_mfu
,
4827 &as
->arcstat_mfu_size
,
4828 &as
->arcstat_mfu_evict_data
,
4829 &as
->arcstat_mfu_evict_metadata
);
4830 arc_kstat_update_state(arc_mfu_ghost
,
4831 &as
->arcstat_mfu_ghost_size
,
4832 &as
->arcstat_mfu_ghost_evict_data
,
4833 &as
->arcstat_mfu_ghost_evict_metadata
);
/*
 * This function *must* return indices evenly distributed between all
 * sublists of the multilist.  This is needed due to how the ARC eviction
 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
 * distributed between all sublists and uses this assumption when
 * deciding which sublist to evict from and how much to evict from it.
 */
static unsigned int
arc_state_multilist_index_func(multilist_t *ml, void *obj)
{
	arc_buf_hdr_t *hdr = obj;

	/*
	 * We rely on b_dva to generate evenly distributed index
	 * numbers using buf_hash below.  So, as an added precaution,
	 * let's make sure we never add empty buffers to the arc lists.
	 */
	ASSERT(!BUF_EMPTY(hdr));

	/*
	 * The assumption here, is the hash value for a given
	 * arc_buf_hdr_t will remain constant throughout its lifetime
	 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
	 * Thus, we don't need to store the header's sublist index
	 * on insertion, as this index can be recalculated on removal.
	 *
	 * Also, the low order bits of the hash value are thought to be
	 * distributed evenly.  Otherwise, in the case that the multilist
	 * has a power of two number of sublists, each sublists' usage
	 * would not be evenly distributed.
	 */
	return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
	    multilist_get_num_sublists(ml));
}
4877 mutex_init(&arc_reclaim_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
4878 cv_init(&arc_reclaim_thread_cv
, NULL
, CV_DEFAULT
, NULL
);
4879 cv_init(&arc_reclaim_waiters_cv
, NULL
, CV_DEFAULT
, NULL
);
4881 mutex_init(&arc_user_evicts_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
4882 cv_init(&arc_user_evicts_cv
, NULL
, CV_DEFAULT
, NULL
);
4884 /* Convert seconds to clock ticks */
4885 zfs_arc_min_prefetch_lifespan
= 1 * hz
;
4887 /* Start out with 1/8 of all memory */
4888 arc_c
= physmem
* PAGESIZE
/ 8;
	/*
	 * On architectures where the physical memory can be larger
	 * than the addressable space (intel in 32-bit mode), we may
	 * need to limit the cache to 1/8 of VM size.
	 */
4896 arc_c
= MIN(arc_c
, vmem_size(heap_arena
, VMEM_ALLOC
| VMEM_FREE
) / 8);
	/*
	 * Register a shrinker to support synchronous (direct) memory
	 * reclaim from the arc.  This is done to prevent kswapd from
	 * swapping out pages when it is preferable to shrink the arc.
	 */
4902 spl_register_shrinker(&arc_shrinker
);
4905 /* set min cache to allow safe operation of arc_adapt() */
4906 arc_c_min
= 2ULL << SPA_MAXBLOCKSHIFT
;
4907 /* set max to 1/2 of all memory */
4908 arc_c_max
= arc_c
* 4;
	/*
	 * Allow the tunables to override our calculations if they are
	 * reasonable (ie. over 64MB)
	 */
4914 if (zfs_arc_max
> 64<<20 && zfs_arc_max
< physmem
* PAGESIZE
)
4915 arc_c_max
= zfs_arc_max
;
4916 if (zfs_arc_min
>= 2ULL << SPA_MAXBLOCKSHIFT
&&
4917 zfs_arc_min
<= arc_c_max
)
4918 arc_c_min
= zfs_arc_min
;
4921 arc_p
= (arc_c
>> 1);
4923 /* limit meta-data to 3/4 of the arc capacity */
4924 arc_meta_limit
= (3 * arc_c_max
) / 4;
4927 /* Allow the tunable to override if it is reasonable */
4928 if (zfs_arc_meta_limit
> 0 && zfs_arc_meta_limit
<= arc_c_max
)
4929 arc_meta_limit
= zfs_arc_meta_limit
;
4931 if (zfs_arc_meta_min
> 0) {
4932 arc_meta_min
= zfs_arc_meta_min
;
4934 arc_meta_min
= arc_c_min
/ 2;
4937 if (zfs_arc_num_sublists_per_state
< 1)
4938 zfs_arc_num_sublists_per_state
= num_online_cpus();
4940 /* if kmem_flags are set, lets try to use less memory */
4941 if (kmem_debugging())
4943 if (arc_c
< arc_c_min
)
4946 arc_anon
= &ARC_anon
;
4948 arc_mru_ghost
= &ARC_mru_ghost
;
4950 arc_mfu_ghost
= &ARC_mfu_ghost
;
4951 arc_l2c_only
= &ARC_l2c_only
;
4954 multilist_create(&arc_mru
->arcs_list
[ARC_BUFC_METADATA
],
4955 sizeof (arc_buf_hdr_t
),
4956 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4957 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4958 multilist_create(&arc_mru
->arcs_list
[ARC_BUFC_DATA
],
4959 sizeof (arc_buf_hdr_t
),
4960 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4961 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4962 multilist_create(&arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
],
4963 sizeof (arc_buf_hdr_t
),
4964 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4965 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4966 multilist_create(&arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
],
4967 sizeof (arc_buf_hdr_t
),
4968 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4969 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4970 multilist_create(&arc_mfu
->arcs_list
[ARC_BUFC_METADATA
],
4971 sizeof (arc_buf_hdr_t
),
4972 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4973 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4974 multilist_create(&arc_mfu
->arcs_list
[ARC_BUFC_DATA
],
4975 sizeof (arc_buf_hdr_t
),
4976 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4977 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4978 multilist_create(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
],
4979 sizeof (arc_buf_hdr_t
),
4980 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4981 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4982 multilist_create(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
],
4983 sizeof (arc_buf_hdr_t
),
4984 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4985 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4986 multilist_create(&arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
],
4987 sizeof (arc_buf_hdr_t
),
4988 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4989 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4990 multilist_create(&arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
],
4991 sizeof (arc_buf_hdr_t
),
4992 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
4993 zfs_arc_num_sublists_per_state
, arc_state_multilist_index_func
);
4995 arc_anon
->arcs_state
= ARC_STATE_ANON
;
4996 arc_mru
->arcs_state
= ARC_STATE_MRU
;
4997 arc_mru_ghost
->arcs_state
= ARC_STATE_MRU_GHOST
;
4998 arc_mfu
->arcs_state
= ARC_STATE_MFU
;
4999 arc_mfu_ghost
->arcs_state
= ARC_STATE_MFU_GHOST
;
5000 arc_l2c_only
->arcs_state
= ARC_STATE_L2C_ONLY
;
5004 arc_reclaim_thread_exit
= FALSE
;
5005 arc_user_evicts_thread_exit
= FALSE
;
5006 list_create(&arc_prune_list
, sizeof (arc_prune_t
),
5007 offsetof(arc_prune_t
, p_node
));
5008 arc_eviction_list
= NULL
;
5009 mutex_init(&arc_prune_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
5010 bzero(&arc_eviction_hdr
, sizeof (arc_buf_hdr_t
));
5012 arc_prune_taskq
= taskq_create("arc_prune", max_ncpus
, minclsyspri
,
5013 max_ncpus
, INT_MAX
, TASKQ_PREPOPULATE
| TASKQ_DYNAMIC
);
5015 arc_ksp
= kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED
,
5016 sizeof (arc_stats
) / sizeof (kstat_named_t
), KSTAT_FLAG_VIRTUAL
);
5018 if (arc_ksp
!= NULL
) {
5019 arc_ksp
->ks_data
= &arc_stats
;
5020 arc_ksp
->ks_update
= arc_kstat_update
;
5021 kstat_install(arc_ksp
);
5024 (void) thread_create(NULL
, 0, arc_adapt_thread
, NULL
, 0, &p0
,
5025 TS_RUN
, minclsyspri
);
5027 (void) thread_create(NULL
, 0, arc_user_evicts_thread
, NULL
, 0, &p0
,
5028 TS_RUN
, minclsyspri
);
	/*
	 * Calculate maximum amount of dirty data per pool.
	 *
	 * If it has been set by a module parameter, take that.
	 * Otherwise, use a percentage of physical memory defined by
	 * zfs_dirty_data_max_percent (default 10%) with a cap at
	 * zfs_dirty_data_max_max (default 25% of physical memory).
	 */
5041 if (zfs_dirty_data_max_max
== 0)
5042 zfs_dirty_data_max_max
= physmem
* PAGESIZE
*
5043 zfs_dirty_data_max_max_percent
/ 100;
5045 if (zfs_dirty_data_max
== 0) {
5046 zfs_dirty_data_max
= physmem
* PAGESIZE
*
5047 zfs_dirty_data_max_percent
/ 100;
5048 zfs_dirty_data_max
= MIN(zfs_dirty_data_max
,
5049 zfs_dirty_data_max_max
);
5059 spl_unregister_shrinker(&arc_shrinker
);
5060 #endif /* _KERNEL */
5062 mutex_enter(&arc_reclaim_lock
);
5063 arc_reclaim_thread_exit
= TRUE
;
5065 * The reclaim thread will set arc_reclaim_thread_exit back to
5066 * FALSE when it is finished exiting; we're waiting for that.
5068 while (arc_reclaim_thread_exit
) {
5069 cv_signal(&arc_reclaim_thread_cv
);
5070 cv_wait(&arc_reclaim_thread_cv
, &arc_reclaim_lock
);
5072 mutex_exit(&arc_reclaim_lock
);
5074 mutex_enter(&arc_user_evicts_lock
);
5075 arc_user_evicts_thread_exit
= TRUE
;
5077 * The user evicts thread will set arc_user_evicts_thread_exit
5078 * to FALSE when it is finished exiting; we're waiting for that.
5080 while (arc_user_evicts_thread_exit
) {
5081 cv_signal(&arc_user_evicts_cv
);
5082 cv_wait(&arc_user_evicts_cv
, &arc_user_evicts_lock
);
5084 mutex_exit(&arc_user_evicts_lock
);
5086 /* Use TRUE to ensure *all* buffers are evicted */
5087 arc_flush(NULL
, TRUE
);
5091 if (arc_ksp
!= NULL
) {
5092 kstat_delete(arc_ksp
);
5096 taskq_wait(arc_prune_taskq
);
5097 taskq_destroy(arc_prune_taskq
);
5099 mutex_enter(&arc_prune_mtx
);
5100 while ((p
= list_head(&arc_prune_list
)) != NULL
) {
5101 list_remove(&arc_prune_list
, p
);
5102 refcount_remove(&p
->p_refcnt
, &arc_prune_list
);
5103 refcount_destroy(&p
->p_refcnt
);
5104 kmem_free(p
, sizeof (*p
));
5106 mutex_exit(&arc_prune_mtx
);
5108 list_destroy(&arc_prune_list
);
5109 mutex_destroy(&arc_prune_mtx
);
5110 mutex_destroy(&arc_reclaim_lock
);
5111 cv_destroy(&arc_reclaim_thread_cv
);
5112 cv_destroy(&arc_reclaim_waiters_cv
);
5114 mutex_destroy(&arc_user_evicts_lock
);
5115 cv_destroy(&arc_user_evicts_cv
);
5117 multilist_destroy(&arc_mru
->arcs_list
[ARC_BUFC_METADATA
]);
5118 multilist_destroy(&arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
5119 multilist_destroy(&arc_mfu
->arcs_list
[ARC_BUFC_METADATA
]);
5120 multilist_destroy(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
5121 multilist_destroy(&arc_mru
->arcs_list
[ARC_BUFC_DATA
]);
5122 multilist_destroy(&arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
]);
5123 multilist_destroy(&arc_mfu
->arcs_list
[ARC_BUFC_DATA
]);
5124 multilist_destroy(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
]);
5125 multilist_destroy(&arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
]);
5126 multilist_destroy(&arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
]);
5130 ASSERT0(arc_loaned_bytes
);
 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
 * It uses dedicated storage devices to hold cached data, which are populated
 * using large infrequent writes.  The main role of this cache is to boost
 * the performance of random read workloads.  The intended L2ARC devices
 * include short-stroked disks, solid state disks, and other media with
 * substantially faster read latency than disk.
 *
 *                 +-----------------------+
 *                 |         ARC           |
 *                 +-----------------------+
 *                    |         ^     ^
 *                    |         |     |
 *      l2arc_feed_thread()    arc_read()
 *                    |         |     |
 *                    |  l2arc read   |
 *                    V         |     |
 *               +---------------+    |
 *               |     L2ARC     |    |
 *               +---------------+    |
 *                   |    ^           |
 *          l2arc_write() |           |
 *                   |    |           |
 *                   V    |           |
 *                 +-------+      +-------+
 *                 | vdev  |      | vdev  |
 *                 | cache |      | cache |
 *                 +-------+      +-------+
 *                 +=========+     .-----.
 *                 :  L2ARC  :    |-_____-|
 *                 : devices :    | Disks |
 *                 +=========+    `-_____-'
 *
 * Read requests are satisfied from the following sources, in order:
 *
 *	1) ARC
 *	2) vdev cache of L2ARC devices
 *	3) L2ARC devices
 *	4) vdev cache of disks
 *	5) disks
 *
 * Some L2ARC device types exhibit extremely slow write performance.
 * To accommodate for this there are some significant differences between
 * the L2ARC and traditional cache design:
 *
 * 1. There is no eviction path from the ARC to the L2ARC.  Evictions from
 * the ARC behave as usual, freeing buffers and placing headers on ghost
 * lists.  The ARC does not send buffers to the L2ARC during eviction as
 * this would add inflated write latencies for all ARC memory pressure.
 *
 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
 * It does this by periodically scanning buffers from the eviction-end of
 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
 * not already there.  It scans until a headroom of buffers is satisfied,
 * which itself is a buffer for ARC eviction.  If a compressible buffer is
 * found during scanning and selected for writing to an L2ARC device, we
 * temporarily boost scanning headroom during the next scan cycle to make
 * sure we adapt to compression effects (which might significantly reduce
 * the data volume we write to L2ARC).  The thread that does this is
 * l2arc_feed_thread(), illustrated below; example sizes are included to
 * provide a better sense of ratio than this diagram:
 *
 *	       head -->                        tail
 *	        +---------------------+----------+
 *	ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
 *	        +---------------------+----------+   |   o L2ARC eligible
 *	ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   : ARC buffer
 *	        +---------------------+----------+   |
 *	             15.9 Gbytes      ^ 32 Mbytes    |
 *	                           headroom          |
 *	                                      l2arc_feed_thread()
 *	                                             |
 *	                 l2arc write hand <--[oooo]--'
 *	          +==============================+
 *	L2ARC dev |####|#|###|###|    |####| ... |
 *	          +==============================+
 *
 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
 * evicted, then the L2ARC has cached a buffer much sooner than it probably
 * needed to, potentially wasting L2ARC device bandwidth and storage.  It is
 * safe to say that this is an uncommon case, since buffers at the end of
 * the ARC lists have moved there due to inactivity.
 *
 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
 * then the L2ARC simply misses copying some buffers.  This serves as a
 * pressure valve to prevent heavy read workloads from both stalling the ARC
 * with waits and clogging the L2ARC with writes.  This also helps prevent
 * the potential for the L2ARC to churn if it attempts to cache content too
 * quickly, such as during backups of the entire pool.
 *
 * 5. After system boot and before the ARC has filled main memory, there are
 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
 * lists can remain mostly static.  Instead of searching from tail of these
 * lists as pictured, the l2arc_feed_thread() will search from the list heads
 * for eligible buffers, greatly increasing its chance of finding them.
 *
 * The L2ARC device write speed is also boosted during this time so that
 * the L2ARC warms up faster.  Since there have been no ARC evictions yet,
 * there are no L2ARC reads, and no fear of degrading read performance
 * through increased writes.
 *
 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
 * the vdev queue can aggregate them into larger and fewer writes.  Each
 * device is written to in a rotor fashion, sweeping writes through
 * available space then repeating.
 *
 * 7. The L2ARC does not store dirty content.  It never needs to flush
 * write buffers back to disk based storage.
 *
 * 8. If an ARC buffer is written (and dirtied) which also exists in the
 * L2ARC, the now stale L2ARC buffer is immediately dropped.
 *
 * The performance of the L2ARC can be tweaked by a number of tunables, which
 * may be necessary for different workloads:
 *
 *	l2arc_write_max		max write bytes per interval
 *	l2arc_write_boost	extra write bytes during device warmup
 *	l2arc_noprefetch	skip caching prefetched buffers
 *	l2arc_nocompress	skip compressing buffers
 *	l2arc_headroom		number of max device writes to precache
 *	l2arc_headroom_boost	when we find compressed buffers during ARC
 *				scanning, we multiply headroom by this
 *				percentage factor for the next scan cycle,
 *				since more compressed buffers are likely to
 *				be present
 *	l2arc_feed_secs		seconds between L2ARC writing
 *
 * Tunables may be removed or added as future performance improvements are
 * integrated, and also may become zpool properties.
 *
 * There are three key functions that control how the L2ARC warms up:
 *
 *	l2arc_write_eligible()	check if a buffer is eligible to cache
 *	l2arc_write_size()	calculate how much to write
 *	l2arc_write_interval()	calculate sleep delay between writes
 *
 * These three functions determine what to write, how much, and how quickly
 * to send writes.
 */
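
/*
 * Example of how these tunables interact (illustrative sketch; the default
 * values below are assumptions based on the shipped module parameters):
 * with l2arc_write_max = 8 MB, l2arc_write_boost = 8 MB and
 * l2arc_headroom = 2, each feed cycle while the ARC is still warming up may
 * write up to 8 MB + 8 MB = 16 MB to the device, and the feed thread scans
 * roughly 2 * 16 MB = 32 MB from the tail (or head, see item 5 above) of
 * the ARC lists looking for eligible buffers.
 */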
static boolean_t
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
{
	/*
	 * A buffer is *not* eligible for the L2ARC if it:
	 * 1. belongs to a different spa.
	 * 2. is already cached on the L2ARC.
	 * 3. has an I/O in progress (it may be an incomplete read).
	 * 4. is flagged not eligible (zfs property).
	 */
	if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
	    HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
		return (B_FALSE);

	return (B_TRUE);
}
static uint64_t
l2arc_write_size(void)
{
	uint64_t size;

	/*
	 * Make sure our globals have meaningful values in case the user
	 * altered them.
	 */
	size = l2arc_write_max;
	if (size == 0) {
		cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
		    "be greater than zero, resetting it to the default (%d)",
		    L2ARC_WRITE_SIZE);
		size = l2arc_write_max = L2ARC_WRITE_SIZE;
	}

	if (arc_warm == B_FALSE)
		size += l2arc_write_boost;
static clock_t
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
{
	clock_t interval, next, now;

	/*
	 * If the ARC lists are busy, increase our write rate; if the
	 * lists are stale, idle back.  This is achieved by checking
	 * how much we previously wrote - if it was more than half of
	 * what we wanted, schedule the next write much sooner.
	 */
	if (l2arc_feed_again && wrote > (wanted / 2))
		interval = (hz * l2arc_feed_min_ms) / 1000;
	else
		interval = hz * l2arc_feed_secs;

	now = ddi_get_lbolt();
	next = MAX(now, MIN(now + interval, began + interval));

	return (next);
}
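
/*
 * Worked example (illustrative; assumes the usual defaults of
 * l2arc_feed_secs = 1 and l2arc_feed_min_ms = 200 with hz = 1000):
 * if the previous pass wrote more than half of what it wanted, the next
 * feed is scheduled ~200 ticks (200 ms) after the previous pass began;
 * otherwise it idles back to ~1000 ticks (1 s).  The MAX(now, ...) clamp
 * simply prevents scheduling a wakeup in the past when a pass overran its
 * interval.
 */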
5343 * Cycle through L2ARC devices. This is how L2ARC load balances.
5344 * If a device is returned, this also returns holding the spa config lock.
5346 static l2arc_dev_t
*
5347 l2arc_dev_get_next(void)
5349 l2arc_dev_t
*first
, *next
= NULL
;
5352 * Lock out the removal of spas (spa_namespace_lock), then removal
5353 * of cache devices (l2arc_dev_mtx). Once a device has been selected,
5354 * both locks will be dropped and a spa config lock held instead.
5356 mutex_enter(&spa_namespace_lock
);
5357 mutex_enter(&l2arc_dev_mtx
);
5359 /* if there are no vdevs, there is nothing to do */
5360 if (l2arc_ndev
== 0)
5364 next
= l2arc_dev_last
;
5366 /* loop around the list looking for a non-faulted vdev */
5368 next
= list_head(l2arc_dev_list
);
5370 next
= list_next(l2arc_dev_list
, next
);
5372 next
= list_head(l2arc_dev_list
);
5375 /* if we have come back to the start, bail out */
5378 else if (next
== first
)
5381 } while (vdev_is_dead(next
->l2ad_vdev
));
5383 /* if we were unable to find any usable vdevs, return NULL */
5384 if (vdev_is_dead(next
->l2ad_vdev
))
5387 l2arc_dev_last
= next
;
5390 mutex_exit(&l2arc_dev_mtx
);
5393 * Grab the config lock to prevent the 'next' device from being
5394 * removed while we are writing to it.
5397 spa_config_enter(next
->l2ad_spa
, SCL_L2ARC
, next
, RW_READER
);
5398 mutex_exit(&spa_namespace_lock
);
5404 * Free buffers that were tagged for destruction.
5407 l2arc_do_free_on_write(void)
5410 l2arc_data_free_t
*df
, *df_prev
;
5412 mutex_enter(&l2arc_free_on_write_mtx
);
5413 buflist
= l2arc_free_on_write
;
5415 for (df
= list_tail(buflist
); df
; df
= df_prev
) {
5416 df_prev
= list_prev(buflist
, df
);
5417 ASSERT(df
->l2df_data
!= NULL
);
5418 ASSERT(df
->l2df_func
!= NULL
);
5419 df
->l2df_func(df
->l2df_data
, df
->l2df_size
);
5420 list_remove(buflist
, df
);
5421 kmem_free(df
, sizeof (l2arc_data_free_t
));
5424 mutex_exit(&l2arc_free_on_write_mtx
);
5428 * A write to a cache device has completed. Update all headers to allow
5429 * reads from these buffers to begin.
5432 l2arc_write_done(zio_t
*zio
)
5434 l2arc_write_callback_t
*cb
;
5437 arc_buf_hdr_t
*head
, *hdr
, *hdr_prev
;
5438 kmutex_t
*hash_lock
;
5439 int64_t bytes_dropped
= 0;
5441 cb
= zio
->io_private
;
5443 dev
= cb
->l2wcb_dev
;
5444 ASSERT(dev
!= NULL
);
5445 head
= cb
->l2wcb_head
;
5446 ASSERT(head
!= NULL
);
5447 buflist
= &dev
->l2ad_buflist
;
5448 ASSERT(buflist
!= NULL
);
5449 DTRACE_PROBE2(l2arc__iodone
, zio_t
*, zio
,
5450 l2arc_write_callback_t
*, cb
);
5452 if (zio
->io_error
!= 0)
5453 ARCSTAT_BUMP(arcstat_l2_writes_error
);
5456 * All writes completed, or an error was hit.
5459 mutex_enter(&dev
->l2ad_mtx
);
5460 for (hdr
= list_prev(buflist
, head
); hdr
; hdr
= hdr_prev
) {
5461 hdr_prev
= list_prev(buflist
, hdr
);
5463 hash_lock
= HDR_LOCK(hdr
);
5466 * We cannot use mutex_enter or else we can deadlock
5467 * with l2arc_write_buffers (due to swapping the order
5468 * the hash lock and l2ad_mtx are taken).
5470 if (!mutex_tryenter(hash_lock
)) {
5472 * Missed the hash lock. We must retry so we
5473 * don't leave the ARC_FLAG_L2_WRITING bit set.
5475 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry
);
5478 * We don't want to rescan the headers we've
5479 * already marked as having been written out, so
5480 * we reinsert the head node so we can pick up
5481 * where we left off.
5483 list_remove(buflist
, head
);
5484 list_insert_after(buflist
, hdr
, head
);
5486 mutex_exit(&dev
->l2ad_mtx
);
5489 * We wait for the hash lock to become available
5490 * to try and prevent busy waiting, and increase
5491 * the chance we'll be able to acquire the lock
5492 * the next time around.
5494 mutex_enter(hash_lock
);
5495 mutex_exit(hash_lock
);
5500 * We could not have been moved into the arc_l2c_only
5501 * state while in-flight due to our ARC_FLAG_L2_WRITING
5502 * bit being set. Let's just ensure that's being enforced.
5504 ASSERT(HDR_HAS_L1HDR(hdr
));
5507 * We may have allocated a buffer for L2ARC compression,
5508 * we must release it to avoid leaking this data.
5510 l2arc_release_cdata_buf(hdr
);
5512 if (zio
->io_error
!= 0) {
5514 * Error - drop L2ARC entry.
5516 list_remove(buflist
, hdr
);
5517 hdr
->b_flags
&= ~ARC_FLAG_HAS_L2HDR
;
5519 ARCSTAT_INCR(arcstat_l2_asize
, -hdr
->b_l2hdr
.b_asize
);
5520 ARCSTAT_INCR(arcstat_l2_size
, -hdr
->b_size
);
5522 bytes_dropped
+= hdr
->b_l2hdr
.b_asize
;
5523 (void) refcount_remove_many(&dev
->l2ad_alloc
,
5524 hdr
->b_l2hdr
.b_asize
, hdr
);
5528 * Allow ARC to begin reads and ghost list evictions to
5531 hdr
->b_flags
&= ~ARC_FLAG_L2_WRITING
;
5533 mutex_exit(hash_lock
);
5536 atomic_inc_64(&l2arc_writes_done
);
5537 list_remove(buflist
, head
);
5538 ASSERT(!HDR_HAS_L1HDR(head
));
5539 kmem_cache_free(hdr_l2only_cache
, head
);
5540 mutex_exit(&dev
->l2ad_mtx
);
5542 vdev_space_update(dev
->l2ad_vdev
, -bytes_dropped
, 0, 0);
5544 l2arc_do_free_on_write();
5546 kmem_free(cb
, sizeof (l2arc_write_callback_t
));
5550 * A read to a cache device completed. Validate buffer contents before
5551 * handing over to the regular ARC routines.
5554 l2arc_read_done(zio_t
*zio
)
5556 l2arc_read_callback_t
*cb
;
5559 kmutex_t
*hash_lock
;
5562 ASSERT(zio
->io_vd
!= NULL
);
5563 ASSERT(zio
->io_flags
& ZIO_FLAG_DONT_PROPAGATE
);
5565 spa_config_exit(zio
->io_spa
, SCL_L2ARC
, zio
->io_vd
);
5567 cb
= zio
->io_private
;
5569 buf
= cb
->l2rcb_buf
;
5570 ASSERT(buf
!= NULL
);
5572 hash_lock
= HDR_LOCK(buf
->b_hdr
);
5573 mutex_enter(hash_lock
);
5575 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
5578 * If the buffer was compressed, decompress it first.
5580 if (cb
->l2rcb_compress
!= ZIO_COMPRESS_OFF
)
5581 l2arc_decompress_zio(zio
, hdr
, cb
->l2rcb_compress
);
5582 ASSERT(zio
->io_data
!= NULL
);
5585 * Check this survived the L2ARC journey.
5587 equal
= arc_cksum_equal(buf
);
5588 if (equal
&& zio
->io_error
== 0 && !HDR_L2_EVICTED(hdr
)) {
5589 mutex_exit(hash_lock
);
5590 zio
->io_private
= buf
;
5591 zio
->io_bp_copy
= cb
->l2rcb_bp
; /* XXX fix in L2ARC 2.0 */
5592 zio
->io_bp
= &zio
->io_bp_copy
; /* XXX fix in L2ARC 2.0 */
5595 mutex_exit(hash_lock
);
5597 * Buffer didn't survive caching. Increment stats and
5598 * reissue to the original storage device.
5600 if (zio
->io_error
!= 0) {
5601 ARCSTAT_BUMP(arcstat_l2_io_error
);
5603 zio
->io_error
= SET_ERROR(EIO
);
5606 ARCSTAT_BUMP(arcstat_l2_cksum_bad
);
5609 * If there's no waiter, issue an async i/o to the primary
5610 * storage now. If there *is* a waiter, the caller must
5611 * issue the i/o in a context where it's OK to block.
5613 if (zio
->io_waiter
== NULL
) {
5614 zio_t
*pio
= zio_unique_parent(zio
);
5616 ASSERT(!pio
|| pio
->io_child_type
== ZIO_CHILD_LOGICAL
);
5618 zio_nowait(zio_read(pio
, cb
->l2rcb_spa
, &cb
->l2rcb_bp
,
5619 buf
->b_data
, zio
->io_size
, arc_read_done
, buf
,
5620 zio
->io_priority
, cb
->l2rcb_flags
, &cb
->l2rcb_zb
));
5624 kmem_free(cb
, sizeof (l2arc_read_callback_t
));
/*
 * This is the list priority from which the L2ARC will search for pages to
 * cache.  This is used within loops (0..3) to cycle through lists in the
 * desired order.  This order can have a significant effect on cache
 * performance.
 *
 * Currently the metadata lists are hit first, MFU then MRU, followed by
 * the data lists.  This function returns a locked list, and also returns
 * the lock pointer.
 */
static multilist_sublist_t *
l2arc_sublist_lock(int list_num)
*ml
= NULL
;
5643 ASSERT(list_num
>= 0 && list_num
<= 3);
5647 ml
= &arc_mfu
->arcs_list
[ARC_BUFC_METADATA
];
5650 ml
= &arc_mru
->arcs_list
[ARC_BUFC_METADATA
];
5653 ml
= &arc_mfu
->arcs_list
[ARC_BUFC_DATA
];
5656 ml
= &arc_mru
->arcs_list
[ARC_BUFC_DATA
];
	/*
	 * Return a randomly-selected sublist.  This is acceptable
	 * because the caller feeds only a little bit of data for each
	 * call (8MB).  Subsequent calls will result in different
	 * sublists being selected.
	 */
	idx = multilist_get_random_index(ml);
	return (multilist_sublist_lock(ml, idx));
}
/*
 * Evict buffers from the device write hand to the distance specified in
 * bytes.  This distance may span populated buffers, it may span nothing.
 * This is clearing a region on the L2ARC device ready for writing.
 * If the 'all' boolean is set, every buffer is evicted.
 */
5677 l2arc_evict(l2arc_dev_t
*dev
, uint64_t distance
, boolean_t all
)
5680 arc_buf_hdr_t
*hdr
, *hdr_prev
;
5681 kmutex_t
*hash_lock
;
5684 buflist
= &dev
->l2ad_buflist
;
5686 if (!all
&& dev
->l2ad_first
) {
5688 * This is the first sweep through the device. There is
5694 if (dev
->l2ad_hand
>= (dev
->l2ad_end
- (2 * distance
))) {
5696 * When nearing the end of the device, evict to the end
5697 * before the device write hand jumps to the start.
5699 taddr
= dev
->l2ad_end
;
5701 taddr
= dev
->l2ad_hand
+ distance
;
5703 DTRACE_PROBE4(l2arc__evict
, l2arc_dev_t
*, dev
, list_t
*, buflist
,
5704 uint64_t, taddr
, boolean_t
, all
);
5707 mutex_enter(&dev
->l2ad_mtx
);
5708 for (hdr
= list_tail(buflist
); hdr
; hdr
= hdr_prev
) {
5709 hdr_prev
= list_prev(buflist
, hdr
);
5711 hash_lock
= HDR_LOCK(hdr
);
5714 * We cannot use mutex_enter or else we can deadlock
5715 * with l2arc_write_buffers (due to swapping the order
5716 * the hash lock and l2ad_mtx are taken).
5718 if (!mutex_tryenter(hash_lock
)) {
5720 * Missed the hash lock. Retry.
5722 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry
);
5723 mutex_exit(&dev
->l2ad_mtx
);
5724 mutex_enter(hash_lock
);
5725 mutex_exit(hash_lock
);
5729 if (HDR_L2_WRITE_HEAD(hdr
)) {
5731 * We hit a write head node. Leave it for
5732 * l2arc_write_done().
5734 list_remove(buflist
, hdr
);
5735 mutex_exit(hash_lock
);
5739 if (!all
&& HDR_HAS_L2HDR(hdr
) &&
5740 (hdr
->b_l2hdr
.b_daddr
> taddr
||
5741 hdr
->b_l2hdr
.b_daddr
< dev
->l2ad_hand
)) {
5743 * We've evicted to the target address,
5744 * or the end of the device.
5746 mutex_exit(hash_lock
);
5750 ASSERT(HDR_HAS_L2HDR(hdr
));
5751 if (!HDR_HAS_L1HDR(hdr
)) {
5752 ASSERT(!HDR_L2_READING(hdr
));
5754 * This doesn't exist in the ARC. Destroy.
5755 * arc_hdr_destroy() will call list_remove()
5756 * and decrement arcstat_l2_size.
5758 arc_change_state(arc_anon
, hdr
, hash_lock
);
5759 arc_hdr_destroy(hdr
);
5761 ASSERT(hdr
->b_l1hdr
.b_state
!= arc_l2c_only
);
5762 ARCSTAT_BUMP(arcstat_l2_evict_l1cached
);
5764 * Invalidate issued or about to be issued
5765 * reads, since we may be about to write
5766 * over this location.
5768 if (HDR_L2_READING(hdr
)) {
5769 ARCSTAT_BUMP(arcstat_l2_evict_reading
);
5770 hdr
->b_flags
|= ARC_FLAG_L2_EVICTED
;
5773 /* Ensure this header has finished being written */
5774 ASSERT(!HDR_L2_WRITING(hdr
));
5775 ASSERT3P(hdr
->b_l1hdr
.b_tmp_cdata
, ==, NULL
);
5777 arc_hdr_l2hdr_destroy(hdr
);
5779 mutex_exit(hash_lock
);
5781 mutex_exit(&dev
->l2ad_mtx
);
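
/*
 * Illustrative sketch only (hypothetical helper, standalone C): the
 * eviction target computed at the top of l2arc_evict().  With, say, a
 * device region ending at 10 GB and an 8 MB write distance, the target
 * is hand + 8 MB until the hand comes within two write distances of
 * the end, at which point everything up to the end is evicted so the
 * hand can wrap cleanly to the start.
 */
#if 0
static uint64_t
l2arc_evict_target_sketch(uint64_t hand, uint64_t end, uint64_t distance)
{
	/* Mirrors the taddr selection in l2arc_evict() above. */
	if (hand >= (end - (2 * distance)))
		return (end);
	return (hand + distance);
}
#endif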
/*
 * Find and write ARC buffers to the L2ARC device.
 *
 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
 * for reading until they have completed writing.
 * The headroom_boost is an in-out parameter used to maintain headroom boost
 * state between calls to this function.
 *
 * Returns the number of bytes actually written (which may be smaller than
 * the delta by which the device hand has changed due to alignment).
 */
static uint64_t
l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz,
    boolean_t *headroom_boost)
{
	arc_buf_hdr_t *hdr, *hdr_prev, *head;
	uint64_t write_asize, write_sz, headroom, buf_compress_minsz,
	    stats_size;
	void *buf_data;
	boolean_t full;
	l2arc_write_callback_t *cb;
	zio_t *pio, *wzio;
	uint64_t guid = spa_load_guid(spa);
	int try;
	const boolean_t do_headroom_boost = *headroom_boost;

	ASSERT(dev->l2ad_vdev != NULL);

	/* Lower the flag now, we might want to raise it again later. */
	*headroom_boost = B_FALSE;

	pio = NULL;
	write_sz = write_asize = 0;
	full = B_FALSE;
	head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE);
	head->b_flags |= ARC_FLAG_L2_WRITE_HEAD;
	head->b_flags |= ARC_FLAG_HAS_L2HDR;

	/*
	 * We will want to try to compress buffers that are at least 2x the
	 * device sector size.
	 */
	buf_compress_minsz = 2 << dev->l2ad_vdev->vdev_ashift;

	/*
	 * Copy buffers for L2ARC writing.
	 */
	for (try = 0; try <= 3; try++) {
		multilist_sublist_t *mls = l2arc_sublist_lock(try);
		uint64_t passed_sz = 0;

		/*
		 * L2ARC fast warmup.
		 *
		 * Until the ARC is warm and starts to evict, read from the
		 * head of the ARC lists rather than the tail.
		 */
		if (arc_warm == B_FALSE)
			hdr = multilist_sublist_head(mls);
		else
			hdr = multilist_sublist_tail(mls);

		headroom = target_sz * l2arc_headroom;
		if (do_headroom_boost)
			headroom = (headroom * l2arc_headroom_boost) / 100;

		for (; hdr; hdr = hdr_prev) {
			kmutex_t *hash_lock;
			uint64_t buf_sz;
			uint64_t buf_a_sz;

			if (arc_warm == B_FALSE)
				hdr_prev = multilist_sublist_next(mls, hdr);
			else
				hdr_prev = multilist_sublist_prev(mls, hdr);

			hash_lock = HDR_LOCK(hdr);
			if (!mutex_tryenter(hash_lock)) {
				/*
				 * Skip this buffer rather than waiting.
				 */
				continue;
			}

			passed_sz += hdr->b_size;
			if (passed_sz > headroom) {
				/*
				 * Searched too far.
				 */
				mutex_exit(hash_lock);
				break;
			}

			if (!l2arc_write_eligible(guid, hdr)) {
				mutex_exit(hash_lock);
				continue;
			}

			/*
			 * Assume that the buffer is not going to be compressed
			 * and could take more space on disk because of a larger
			 * disk block size.
			 */
			buf_sz = hdr->b_size;
			buf_a_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);

			if ((write_asize + buf_a_sz) > target_sz) {
				full = B_TRUE;
				mutex_exit(hash_lock);
				break;
			}

			if (pio == NULL) {
				/*
				 * Insert a dummy header on the buflist so
				 * l2arc_write_done() can find where the
				 * write buffers begin without searching.
				 */
				mutex_enter(&dev->l2ad_mtx);
				list_insert_head(&dev->l2ad_buflist, head);
				mutex_exit(&dev->l2ad_mtx);

				cb = kmem_alloc(sizeof (l2arc_write_callback_t),
				    KM_SLEEP);
				cb->l2wcb_dev = dev;
				cb->l2wcb_head = head;
				pio = zio_root(spa, l2arc_write_done, cb,
				    ZIO_FLAG_CANFAIL);
			}

			/*
			 * Create and add a new L2ARC header.
			 */
			hdr->b_l2hdr.b_dev = dev;
			arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
			hdr->b_flags |= ARC_FLAG_L2_WRITING;
			/*
			 * Temporarily stash the data buffer in b_tmp_cdata.
			 * The subsequent write step will pick it up from
			 * there. This is because we can't access
			 * b_l1hdr.b_buf without holding the hash_lock,
			 * which we in turn can't access without holding
			 * the ARC list locks (which we want to avoid
			 * during compression/writing).
			 */
			HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF);
			hdr->b_l2hdr.b_asize = hdr->b_size;
			hdr->b_l2hdr.b_hits = 0;
			hdr->b_l1hdr.b_tmp_cdata = hdr->b_l1hdr.b_buf->b_data;

			/*
			 * Explicitly set the b_daddr field to a known
			 * value which means "invalid address". This
			 * enables us to differentiate which stage of
			 * l2arc_write_buffers() the particular header
			 * is in (e.g. this loop, or the one below).
			 * ARC_FLAG_L2_WRITING is not enough to make
			 * this distinction, and we need to know in
			 * order to do proper l2arc vdev accounting in
			 * arc_release() and arc_hdr_destroy().
			 *
			 * Note, we can't use a new flag to distinguish
			 * the two stages because we don't hold the
			 * header's hash_lock below, in the second stage
			 * of this function. Thus, we can't simply
			 * change the b_flags field to denote that the
			 * IO has been sent. We can change the b_daddr
			 * field of the L2 portion, though, since we'll
			 * be holding the l2ad_mtx; which is why we're
			 * using it to denote the header's state change.
			 */
			hdr->b_l2hdr.b_daddr = L2ARC_ADDR_UNSET;
			hdr->b_flags |= ARC_FLAG_HAS_L2HDR;

			mutex_enter(&dev->l2ad_mtx);
			list_insert_head(&dev->l2ad_buflist, hdr);
			mutex_exit(&dev->l2ad_mtx);

			/*
			 * Compute and store the buffer cksum before
			 * writing.  On debug the cksum is verified first.
			 */
			arc_cksum_verify(hdr->b_l1hdr.b_buf);
			arc_cksum_compute(hdr->b_l1hdr.b_buf, B_TRUE);

			mutex_exit(hash_lock);

			write_sz += buf_sz;
			write_asize += buf_a_sz;
		}

		multilist_sublist_unlock(mls);

		if (full == B_TRUE)
			break;
	}
	/* No buffers selected for writing? */
	if (pio == NULL) {
		ASSERT0(write_sz);
		ASSERT(!HDR_HAS_L1HDR(head));
		kmem_cache_free(hdr_l2only_cache, head);
		return (0);
	}

	mutex_enter(&dev->l2ad_mtx);

	/*
	 * Note that elsewhere in this file arcstat_l2_asize
	 * and the used space on l2ad_vdev are updated using b_asize,
	 * which is not necessarily rounded up to the device block size.
	 * To keep accounting consistent we do the same here as well:
	 * stats_size accumulates the sum of b_asize of the written buffers,
	 * while write_asize accumulates the sum of b_asize rounded up
	 * to the device block size.
	 * The latter sum is used only to validate the correctness of the code.
	 */
	stats_size = 0;
	write_asize = 0;

	/*
	 * Now start writing the buffers. We're starting at the write head
	 * and work backwards, retracing the course of the buffer selector
	 * loop above.
	 */
	for (hdr = list_prev(&dev->l2ad_buflist, head); hdr;
	    hdr = list_prev(&dev->l2ad_buflist, hdr)) {
		uint64_t buf_sz;
		uint64_t buf_a_sz;

		/*
		 * We rely on the L1 portion of the header below, so
		 * it's invalid for this header to have been evicted out
		 * of the ghost cache, prior to being written out. The
		 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
		 */
		ASSERT(HDR_HAS_L1HDR(hdr));

		/*
		 * We shouldn't need to lock the buffer here, since we flagged
		 * it as ARC_FLAG_L2_WRITING in the previous step, but we must
		 * take care to only access its L2 cache parameters. In
		 * particular, hdr->l1hdr.b_buf may be invalid by now due to
		 * ARC eviction.
		 */
		hdr->b_l2hdr.b_daddr = dev->l2ad_hand;

		if ((!l2arc_nocompress && HDR_L2COMPRESS(hdr)) &&
		    hdr->b_l2hdr.b_asize >= buf_compress_minsz) {
			if (l2arc_compress_buf(hdr)) {
				/*
				 * If compression succeeded, enable headroom
				 * boost on the next scan cycle.
				 */
				*headroom_boost = B_TRUE;
			}
		}

		/*
		 * Pick up the buffer data we had previously stashed away
		 * (and now potentially also compressed).
		 */
		buf_data = hdr->b_l1hdr.b_tmp_cdata;
		buf_sz = hdr->b_l2hdr.b_asize;

		/*
		 * We need to do this regardless if buf_sz is zero or
		 * not, otherwise, when this l2hdr is evicted we'll
		 * remove a reference that was never added.
		 */
		(void) refcount_add_many(&dev->l2ad_alloc, buf_sz, hdr);

		/* Compression may have squashed the buffer to zero length. */
		if (buf_sz != 0) {

			wzio = zio_write_phys(pio, dev->l2ad_vdev,
			    dev->l2ad_hand, buf_sz, buf_data, ZIO_CHECKSUM_OFF,
			    NULL, NULL, ZIO_PRIORITY_ASYNC_WRITE,
			    ZIO_FLAG_CANFAIL, B_FALSE);

			DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev,
			    zio_t *, wzio);
			(void) zio_nowait(wzio);

			stats_size += buf_sz;

			/*
			 * Keep the clock hand suitably device-aligned.
			 */
			buf_a_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);
			write_asize += buf_a_sz;
			dev->l2ad_hand += buf_a_sz;
		}
	}

	mutex_exit(&dev->l2ad_mtx);

	ASSERT3U(write_asize, <=, target_sz);
	ARCSTAT_BUMP(arcstat_l2_writes_sent);
	ARCSTAT_INCR(arcstat_l2_write_bytes, write_asize);
	ARCSTAT_INCR(arcstat_l2_size, write_sz);
	ARCSTAT_INCR(arcstat_l2_asize, stats_size);
	vdev_space_update(dev->l2ad_vdev, stats_size, 0, 0);

	/*
	 * Bump device hand to the device start if it is approaching the end.
	 * l2arc_evict() will already have evicted ahead for this case.
	 */
	if (dev->l2ad_hand >= (dev->l2ad_end - target_sz)) {
		dev->l2ad_hand = dev->l2ad_start;
		dev->l2ad_first = B_FALSE;
	}

	dev->l2ad_writing = B_TRUE;
	(void) zio_wait(pio);
	dev->l2ad_writing = B_FALSE;

	return (write_asize);
}
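
/*
 * Illustrative sketch only (hypothetical helper and numbers): the two
 * byte counts maintained by l2arc_write_buffers() above.  stats_size
 * sums the raw b_asize of each buffer written, while write_asize sums
 * the same values rounded up to the device allocation size, which is
 * what advances the write hand.  On a device with ashift 12 (4 KB
 * blocks), a 2560-byte compressed buffer adds 2560 to stats_size but
 * 4096 to write_asize and to the hand.
 */
#if 0
static uint64_t
l2arc_asize_roundup_sketch(uint64_t psize, uint64_t ashift)
{
	uint64_t align = 1ULL << ashift;

	/* Round psize up to the device allocation granularity. */
	return ((psize + align - 1) & ~(align - 1));
}
#endif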
/*
 * Compresses an L2ARC buffer.
 * The data to be compressed must be prefilled in l1hdr.b_tmp_cdata and its
 * size in l2hdr->b_asize. This routine tries to compress the data and
 * depending on the compression result there are three possible outcomes:
 * *) The buffer was incompressible. The original l2hdr contents were left
 *    untouched and are ready for writing to an L2 device.
 * *) The buffer was all-zeros, so there is no need to write it to an L2
 *    device. To indicate this situation b_tmp_cdata is NULL'ed, b_asize is
 *    set to zero and b_compress is set to ZIO_COMPRESS_EMPTY.
 * *) Compression succeeded and b_tmp_cdata was replaced with a temporary
 *    data buffer which holds the compressed data to be written, and b_asize
 *    tells us how much data there is. b_compress is set to the appropriate
 *    compression algorithm. Once writing is done, invoke
 *    l2arc_release_cdata_buf on this l2hdr to free this temporary buffer.
 *
 * Returns B_TRUE if compression succeeded, or B_FALSE if it didn't (the
 * buffer was incompressible).
 */
static boolean_t
l2arc_compress_buf(arc_buf_hdr_t *hdr)
{
	void *cdata;
	size_t csize, len, rounded;
	l2arc_buf_hdr_t *l2hdr;

	ASSERT(HDR_HAS_L2HDR(hdr));

	l2hdr = &hdr->b_l2hdr;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF);
	ASSERT(hdr->b_l1hdr.b_tmp_cdata != NULL);

	len = l2hdr->b_asize;
	cdata = zio_data_buf_alloc(len);
	ASSERT3P(cdata, !=, NULL);
	csize = zio_compress_data(ZIO_COMPRESS_LZ4, hdr->b_l1hdr.b_tmp_cdata,
	    cdata, l2hdr->b_asize);

	rounded = P2ROUNDUP(csize, (size_t)SPA_MINBLOCKSIZE);
	if (rounded > csize) {
		bzero((char *)cdata + csize, rounded - csize);
		csize = rounded;
	}

	if (csize == 0) {
		/* zero block, indicate that there's nothing to write */
		zio_data_buf_free(cdata, len);
		HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_EMPTY);
		l2hdr->b_asize = 0;
		hdr->b_l1hdr.b_tmp_cdata = NULL;
		ARCSTAT_BUMP(arcstat_l2_compress_zeros);
		return (B_TRUE);
	} else if (csize > 0 && csize < len) {
		/*
		 * Compression succeeded, we'll keep the cdata around for
		 * writing and release it afterwards.
		 */
		HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_LZ4);
		l2hdr->b_asize = csize;
		hdr->b_l1hdr.b_tmp_cdata = cdata;
		ARCSTAT_BUMP(arcstat_l2_compress_successes);
		return (B_TRUE);
	} else {
		/*
		 * Compression failed, release the compressed buffer.
		 * l2hdr will be left unmodified.
		 */
		zio_data_buf_free(cdata, len);
		ARCSTAT_BUMP(arcstat_l2_compress_failures);
		return (B_FALSE);
	}
}
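
/*
 * Illustrative sketch only (hypothetical helper): the SPA_MINBLOCKSIZE
 * round-up applied in l2arc_compress_buf() above.  A 700-byte LZ4
 * result is zero-padded to 1024 bytes before the size checks run, so a
 * buffer only counts as successfully compressed if the padded size is
 * still smaller than the original length.
 */
#if 0
static size_t
l2arc_rounded_csize_sketch(size_t csize)
{
	const size_t minblock = 512;	/* SPA_MINBLOCKSIZE */

	/* Round up to the next multiple of the minimum block size. */
	return ((csize + minblock - 1) / minblock * minblock);
}
#endif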
/*
 * Decompresses a zio read back from an l2arc device. On success, the
 * underlying zio's io_data buffer is overwritten by the uncompressed
 * version. On decompression error (corrupt compressed stream), the
 * zio->io_error value is set to signal an I/O error.
 *
 * Please note that the compressed data stream is not checksummed, so
 * if the underlying device is experiencing data corruption, we may feed
 * corrupt data to the decompressor, so the decompressor needs to be
 * able to handle this situation (LZ4 does).
 */
static void
l2arc_decompress_zio(zio_t *zio, arc_buf_hdr_t *hdr, enum zio_compress c)
{
	uint64_t csize;
	void *cdata;

	ASSERT(L2ARC_IS_VALID_COMPRESS(c));

	if (zio->io_error != 0) {
		/*
		 * An io error has occurred, just restore the original io
		 * size in preparation for a main pool read.
		 */
		zio->io_orig_size = zio->io_size = hdr->b_size;
		return;
	}

	if (c == ZIO_COMPRESS_EMPTY) {
		/*
		 * An empty buffer results in a null zio, which means we
		 * need to fill its io_data after we're done restoring the
		 * buffer's contents.
		 */
		ASSERT(hdr->b_l1hdr.b_buf != NULL);
		bzero(hdr->b_l1hdr.b_buf->b_data, hdr->b_size);
		zio->io_data = zio->io_orig_data = hdr->b_l1hdr.b_buf->b_data;
	} else {
		ASSERT(zio->io_data != NULL);
		/*
		 * We copy the compressed data from the start of the arc buffer
		 * (the zio_read will have pulled in only what we need, the
		 * rest is garbage which we will overwrite at decompression)
		 * and then decompress back to the ARC data buffer. This way we
		 * can minimize copying by simply decompressing back over the
		 * original compressed data (rather than decompressing to an
		 * aux buffer and then copying back the uncompressed buffer,
		 * which is likely to be much larger).
		 */
		csize = zio->io_size;
		cdata = zio_data_buf_alloc(csize);
		bcopy(zio->io_data, cdata, csize);
		if (zio_decompress_data(c, cdata, zio->io_data, csize,
		    hdr->b_size) != 0)
			zio->io_error = SET_ERROR(EIO);
		zio_data_buf_free(cdata, csize);
	}

	/* Restore the expected uncompressed IO size. */
	zio->io_orig_size = zio->io_size = hdr->b_size;
}
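
/*
 * Illustrative sketch only (plain C, hypothetical helper and callback
 * type, assumes <string.h>): the copy-aside-then-decompress-in-place
 * pattern used by l2arc_decompress_zio() above.  The compressed bytes
 * sit at the front of a buffer that is already large enough for the
 * uncompressed result, so they are first copied to a scratch buffer
 * and then expanded back over the original, avoiding a second large
 * allocation and copy.
 */
#if 0
typedef int (*decompress_fn_sketch_t)(const void *src, size_t srclen,
    void *dst, size_t dstlen);

static int
decompress_in_place_sketch(void *buf, size_t csize, size_t usize,
    void *scratch, decompress_fn_sketch_t decompress)
{
	/* Save the compressed bytes, then expand over the original. */
	(void) memcpy(scratch, buf, csize);
	return (decompress(scratch, csize, buf, usize));
}
#endif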
/*
 * Releases the temporary b_tmp_cdata buffer in an l2arc header structure.
 * This buffer serves as a temporary holder of compressed data while
 * the buffer entry is being written to an l2arc device. Once that is
 * done, we can dispose of it.
 */
static void
l2arc_release_cdata_buf(arc_buf_hdr_t *hdr)
{
	enum zio_compress comp = HDR_GET_COMPRESS(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(comp == ZIO_COMPRESS_OFF || L2ARC_IS_VALID_COMPRESS(comp));

	if (comp == ZIO_COMPRESS_OFF) {
		/*
		 * In this case, b_tmp_cdata points to the same buffer
		 * as the arc_buf_t's b_data field. We don't want to
		 * free it, since the arc_buf_t will handle that.
		 */
		hdr->b_l1hdr.b_tmp_cdata = NULL;
	} else if (comp == ZIO_COMPRESS_EMPTY) {
		/*
		 * In this case, b_tmp_cdata was compressed to an empty
		 * buffer, thus there's nothing to free and b_tmp_cdata
		 * should have been set to NULL in l2arc_write_buffers().
		 */
		ASSERT3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);
	} else {
		/*
		 * If the data was compressed, then we've allocated a
		 * temporary buffer for it, so now we need to release it.
		 */
		ASSERT(hdr->b_l1hdr.b_tmp_cdata != NULL);
		zio_data_buf_free(hdr->b_l1hdr.b_tmp_cdata,
		    hdr->b_size);
		hdr->b_l1hdr.b_tmp_cdata = NULL;
	}
}
/*
 * This thread feeds the L2ARC at regular intervals.  This is the beating
 * heart of the L2ARC.
 */
static void
l2arc_feed_thread(void)
{
	callb_cpr_t cpr;
	l2arc_dev_t *dev;
	spa_t *spa;
	uint64_t size, wrote;
	clock_t begin, next = ddi_get_lbolt();
	boolean_t headroom_boost = B_FALSE;
	fstrans_cookie_t cookie;

	CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG);

	mutex_enter(&l2arc_feed_thr_lock);

	cookie = spl_fstrans_mark();
	while (l2arc_thread_exit == 0) {
		CALLB_CPR_SAFE_BEGIN(&cpr);
		(void) cv_timedwait_sig(&l2arc_feed_thr_cv,
		    &l2arc_feed_thr_lock, next);
		CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock);
		next = ddi_get_lbolt() + hz;

		/*
		 * Quick check for L2ARC devices.
		 */
		mutex_enter(&l2arc_dev_mtx);
		if (l2arc_ndev == 0) {
			mutex_exit(&l2arc_dev_mtx);
			continue;
		}
		mutex_exit(&l2arc_dev_mtx);
		begin = ddi_get_lbolt();

		/*
		 * This selects the next l2arc device to write to, and in
		 * doing so the next spa to feed from: dev->l2ad_spa.  This
		 * will return NULL if there are now no l2arc devices or if
		 * they are all faulted.
		 *
		 * If a device is returned, its spa's config lock is also
		 * held to prevent device removal.  l2arc_dev_get_next()
		 * will grab and release l2arc_dev_mtx.
		 */
		if ((dev = l2arc_dev_get_next()) == NULL)
			continue;

		spa = dev->l2ad_spa;
		ASSERT(spa != NULL);

		/*
		 * If the pool is read-only then force the feed thread to
		 * sleep a little longer.
		 */
		if (!spa_writeable(spa)) {
			next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz;
			spa_config_exit(spa, SCL_L2ARC, dev);
			continue;
		}

		/*
		 * Avoid contributing to memory pressure.
		 */
		if (arc_no_grow) {
			ARCSTAT_BUMP(arcstat_l2_abort_lowmem);
			spa_config_exit(spa, SCL_L2ARC, dev);
			continue;
		}

		ARCSTAT_BUMP(arcstat_l2_feeds);

		size = l2arc_write_size();

		/*
		 * Evict L2ARC buffers that will be overwritten.
		 */
		l2arc_evict(dev, size, B_FALSE);

		/*
		 * Write ARC buffers.
		 */
		wrote = l2arc_write_buffers(spa, dev, size, &headroom_boost);

		/*
		 * Calculate interval between writes.
		 */
		next = l2arc_write_interval(begin, size, wrote);
		spa_config_exit(spa, SCL_L2ARC, dev);
	}
	spl_fstrans_unmark(cookie);

	l2arc_thread_exit = 0;
	cv_broadcast(&l2arc_feed_thr_cv);
	CALLB_CPR_EXIT(&cpr);		/* drops l2arc_feed_thr_lock */
	thread_exit();
}
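
/*
 * Illustrative sketch only (hypothetical helper): the feed cadence
 * visible in l2arc_feed_thread() above.  By default the thread re-arms
 * its wakeup one second out (ddi_get_lbolt() + hz), while a read-only
 * pool pushes the next wakeup out to 5 * l2arc_feed_secs seconds; the
 * actual interval after a successful write comes from
 * l2arc_write_interval().
 */
#if 0
static long
l2arc_next_wakeup_sketch(long now_ticks, long hz_ticks,
    int pool_writeable, long feed_secs)
{
	if (!pool_writeable)
		return (now_ticks + 5 * feed_secs * hz_ticks);
	return (now_ticks + hz_ticks);	/* roughly once a second */
}
#endif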
boolean_t
l2arc_vdev_present(vdev_t *vd)
{
	l2arc_dev_t *dev;

	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev != NULL;
	    dev = list_next(l2arc_dev_list, dev)) {
		if (dev->l2ad_vdev == vd)
			break;
	}
	mutex_exit(&l2arc_dev_mtx);

	return (dev != NULL);
}
/*
 * Add a vdev for use by the L2ARC.  By this point the spa has already
 * validated the vdev and opened it.
 */
void
l2arc_add_vdev(spa_t *spa, vdev_t *vd)
{
	l2arc_dev_t *adddev;

	ASSERT(!l2arc_vdev_present(vd));

	/*
	 * Create a new l2arc device entry.
	 */
	adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
	adddev->l2ad_spa = spa;
	adddev->l2ad_vdev = vd;
	adddev->l2ad_start = VDEV_LABEL_START_SIZE;
	adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
	adddev->l2ad_hand = adddev->l2ad_start;
	adddev->l2ad_first = B_TRUE;
	adddev->l2ad_writing = B_FALSE;
	list_link_init(&adddev->l2ad_node);

	mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL);
	/*
	 * This is a list of all ARC buffers that are still valid on the
	 * device.
	 */
	list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));

	vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
	refcount_create(&adddev->l2ad_alloc);

	/*
	 * Add device to global list
	 */
	mutex_enter(&l2arc_dev_mtx);
	list_insert_head(l2arc_dev_list, adddev);
	atomic_inc_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);
}
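
/*
 * Illustrative sketch only (hypothetical helper): the circular region
 * set up by l2arc_add_vdev() above.  The write hand starts just past
 * the leading vdev labels (VDEV_LABEL_START_SIZE) and wraps at that
 * offset plus vdev_get_min_asize(vd), so a cache device contributes
 * min_asize bytes of log space.
 */
#if 0
static uint64_t
l2arc_usable_space_sketch(uint64_t label_reserve, uint64_t min_asize)
{
	uint64_t start = label_reserve;			/* l2ad_start */
	uint64_t end = label_reserve + min_asize;	/* l2ad_end */

	return (end - start);	/* bytes available to the rotor */
}
#endif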
/*
 * Remove a vdev from the L2ARC.
 */
void
l2arc_remove_vdev(vdev_t *vd)
{
	l2arc_dev_t *dev, *nextdev, *remdev = NULL;

	/*
	 * Find the device by vdev
	 */
	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev; dev = nextdev) {
		nextdev = list_next(l2arc_dev_list, dev);
		if (vd == dev->l2ad_vdev) {
			remdev = dev;
			break;
		}
	}
	ASSERT(remdev != NULL);

	/*
	 * Remove device from global list
	 */
	list_remove(l2arc_dev_list, remdev);
	l2arc_dev_last = NULL;		/* may have been invalidated */
	atomic_dec_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);

	/*
	 * Clear all buflists and ARC references.  L2ARC device flush.
	 */
	l2arc_evict(remdev, 0, B_TRUE);
	list_destroy(&remdev->l2ad_buflist);
	mutex_destroy(&remdev->l2ad_mtx);
	refcount_destroy(&remdev->l2ad_alloc);
	kmem_free(remdev, sizeof (l2arc_dev_t));
}
void
l2arc_init(void)
{
	l2arc_thread_exit = 0;
	l2arc_ndev = 0;
	l2arc_writes_sent = 0;
	l2arc_writes_done = 0;

	mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL);
	mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL);
	mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL);

	l2arc_dev_list = &L2ARC_dev_list;
	l2arc_free_on_write = &L2ARC_free_on_write;
	list_create(l2arc_dev_list, sizeof (l2arc_dev_t),
	    offsetof(l2arc_dev_t, l2ad_node));
	list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t),
	    offsetof(l2arc_data_free_t, l2df_list_node));
}
void
l2arc_fini(void)
{
	/*
	 * This is called from dmu_fini(), which is called from spa_fini();
	 * Because of this, we can assume that all l2arc devices have
	 * already been removed when the pools themselves were removed.
	 */

	l2arc_do_free_on_write();

	mutex_destroy(&l2arc_feed_thr_lock);
	cv_destroy(&l2arc_feed_thr_cv);
	mutex_destroy(&l2arc_dev_mtx);
	mutex_destroy(&l2arc_free_on_write_mtx);

	list_destroy(l2arc_dev_list);
	list_destroy(l2arc_free_on_write);
}
void
l2arc_start(void)
{
	if (!(spa_mode_global & FWRITE))
		return;

	(void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0,
	    TS_RUN, minclsyspri);
}
void
l2arc_stop(void)
{
	if (!(spa_mode_global & FWRITE))
		return;

	mutex_enter(&l2arc_feed_thr_lock);
	cv_signal(&l2arc_feed_thr_cv);	/* kick thread out of startup */
	l2arc_thread_exit = 1;
	while (l2arc_thread_exit != 0)
		cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
	mutex_exit(&l2arc_feed_thr_lock);
}
#if defined(_KERNEL) && defined(HAVE_SPL)
EXPORT_SYMBOL(arc_buf_size);
EXPORT_SYMBOL(arc_write);
EXPORT_SYMBOL(arc_read);
EXPORT_SYMBOL(arc_buf_remove_ref);
EXPORT_SYMBOL(arc_buf_info);
EXPORT_SYMBOL(arc_getbuf_func);
EXPORT_SYMBOL(arc_add_prune_callback);
EXPORT_SYMBOL(arc_remove_prune_callback);

module_param(zfs_arc_min, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_min, "Min arc size");

module_param(zfs_arc_max, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_max, "Max arc size");

module_param(zfs_arc_meta_limit, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_limit, "Meta limit for arc size");

module_param(zfs_arc_meta_min, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_min, "Min arc metadata");

module_param(zfs_arc_meta_prune, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_prune, "Meta objects to scan for prune");

module_param(zfs_arc_meta_adjust_restarts, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_adjust_restarts,
	"Limit number of restarts in arc_adjust_meta");

module_param(zfs_arc_meta_strategy, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_strategy, "Meta reclaim strategy");

module_param(zfs_arc_grow_retry, int, 0644);
MODULE_PARM_DESC(zfs_arc_grow_retry, "Seconds before growing arc size");

module_param(zfs_arc_p_aggressive_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_aggressive_disable, "disable aggressive arc_p grow");

module_param(zfs_arc_p_dampener_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_dampener_disable, "disable arc_p adapt dampener");

module_param(zfs_arc_shrink_shift, int, 0644);
MODULE_PARM_DESC(zfs_arc_shrink_shift, "log2(fraction of arc to reclaim)");

module_param(zfs_disable_dup_eviction, int, 0644);
MODULE_PARM_DESC(zfs_disable_dup_eviction, "disable duplicate buffer eviction");

module_param(zfs_arc_average_blocksize, int, 0444);
MODULE_PARM_DESC(zfs_arc_average_blocksize, "Target average block size");

module_param(zfs_arc_memory_throttle_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_memory_throttle_disable, "disable memory throttle");

module_param(zfs_arc_min_prefetch_lifespan, int, 0644);
MODULE_PARM_DESC(zfs_arc_min_prefetch_lifespan, "Min life of prefetch block");

module_param(zfs_arc_num_sublists_per_state, int, 0644);
MODULE_PARM_DESC(zfs_arc_num_sublists_per_state,
	"Number of sublists used in each of the ARC state lists");

module_param(l2arc_write_max, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_max, "Max write bytes per interval");

module_param(l2arc_write_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_boost, "Extra write bytes during device warmup");

module_param(l2arc_headroom, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom, "Number of max device writes to precache");

module_param(l2arc_headroom_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom_boost, "Compressed l2arc_headroom multiplier");

module_param(l2arc_feed_secs, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_secs, "Seconds between L2ARC writing");

module_param(l2arc_feed_min_ms, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_min_ms, "Min feed interval in milliseconds");

module_param(l2arc_noprefetch, int, 0644);
MODULE_PARM_DESC(l2arc_noprefetch, "Skip caching prefetched buffers");

module_param(l2arc_nocompress, int, 0644);
MODULE_PARM_DESC(l2arc_nocompress, "Skip compressing L2ARC buffers");

module_param(l2arc_feed_again, int, 0644);
MODULE_PARM_DESC(l2arc_feed_again, "Turbo L2ARC warmup");

module_param(l2arc_norw, int, 0644);
MODULE_PARM_DESC(l2arc_norw, "No reads during writes");

#endif