/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, Joyent, Inc. All rights reserved.
 * Copyright (c) 2011, 2016 by Delphix. All rights reserved.
 * Copyright (c) 2014 by Saso Kiselkov. All rights reserved.
 * Copyright 2014 Nexenta Systems, Inc. All rights reserved.
 */
/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory.  This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about.  Our cache is not so simple.  At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them.  Blocks are only evictable
 * when there are no external references active.  This makes
 * eviction far more problematic:  we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space.  In these circumstances we are unable to adjust the cache
 * size.  To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss.  Our model has a variable sized cache.  It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size.  So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict.  In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes).  We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also:  "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */
/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists.  The arc_read() interface
 * uses method 1, while the internal arc algorithms for
 * adjusting the cache use method 2.  We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table.  It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state.  When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use: mutex_tryenter() to avoid deadlock.  Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * Arc buffers may have an associated eviction callback function.
 * This function will be invoked prior to removing the buffer (e.g.
 * in arc_do_user_evicts()).  Note however that the data associated
 * with the buffer may be evicted prior to the callback.  The callback
 * must be made with *no locks held* (to prevent deadlock).  Additionally,
 * the users of callbacks must ensure that their private data is
 * protected from simultaneous callbacks from arc_clear_callback()
 * and arc_do_user_evicts().
 *
 * It is also possible to register a callback which is run when the
 * arc_meta_limit is reached and no buffers can be safely evicted.  In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed and the arc_meta_limit honored.  For example,
 * when using the ZPL each dentry holds a reference on a znode.  These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 *	- L2ARC buflist creation
 *	- L2ARC buflist eviction
 *	- L2ARC write completion, which walks L2ARC buflists
 *	- ARC header destruction, as it removes from L2ARC buflists
 *	- ARC header release, as it removes from L2ARC buflists
 */
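/*
 * For illustration only, a minimal sketch of the lookup pattern these rules
 * imply (hypothetical caller; error handling elided).  buf_hash_find()
 * returns with the hash lock held whenever it returns a header:
 *
 *	kmutex_t *hash_lock;
 *	arc_buf_hdr_t *hdr = buf_hash_find(guid, bp, &hash_lock);
 *	if (hdr != NULL) {
 *		...fields of *hdr are stable while hash_lock is held...
 *		mutex_exit(hash_lock);
 *	}
 */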
#include <sys/zio_compress.h>
#include <sys/zfs_context.h>
#include <sys/refcount.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/dsl_pool.h>
#include <sys/multilist.h>
#ifdef _KERNEL
#include <sys/vmsystm.h>
#include <sys/fs/swapnode.h>
#include <linux/mm_compat.h>
#endif
#include <sys/callb.h>
#include <sys/kstat.h>
#include <sys/dmu_tx.h>
#include <zfs_fletcher.h>
#include <sys/arc_impl.h>
#include <sys/trace_arc.h>
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
boolean_t arc_watch = B_FALSE;

static kmutex_t arc_reclaim_lock;
static kcondvar_t arc_reclaim_thread_cv;
static boolean_t arc_reclaim_thread_exit;
static kcondvar_t arc_reclaim_waiters_cv;

static kmutex_t arc_user_evicts_lock;
static kcondvar_t arc_user_evicts_cv;
static boolean_t arc_user_evicts_thread_exit;
/*
 * The number of headers to evict in arc_evict_state_impl() before
 * dropping the sublist lock and evicting from another sublist. A lower
 * value means we're more likely to evict the "correct" header (i.e. the
 * oldest header in the arc state), but comes with higher overhead
 * (i.e. more invocations of arc_evict_state_impl()).
 */
int zfs_arc_evict_batch_limit = 10;

/*
 * The number of sublists used for each of the arc state lists. If this
 * is not set to a suitable value by the user, it will be configured to
 * the number of CPUs on the system in arc_init().
 */
int zfs_arc_num_sublists_per_state = 0;

/* number of seconds before growing cache again */
static int arc_grow_retry = 5;

/* shift of arc_c for calculating overflow limit in arc_get_data_buf */
int zfs_arc_overflow_shift = 8;

/* shift of arc_c for calculating both min and max arc_p */
static int arc_p_min_shift = 4;

/* log2(fraction of arc to reclaim) */
static int arc_shrink_shift = 7;
/*
 * log2(fraction of ARC which must be free to allow growing).
 * I.e. if there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
 *
 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
 * we will still not allow it to grow.
 */
int arc_no_grow_shift = 5;

/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
static int arc_min_prefetch_lifespan;

/*
 * If this percent of memory is free, don't throttle.
 */
int arc_lotsfree_percent = 10;

/*
 * The arc has filled available memory and has now warmed up.
 */
static boolean_t arc_warm;
/*
 * These tunables are for performance analysis.
 */
unsigned long zfs_arc_max = 0;
unsigned long zfs_arc_min = 0;
unsigned long zfs_arc_meta_limit = 0;
unsigned long zfs_arc_meta_min = 0;
unsigned long zfs_arc_dnode_limit = 0;
unsigned long zfs_arc_dnode_reduce_percent = 10;
int zfs_arc_grow_retry = 0;
int zfs_arc_shrink_shift = 0;
int zfs_arc_p_min_shift = 0;
int zfs_disable_dup_eviction = 0;
int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */

/*
 * These tunables are Linux specific
 */
unsigned long zfs_arc_sys_free = 0;
int zfs_arc_min_prefetch_lifespan = 0;
int zfs_arc_p_aggressive_disable = 1;
int zfs_arc_p_dampener_disable = 1;
int zfs_arc_meta_prune = 10000;
int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
int zfs_arc_meta_adjust_restarts = 4096;
int zfs_arc_lotsfree_percent = 10;
static arc_state_t ARC_anon;
static arc_state_t ARC_mru;
static arc_state_t ARC_mru_ghost;
static arc_state_t ARC_mfu;
static arc_state_t ARC_mfu_ghost;
static arc_state_t ARC_l2c_only;
typedef struct arc_stats {
    kstat_named_t arcstat_hits;
    kstat_named_t arcstat_misses;
    kstat_named_t arcstat_demand_data_hits;
    kstat_named_t arcstat_demand_data_misses;
    kstat_named_t arcstat_demand_metadata_hits;
    kstat_named_t arcstat_demand_metadata_misses;
    kstat_named_t arcstat_prefetch_data_hits;
    kstat_named_t arcstat_prefetch_data_misses;
    kstat_named_t arcstat_prefetch_metadata_hits;
    kstat_named_t arcstat_prefetch_metadata_misses;
    kstat_named_t arcstat_mru_hits;
    kstat_named_t arcstat_mru_ghost_hits;
    kstat_named_t arcstat_mfu_hits;
    kstat_named_t arcstat_mfu_ghost_hits;
    kstat_named_t arcstat_deleted;
    /*
     * Number of buffers that could not be evicted because the hash lock
     * was held by another thread.  The lock may not necessarily be held
     * by something using the same buffer, since hash locks are shared
     * by multiple buffers.
     */
    kstat_named_t arcstat_mutex_miss;
    /*
     * Number of buffers skipped because they have I/O in progress, are
     * indirect prefetch buffers that have not lived long enough, or are
     * not from the spa we're trying to evict from.
     */
    kstat_named_t arcstat_evict_skip;
    /*
     * Number of times arc_evict_state() was unable to evict enough
     * buffers to reach its target amount.
     */
    kstat_named_t arcstat_evict_not_enough;
    kstat_named_t arcstat_evict_l2_cached;
    kstat_named_t arcstat_evict_l2_eligible;
    kstat_named_t arcstat_evict_l2_ineligible;
    kstat_named_t arcstat_evict_l2_skip;
    kstat_named_t arcstat_hash_elements;
    kstat_named_t arcstat_hash_elements_max;
    kstat_named_t arcstat_hash_collisions;
    kstat_named_t arcstat_hash_chains;
    kstat_named_t arcstat_hash_chain_max;
    kstat_named_t arcstat_p;
    kstat_named_t arcstat_c;
    kstat_named_t arcstat_c_min;
    kstat_named_t arcstat_c_max;
    kstat_named_t arcstat_size;
    /*
     * Number of bytes consumed by internal ARC structures necessary
     * for tracking purposes; these structures are not actually
     * backed by ARC buffers. This includes arc_buf_hdr_t structures
     * (allocated via arc_buf_hdr_t_full and arc_buf_hdr_t_l2only
     * caches), and arc_buf_t structures (allocated via arc_buf_t
     * cache).
     */
    kstat_named_t arcstat_hdr_size;
    /*
     * Number of bytes consumed by ARC buffers of type equal to
     * ARC_BUFC_DATA. This is generally consumed by buffers backing
     * on disk user data (e.g. plain file contents).
     */
    kstat_named_t arcstat_data_size;
    /*
     * Number of bytes consumed by ARC buffers of type equal to
     * ARC_BUFC_METADATA. This is generally consumed by buffers
     * backing on disk data that is used for internal ZFS
     * structures (e.g. ZAP, dnode, indirect blocks, etc).
     */
    kstat_named_t arcstat_metadata_size;
    /*
     * Number of bytes consumed by dmu_buf_impl_t objects.
     */
    kstat_named_t arcstat_dbuf_size;
    /*
     * Number of bytes consumed by dnode_t objects.
     */
    kstat_named_t arcstat_dnode_size;
    /*
     * Number of bytes consumed by bonus buffers.
     */
    kstat_named_t arcstat_bonus_size;
    /*
     * Total number of bytes consumed by ARC buffers residing in the
     * arc_anon state. This includes *all* buffers in the arc_anon
     * state; e.g. data, metadata, evictable, and unevictable buffers
     * are all included in this value.
     */
    kstat_named_t arcstat_anon_size;
    /*
     * Number of bytes consumed by ARC buffers that meet the
     * following criteria: backing buffers of type ARC_BUFC_DATA,
     * residing in the arc_anon state, and are eligible for eviction
     * (e.g. have no outstanding holds on the buffer).
     */
    kstat_named_t arcstat_anon_evictable_data;
    /*
     * Number of bytes consumed by ARC buffers that meet the
     * following criteria: backing buffers of type ARC_BUFC_METADATA,
     * residing in the arc_anon state, and are eligible for eviction
     * (e.g. have no outstanding holds on the buffer).
     */
    kstat_named_t arcstat_anon_evictable_metadata;
    /*
     * Total number of bytes consumed by ARC buffers residing in the
     * arc_mru state. This includes *all* buffers in the arc_mru
     * state; e.g. data, metadata, evictable, and unevictable buffers
     * are all included in this value.
     */
    kstat_named_t arcstat_mru_size;
    /*
     * Number of bytes consumed by ARC buffers that meet the
     * following criteria: backing buffers of type ARC_BUFC_DATA,
     * residing in the arc_mru state, and are eligible for eviction
     * (e.g. have no outstanding holds on the buffer).
     */
    kstat_named_t arcstat_mru_evictable_data;
    /*
     * Number of bytes consumed by ARC buffers that meet the
     * following criteria: backing buffers of type ARC_BUFC_METADATA,
     * residing in the arc_mru state, and are eligible for eviction
     * (e.g. have no outstanding holds on the buffer).
     */
    kstat_named_t arcstat_mru_evictable_metadata;
    /*
     * Total number of bytes that *would have been* consumed by ARC
     * buffers in the arc_mru_ghost state. The key thing to note
     * here, is the fact that this size doesn't actually indicate
     * RAM consumption. The ghost lists only consist of headers and
     * don't actually have ARC buffers linked off of these headers.
     * Thus, *if* the headers had associated ARC buffers, these
     * buffers *would have* consumed this number of bytes.
     */
    kstat_named_t arcstat_mru_ghost_size;
    /*
     * Number of bytes that *would have been* consumed by ARC
     * buffers that are eligible for eviction, of type
     * ARC_BUFC_DATA, and linked off the arc_mru_ghost state.
     */
    kstat_named_t arcstat_mru_ghost_evictable_data;
    /*
     * Number of bytes that *would have been* consumed by ARC
     * buffers that are eligible for eviction, of type
     * ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
     */
    kstat_named_t arcstat_mru_ghost_evictable_metadata;
    /*
     * Total number of bytes consumed by ARC buffers residing in the
     * arc_mfu state. This includes *all* buffers in the arc_mfu
     * state; e.g. data, metadata, evictable, and unevictable buffers
     * are all included in this value.
     */
    kstat_named_t arcstat_mfu_size;
    /*
     * Number of bytes consumed by ARC buffers that are eligible for
     * eviction, of type ARC_BUFC_DATA, and reside in the arc_mfu
     * state.
     */
    kstat_named_t arcstat_mfu_evictable_data;
    /*
     * Number of bytes consumed by ARC buffers that are eligible for
     * eviction, of type ARC_BUFC_METADATA, and reside in the
     * arc_mfu state.
     */
    kstat_named_t arcstat_mfu_evictable_metadata;
    /*
     * Total number of bytes that *would have been* consumed by ARC
     * buffers in the arc_mfu_ghost state. See the comment above
     * arcstat_mru_ghost_size for more details.
     */
    kstat_named_t arcstat_mfu_ghost_size;
    /*
     * Number of bytes that *would have been* consumed by ARC
     * buffers that are eligible for eviction, of type
     * ARC_BUFC_DATA, and linked off the arc_mfu_ghost state.
     */
    kstat_named_t arcstat_mfu_ghost_evictable_data;
    /*
     * Number of bytes that *would have been* consumed by ARC
     * buffers that are eligible for eviction, of type
     * ARC_BUFC_METADATA, and linked off the arc_mfu_ghost state.
     */
    kstat_named_t arcstat_mfu_ghost_evictable_metadata;
    kstat_named_t arcstat_l2_hits;
    kstat_named_t arcstat_l2_misses;
    kstat_named_t arcstat_l2_feeds;
    kstat_named_t arcstat_l2_rw_clash;
    kstat_named_t arcstat_l2_read_bytes;
    kstat_named_t arcstat_l2_write_bytes;
    kstat_named_t arcstat_l2_writes_sent;
    kstat_named_t arcstat_l2_writes_done;
    kstat_named_t arcstat_l2_writes_error;
    kstat_named_t arcstat_l2_writes_lock_retry;
    kstat_named_t arcstat_l2_writes_skip_toobig;
    kstat_named_t arcstat_l2_evict_lock_retry;
    kstat_named_t arcstat_l2_evict_reading;
    kstat_named_t arcstat_l2_evict_l1cached;
    kstat_named_t arcstat_l2_free_on_write;
    kstat_named_t arcstat_l2_cdata_free_on_write;
    kstat_named_t arcstat_l2_abort_lowmem;
    kstat_named_t arcstat_l2_cksum_bad;
    kstat_named_t arcstat_l2_io_error;
    kstat_named_t arcstat_l2_size;
    kstat_named_t arcstat_l2_asize;
    kstat_named_t arcstat_l2_hdr_size;
    kstat_named_t arcstat_l2_compress_successes;
    kstat_named_t arcstat_l2_compress_zeros;
    kstat_named_t arcstat_l2_compress_failures;
    kstat_named_t arcstat_memory_throttle_count;
    kstat_named_t arcstat_duplicate_buffers;
    kstat_named_t arcstat_duplicate_buffers_size;
    kstat_named_t arcstat_duplicate_reads;
    kstat_named_t arcstat_memory_direct_count;
    kstat_named_t arcstat_memory_indirect_count;
    kstat_named_t arcstat_no_grow;
    kstat_named_t arcstat_tempreserve;
    kstat_named_t arcstat_loaned_bytes;
    kstat_named_t arcstat_prune;
    kstat_named_t arcstat_meta_used;
    kstat_named_t arcstat_meta_limit;
    kstat_named_t arcstat_dnode_limit;
    kstat_named_t arcstat_meta_max;
    kstat_named_t arcstat_meta_min;
    kstat_named_t arcstat_sync_wait_for_async;
    kstat_named_t arcstat_demand_hit_predictive_prefetch;
    kstat_named_t arcstat_need_free;
    kstat_named_t arcstat_sys_free;
} arc_stats_t;
static arc_stats_t arc_stats = {
    { "hits", KSTAT_DATA_UINT64 },
    { "misses", KSTAT_DATA_UINT64 },
    { "demand_data_hits", KSTAT_DATA_UINT64 },
    { "demand_data_misses", KSTAT_DATA_UINT64 },
    { "demand_metadata_hits", KSTAT_DATA_UINT64 },
    { "demand_metadata_misses", KSTAT_DATA_UINT64 },
    { "prefetch_data_hits", KSTAT_DATA_UINT64 },
    { "prefetch_data_misses", KSTAT_DATA_UINT64 },
    { "prefetch_metadata_hits", KSTAT_DATA_UINT64 },
    { "prefetch_metadata_misses", KSTAT_DATA_UINT64 },
    { "mru_hits", KSTAT_DATA_UINT64 },
    { "mru_ghost_hits", KSTAT_DATA_UINT64 },
    { "mfu_hits", KSTAT_DATA_UINT64 },
    { "mfu_ghost_hits", KSTAT_DATA_UINT64 },
    { "deleted", KSTAT_DATA_UINT64 },
    { "mutex_miss", KSTAT_DATA_UINT64 },
    { "evict_skip", KSTAT_DATA_UINT64 },
    { "evict_not_enough", KSTAT_DATA_UINT64 },
    { "evict_l2_cached", KSTAT_DATA_UINT64 },
    { "evict_l2_eligible", KSTAT_DATA_UINT64 },
    { "evict_l2_ineligible", KSTAT_DATA_UINT64 },
    { "evict_l2_skip", KSTAT_DATA_UINT64 },
    { "hash_elements", KSTAT_DATA_UINT64 },
    { "hash_elements_max", KSTAT_DATA_UINT64 },
    { "hash_collisions", KSTAT_DATA_UINT64 },
    { "hash_chains", KSTAT_DATA_UINT64 },
    { "hash_chain_max", KSTAT_DATA_UINT64 },
    { "p", KSTAT_DATA_UINT64 },
    { "c", KSTAT_DATA_UINT64 },
    { "c_min", KSTAT_DATA_UINT64 },
    { "c_max", KSTAT_DATA_UINT64 },
    { "size", KSTAT_DATA_UINT64 },
    { "hdr_size", KSTAT_DATA_UINT64 },
    { "data_size", KSTAT_DATA_UINT64 },
    { "metadata_size", KSTAT_DATA_UINT64 },
    { "dbuf_size", KSTAT_DATA_UINT64 },
    { "dnode_size", KSTAT_DATA_UINT64 },
    { "bonus_size", KSTAT_DATA_UINT64 },
    { "anon_size", KSTAT_DATA_UINT64 },
    { "anon_evictable_data", KSTAT_DATA_UINT64 },
    { "anon_evictable_metadata", KSTAT_DATA_UINT64 },
    { "mru_size", KSTAT_DATA_UINT64 },
    { "mru_evictable_data", KSTAT_DATA_UINT64 },
    { "mru_evictable_metadata", KSTAT_DATA_UINT64 },
    { "mru_ghost_size", KSTAT_DATA_UINT64 },
    { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 },
    { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
    { "mfu_size", KSTAT_DATA_UINT64 },
    { "mfu_evictable_data", KSTAT_DATA_UINT64 },
    { "mfu_evictable_metadata", KSTAT_DATA_UINT64 },
    { "mfu_ghost_size", KSTAT_DATA_UINT64 },
    { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 },
    { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 },
    { "l2_hits", KSTAT_DATA_UINT64 },
    { "l2_misses", KSTAT_DATA_UINT64 },
    { "l2_feeds", KSTAT_DATA_UINT64 },
    { "l2_rw_clash", KSTAT_DATA_UINT64 },
    { "l2_read_bytes", KSTAT_DATA_UINT64 },
    { "l2_write_bytes", KSTAT_DATA_UINT64 },
    { "l2_writes_sent", KSTAT_DATA_UINT64 },
    { "l2_writes_done", KSTAT_DATA_UINT64 },
    { "l2_writes_error", KSTAT_DATA_UINT64 },
    { "l2_writes_lock_retry", KSTAT_DATA_UINT64 },
    { "l2_writes_skip_toobig", KSTAT_DATA_UINT64 },
    { "l2_evict_lock_retry", KSTAT_DATA_UINT64 },
    { "l2_evict_reading", KSTAT_DATA_UINT64 },
    { "l2_evict_l1cached", KSTAT_DATA_UINT64 },
    { "l2_free_on_write", KSTAT_DATA_UINT64 },
    { "l2_cdata_free_on_write", KSTAT_DATA_UINT64 },
    { "l2_abort_lowmem", KSTAT_DATA_UINT64 },
    { "l2_cksum_bad", KSTAT_DATA_UINT64 },
    { "l2_io_error", KSTAT_DATA_UINT64 },
    { "l2_size", KSTAT_DATA_UINT64 },
    { "l2_asize", KSTAT_DATA_UINT64 },
    { "l2_hdr_size", KSTAT_DATA_UINT64 },
    { "l2_compress_successes", KSTAT_DATA_UINT64 },
    { "l2_compress_zeros", KSTAT_DATA_UINT64 },
    { "l2_compress_failures", KSTAT_DATA_UINT64 },
    { "memory_throttle_count", KSTAT_DATA_UINT64 },
    { "duplicate_buffers", KSTAT_DATA_UINT64 },
    { "duplicate_buffers_size", KSTAT_DATA_UINT64 },
    { "duplicate_reads", KSTAT_DATA_UINT64 },
    { "memory_direct_count", KSTAT_DATA_UINT64 },
    { "memory_indirect_count", KSTAT_DATA_UINT64 },
    { "arc_no_grow", KSTAT_DATA_UINT64 },
    { "arc_tempreserve", KSTAT_DATA_UINT64 },
    { "arc_loaned_bytes", KSTAT_DATA_UINT64 },
    { "arc_prune", KSTAT_DATA_UINT64 },
    { "arc_meta_used", KSTAT_DATA_UINT64 },
    { "arc_meta_limit", KSTAT_DATA_UINT64 },
    { "arc_dnode_limit", KSTAT_DATA_UINT64 },
    { "arc_meta_max", KSTAT_DATA_UINT64 },
    { "arc_meta_min", KSTAT_DATA_UINT64 },
    { "sync_wait_for_async", KSTAT_DATA_UINT64 },
    { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64 },
    { "arc_need_free", KSTAT_DATA_UINT64 },
    { "arc_sys_free", KSTAT_DATA_UINT64 }
};
#define ARCSTAT(stat)	(arc_stats.stat.value.ui64)

#define ARCSTAT_INCR(stat, val) \
    atomic_add_64(&arc_stats.stat.value.ui64, (val))

#define ARCSTAT_BUMP(stat)	ARCSTAT_INCR(stat, 1)
#define ARCSTAT_BUMPDOWN(stat)	ARCSTAT_INCR(stat, -1)

#define ARCSTAT_MAX(stat, val) {					\
    uint64_t m;								\
    while ((val) > (m = arc_stats.stat.value.ui64) &&			\
        (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val))))	\
        continue;							\
}

#define ARCSTAT_MAXSTAT(stat) \
    ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
/*
 * We define a macro to allow ARC hits/misses to be easily broken down by
 * two separate conditions, giving a total of four different subtypes for
 * each of hits and misses (so eight statistics total).
 */
#define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
    if (cond1) {							\
        if (cond2) {							\
            ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat);		\
        } else {							\
            ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat);	\
        }								\
    } else {								\
        if (cond2) {							\
            ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat);	\
        } else {							\
            ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);	\
        }								\
    }
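/*
 * For example, the read-path hit accounting (see arc_buf_add_ref() below)
 * expands to exactly one bump of arcstat_demand_data_hits,
 * arcstat_demand_metadata_hits, arcstat_prefetch_data_hits, or
 * arcstat_prefetch_metadata_hits:
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
 *	    demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
 *	    data, metadata, hits);
 */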
static arc_state_t *arc_anon;
static arc_state_t *arc_mru;
static arc_state_t *arc_mru_ghost;
static arc_state_t *arc_mfu;
static arc_state_t *arc_mfu_ghost;
static arc_state_t *arc_l2c_only;
/*
 * There are several ARC variables that are critical to export as kstats --
 * but we don't want to have to grovel around in the kstat whenever we wish to
 * manipulate them. For these variables, we therefore define them to be in
 * terms of the statistic variable. This assures that we are not introducing
 * the possibility of inconsistency by having shadow copies of the variables,
 * while still allowing the code to be readable.
 */
#define arc_size	ARCSTAT(arcstat_size)	/* actual total arc size */
#define arc_p		ARCSTAT(arcstat_p)	/* target size of MRU */
#define arc_c		ARCSTAT(arcstat_c)	/* target size of cache */
#define arc_c_min	ARCSTAT(arcstat_c_min)	/* min target cache size */
#define arc_c_max	ARCSTAT(arcstat_c_max)	/* max target cache size */
#define arc_no_grow	ARCSTAT(arcstat_no_grow)
#define arc_tempreserve	ARCSTAT(arcstat_tempreserve)
#define arc_loaned_bytes	ARCSTAT(arcstat_loaned_bytes)
#define arc_meta_limit	ARCSTAT(arcstat_meta_limit) /* max size for metadata */
#define arc_dnode_limit	ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
#define arc_meta_min	ARCSTAT(arcstat_meta_min) /* min size for metadata */
#define arc_meta_used	ARCSTAT(arcstat_meta_used) /* size of metadata */
#define arc_meta_max	ARCSTAT(arcstat_meta_max) /* max size of metadata */
#define arc_dbuf_size	ARCSTAT(arcstat_dbuf_size) /* dbuf metadata */
#define arc_dnode_size	ARCSTAT(arcstat_dnode_size) /* dnode metadata */
#define arc_bonus_size	ARCSTAT(arcstat_bonus_size) /* bonus buffer metadata */
#define arc_need_free	ARCSTAT(arcstat_need_free) /* bytes to be freed */
#define arc_sys_free	ARCSTAT(arcstat_sys_free) /* target system free bytes */
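/*
 * E.g. arc_size is not a separate variable; a statement such as
 *
 *	atomic_add_64(&arc_size, space);
 *
 * (as done in arc_space_consume() below) updates the exported kstat value
 * directly.
 */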
#define L2ARC_IS_VALID_COMPRESS(_c_) \
    ((_c_) == ZIO_COMPRESS_LZ4 || (_c_) == ZIO_COMPRESS_EMPTY)

static list_t arc_prune_list;
static kmutex_t arc_prune_mtx;
static taskq_t *arc_prune_taskq;
static arc_buf_t *arc_eviction_list;
static arc_buf_hdr_t arc_eviction_hdr;

#define GHOST_STATE(state)	\
    ((state) == arc_mru_ghost || (state) == arc_mfu_ghost ||	\
    (state) == arc_l2c_only)

#define HDR_IN_HASH_TABLE(hdr)	((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
#define HDR_IO_IN_PROGRESS(hdr)	((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
#define HDR_IO_ERROR(hdr)	((hdr)->b_flags & ARC_FLAG_IO_ERROR)
#define HDR_PREFETCH(hdr)	((hdr)->b_flags & ARC_FLAG_PREFETCH)
#define HDR_FREED_IN_READ(hdr)	((hdr)->b_flags & ARC_FLAG_FREED_IN_READ)
#define HDR_BUF_AVAILABLE(hdr)	((hdr)->b_flags & ARC_FLAG_BUF_AVAILABLE)

#define HDR_L2CACHE(hdr)	((hdr)->b_flags & ARC_FLAG_L2CACHE)
#define HDR_L2COMPRESS(hdr)	((hdr)->b_flags & ARC_FLAG_L2COMPRESS)
#define HDR_L2_READING(hdr)	\
    (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) &&	\
    ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
#define HDR_L2_WRITING(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITING)
#define HDR_L2_EVICTED(hdr)	((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
#define HDR_L2_WRITE_HEAD(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)

#define HDR_ISTYPE_METADATA(hdr)	\
    ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
#define HDR_ISTYPE_DATA(hdr)	(!HDR_ISTYPE_METADATA(hdr))

#define HDR_HAS_L1HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
#define HDR_HAS_L2HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)

#define HDR_FULL_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
#define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
/*
 * Hash table routines
 */

#define HT_LOCK_ALIGN	64
#define HT_LOCK_PAD	(P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))

struct ht_lock {
    kmutex_t	ht_lock;
#ifdef _KERNEL
    unsigned char	pad[HT_LOCK_PAD];
#endif
};

#define BUF_LOCKS 8192
typedef struct buf_hash_table {
    uint64_t ht_mask;
    arc_buf_hdr_t **ht_table;
    struct ht_lock ht_locks[BUF_LOCKS];
} buf_hash_table_t;

static buf_hash_table_t buf_hash_table;

#define BUF_HASH_INDEX(spa, dva, birth) \
    (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
#define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define BUF_HASH_LOCK(idx)	(&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
#define HDR_LOCK(hdr) \
    (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
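/*
 * Illustrative sketch: a header's identity (spa, DVA, birth TXG) hashes to
 * a table index, and the low bits of that index select one of the BUF_LOCKS
 * shared mutexes, so distinct headers may map to the same lock:
 *
 *	uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
 *	kmutex_t *lock = BUF_HASH_LOCK(idx);
 */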
uint64_t zfs_crc64_table[256];
#define L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define L2ARC_HEADROOM		2			/* num of writes */
#define L2ARC_MAX_BLOCK_SIZE	(16 * 1024 * 1024)	/* max compress size */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define L2ARC_HEADROOM_BOOST	200
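/*
 * A worked example of the boost (a sketch of the intent; the actual
 * computation is done in the L2ARC write path): with L2ARC_HEADROOM == 2
 * and L2ARC_HEADROOM_BOOST == 200, a cycle that found compressible buffers
 * will scan ahead 2 * (200 / 100) = 4 write sizes on the next cycle.
 */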
#define L2ARC_FEED_SECS		1		/* caching interval secs */
#define L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * Used to distinguish headers that are being processed by
 * l2arc_write_buffers(), but have yet to be assigned to a l2arc disk
 * address. This can happen when the header is added to the l2arc's list
 * of buffers to write in the first stage of l2arc_write_buffers(), but
 * has not yet been written out which happens in the second stage of
 * l2arc_write_buffers().
 */
#define L2ARC_ADDR_UNSET	((uint64_t)(-1))

#define l2arc_writes_sent	ARCSTAT(arcstat_l2_writes_sent)
#define l2arc_writes_done	ARCSTAT(arcstat_l2_writes_done)
/* L2ARC Performance Tunables */
unsigned long l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
unsigned long l2arc_headroom = L2ARC_HEADROOM;		/* # of dev writes */
unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
unsigned long l2arc_max_block_size = L2ARC_MAX_BLOCK_SIZE;
unsigned long l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_nocompress = B_FALSE;			/* don't compress bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */

static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */
typedef struct l2arc_read_callback {
    arc_buf_t		*l2rcb_buf;		/* read buffer */
    spa_t		*l2rcb_spa;		/* spa */
    blkptr_t		l2rcb_bp;		/* original blkptr */
    zbookmark_phys_t	l2rcb_zb;		/* original bookmark */
    int			l2rcb_flags;		/* original flags */
    enum zio_compress	l2rcb_compress;		/* applied compress */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
    /* protected by l2arc_free_on_write_mtx */
    void		*l2df_data;
    size_t		l2df_size;
    void		(*l2df_func)(void *, size_t);
    list_node_t	l2df_list_node;
} l2arc_data_free_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static void arc_get_data_buf(arc_buf_t *);
static void arc_access(arc_buf_hdr_t *, kmutex_t *);
static boolean_t arc_is_overflowing(void);
static void arc_buf_watch(arc_buf_t *);
static void arc_tuning_update(void);
static void arc_prune_async(int64_t);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);

static boolean_t l2arc_compress_buf(arc_buf_hdr_t *);
static void l2arc_decompress_zio(zio_t *, arc_buf_hdr_t *, enum zio_compress);
static void l2arc_release_cdata_buf(arc_buf_hdr_t *);
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
    uint8_t *vdva = (uint8_t *)dva;
    uint64_t crc = -1ULL;
    int i;

    ASSERT(zfs_crc64_table[128] == ZFS_CRC64_POLY);

    for (i = 0; i < sizeof (dva_t); i++)
        crc = (crc >> 8) ^ zfs_crc64_table[(crc ^ vdva[i]) & 0xFF];

    crc ^= (spa>>8) ^ birth;

    return (crc);
}

#define BUF_EMPTY(buf)						\
    ((buf)->b_dva.dva_word[0] == 0 &&				\
    (buf)->b_dva.dva_word[1] == 0)

#define BUF_EQUAL(spa, dva, birth, buf)				\
    ((buf)->b_dva.dva_word[0] == (dva)->dva_word[0]) &&		\
    ((buf)->b_dva.dva_word[1] == (dva)->dva_word[1]) &&		\
    ((buf)->b_birth == birth) && ((buf)->b_spa == spa)

static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
    hdr->b_dva.dva_word[0] = 0;
    hdr->b_dva.dva_word[1] = 0;
    hdr->b_birth = 0;
}
static arc_buf_hdr_t *
buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
{
    const dva_t *dva = BP_IDENTITY(bp);
    uint64_t birth = BP_PHYSICAL_BIRTH(bp);
    uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
    kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
    arc_buf_hdr_t *hdr;

    mutex_enter(hash_lock);
    for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
        hdr = hdr->b_hash_next) {
        if (BUF_EQUAL(spa, dva, birth, hdr)) {
            *lockp = hash_lock;
            return (hdr);
        }
    }
    mutex_exit(hash_lock);
    *lockp = NULL;
    return (NULL);
}
/*
 * Insert an entry into the hash table.  If there is already an element
 * equal to elem in the hash table, then the already existing element
 * will be returned and the new element will not be inserted.
 * Otherwise returns NULL.
 * If lockp == NULL, the caller is assumed to already hold the hash lock.
 */
static arc_buf_hdr_t *
buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
{
    uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
    kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
    arc_buf_hdr_t *fhdr;
    uint32_t i;

    ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
    ASSERT(hdr->b_birth != 0);
    ASSERT(!HDR_IN_HASH_TABLE(hdr));

    if (lockp != NULL) {
        *lockp = hash_lock;
        mutex_enter(hash_lock);
    } else {
        ASSERT(MUTEX_HELD(hash_lock));
    }

    for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
        fhdr = fhdr->b_hash_next, i++) {
        if (BUF_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
            return (fhdr);
    }

    hdr->b_hash_next = buf_hash_table.ht_table[idx];
    buf_hash_table.ht_table[idx] = hdr;
    hdr->b_flags |= ARC_FLAG_IN_HASH_TABLE;

    /* collect some hash table performance data */
    if (i > 0) {
        ARCSTAT_BUMP(arcstat_hash_collisions);
        if (i == 1)
            ARCSTAT_BUMP(arcstat_hash_chains);

        ARCSTAT_MAX(arcstat_hash_chain_max, i);
    }

    ARCSTAT_BUMP(arcstat_hash_elements);
    ARCSTAT_MAXSTAT(arcstat_hash_elements);

    return (NULL);
}
static void
buf_hash_remove(arc_buf_hdr_t *hdr)
{
    arc_buf_hdr_t *fhdr, **hdrp;
    uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);

    ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx)));
    ASSERT(HDR_IN_HASH_TABLE(hdr));

    hdrp = &buf_hash_table.ht_table[idx];
    while ((fhdr = *hdrp) != hdr) {
        ASSERT(fhdr != NULL);
        hdrp = &fhdr->b_hash_next;
    }
    *hdrp = hdr->b_hash_next;
    hdr->b_hash_next = NULL;
    hdr->b_flags &= ~ARC_FLAG_IN_HASH_TABLE;

    /* collect some hash table performance data */
    ARCSTAT_BUMPDOWN(arcstat_hash_elements);

    if (buf_hash_table.ht_table[idx] &&
        buf_hash_table.ht_table[idx]->b_hash_next == NULL)
        ARCSTAT_BUMPDOWN(arcstat_hash_chains);
}
/*
 * Global data structures and functions for the buf kmem cache.
 */
static kmem_cache_t *hdr_full_cache;
static kmem_cache_t *hdr_l2only_cache;
static kmem_cache_t *buf_cache;

static void
buf_fini(void)
{
    int i;

#if defined(_KERNEL) && defined(HAVE_SPL)
    /*
     * Large allocations which do not require contiguous pages
     * should be using vmem_free() in the linux kernel
     */
    vmem_free(buf_hash_table.ht_table,
        (buf_hash_table.ht_mask + 1) * sizeof (void *));
#else
    kmem_free(buf_hash_table.ht_table,
        (buf_hash_table.ht_mask + 1) * sizeof (void *));
#endif
    for (i = 0; i < BUF_LOCKS; i++)
        mutex_destroy(&buf_hash_table.ht_locks[i].ht_lock);
    kmem_cache_destroy(hdr_full_cache);
    kmem_cache_destroy(hdr_l2only_cache);
    kmem_cache_destroy(buf_cache);
}
/*
 * Constructor callback - called when the cache is empty
 * and a new buf is requested.
 */
/* ARGSUSED */
static int
hdr_full_cons(void *vbuf, void *unused, int kmflag)
{
    arc_buf_hdr_t *hdr = vbuf;

    bzero(hdr, HDR_FULL_SIZE);
    cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL);
    refcount_create(&hdr->b_l1hdr.b_refcnt);
    mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
    list_link_init(&hdr->b_l1hdr.b_arc_node);
    list_link_init(&hdr->b_l2hdr.b_l2node);
    multilist_link_init(&hdr->b_l1hdr.b_arc_node);
    arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS);

    return (0);
}

/* ARGSUSED */
static int
hdr_l2only_cons(void *vbuf, void *unused, int kmflag)
{
    arc_buf_hdr_t *hdr = vbuf;

    bzero(hdr, HDR_L2ONLY_SIZE);
    arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);

    return (0);
}

/* ARGSUSED */
static int
buf_cons(void *vbuf, void *unused, int kmflag)
{
    arc_buf_t *buf = vbuf;

    bzero(buf, sizeof (arc_buf_t));
    mutex_init(&buf->b_evict_lock, NULL, MUTEX_DEFAULT, NULL);
    arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS);

    return (0);
}
/*
 * Destructor callback - called when a cached buf is
 * no longer required.
 */
/* ARGSUSED */
static void
hdr_full_dest(void *vbuf, void *unused)
{
    arc_buf_hdr_t *hdr = vbuf;

    ASSERT(BUF_EMPTY(hdr));
    cv_destroy(&hdr->b_l1hdr.b_cv);
    refcount_destroy(&hdr->b_l1hdr.b_refcnt);
    mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
    ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
    arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
}

/* ARGSUSED */
static void
hdr_l2only_dest(void *vbuf, void *unused)
{
    ASSERTV(arc_buf_hdr_t *hdr = vbuf);

    ASSERT(BUF_EMPTY(hdr));
    arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
}

/* ARGSUSED */
static void
buf_dest(void *vbuf, void *unused)
{
    arc_buf_t *buf = vbuf;

    mutex_destroy(&buf->b_evict_lock);
    arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS);
}
/*
 * Reclaim callback -- invoked when memory is low.
 */
/* ARGSUSED */
static void
hdr_recl(void *unused)
{
    dprintf("hdr_recl called\n");
    /*
     * umem calls the reclaim func when we destroy the buf cache,
     * which is after we do arc_fini().
     */
    if (!arc_dead)
        cv_signal(&arc_reclaim_thread_cv);
}
static void
buf_init(void)
{
    uint64_t *ct;
    uint64_t hsize = 1ULL << 12;
    int i, j;

    /*
     * The hash table is big enough to fill all of physical memory
     * with an average block size of zfs_arc_average_blocksize (default 8K).
     * By default, the table will take up
     * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
     */
    while (hsize * zfs_arc_average_blocksize < physmem * PAGESIZE)
        hsize <<= 1;
retry:
    buf_hash_table.ht_mask = hsize - 1;
#if defined(_KERNEL) && defined(HAVE_SPL)
    /*
     * Large allocations which do not require contiguous pages
     * should be using vmem_alloc() in the linux kernel
     */
    buf_hash_table.ht_table =
        vmem_zalloc(hsize * sizeof (void*), KM_SLEEP);
#else
    buf_hash_table.ht_table =
        kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP);
#endif
    if (buf_hash_table.ht_table == NULL) {
        ASSERT(hsize > (1ULL << 8));
        hsize >>= 1;
        goto retry;
    }

    hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE,
        0, hdr_full_cons, hdr_full_dest, hdr_recl, NULL, NULL, 0);
    hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only",
        HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, hdr_recl,
        NULL, NULL, 0);
    buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t),
        0, buf_cons, buf_dest, NULL, NULL, NULL, 0);

    for (i = 0; i < 256; i++)
        for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--)
            *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY);

    for (i = 0; i < BUF_LOCKS; i++) {
        mutex_init(&buf_hash_table.ht_locks[i].ht_lock,
            NULL, MUTEX_DEFAULT, NULL);
    }
}
/*
 * Transition between the two allocation states for the arc_buf_hdr struct.
 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
 * version is used when a cache buffer is only in the L2ARC in order to reduce
 * memory usage.
 */
static arc_buf_hdr_t *
arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
{
    arc_buf_hdr_t *nhdr;
    l2arc_dev_t *dev;

    ASSERT(HDR_HAS_L2HDR(hdr));
    ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) ||
        (old == hdr_l2only_cache && new == hdr_full_cache));

    dev = hdr->b_l2hdr.b_dev;
    nhdr = kmem_cache_alloc(new, KM_PUSHPAGE);

    ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
    buf_hash_remove(hdr);

    bcopy(hdr, nhdr, HDR_L2ONLY_SIZE);

    if (new == hdr_full_cache) {
        nhdr->b_flags |= ARC_FLAG_HAS_L1HDR;
        /*
         * arc_access and arc_change_state need to be aware that a
         * header has just come out of L2ARC, so we set its state to
         * l2c_only even though it's about to change.
         */
        nhdr->b_l1hdr.b_state = arc_l2c_only;

        /* Verify previous threads set to NULL before freeing */
        ASSERT3P(nhdr->b_l1hdr.b_tmp_cdata, ==, NULL);
    } else {
        ASSERT(hdr->b_l1hdr.b_buf == NULL);
        ASSERT0(hdr->b_l1hdr.b_datacnt);

        /*
         * If we've reached here, we must have been called from
         * arc_evict_hdr(), as such we should have already been
         * removed from any ghost list we were previously on
         * (which protects us from racing with arc_evict_state),
         * thus no locking is needed during this check.
         */
        ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));

        /*
         * A buffer must not be moved into the arc_l2c_only
         * state if it's not finished being written out to the
         * l2arc device. Otherwise, the b_l1hdr.b_tmp_cdata field
         * might try to be accessed, even though it was removed.
         */
        VERIFY(!HDR_L2_WRITING(hdr));
        VERIFY3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);

        nhdr->b_flags &= ~ARC_FLAG_HAS_L1HDR;
    }
    /*
     * The header has been reallocated so we need to re-insert it into any
     * lists it was on.
     */
    (void) buf_hash_insert(nhdr, NULL);

    ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node));

    mutex_enter(&dev->l2ad_mtx);

    /*
     * We must place the realloc'ed header back into the list at
     * the same spot. Otherwise, if it's placed earlier in the list,
     * l2arc_write_buffers() could find it during the function's
     * write phase, and try to write it out to the l2arc.
     */
    list_insert_after(&dev->l2ad_buflist, hdr, nhdr);
    list_remove(&dev->l2ad_buflist, hdr);

    mutex_exit(&dev->l2ad_mtx);

    /*
     * Since we're using the pointer address as the tag when
     * incrementing and decrementing the l2ad_alloc refcount, we
     * must remove the old pointer (that we're about to destroy) and
     * add the new pointer to the refcount. Otherwise we'd remove
     * the wrong pointer address when calling arc_hdr_destroy() later.
     */
    (void) refcount_remove_many(&dev->l2ad_alloc,
        hdr->b_l2hdr.b_asize, hdr);
    (void) refcount_add_many(&dev->l2ad_alloc,
        nhdr->b_l2hdr.b_asize, nhdr);

    buf_discard_identity(hdr);
    hdr->b_freeze_cksum = NULL;
    kmem_cache_free(old, hdr);

    return (nhdr);
}
#define ARC_MINTIME	(hz>>4) /* 62 ms */

static void
arc_cksum_verify(arc_buf_t *buf)
{
    zio_cksum_t zc;

    if (!(zfs_flags & ZFS_DEBUG_MODIFY))
        return;

    mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
    if (buf->b_hdr->b_freeze_cksum == NULL || HDR_IO_ERROR(buf->b_hdr)) {
        mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);
        return;
    }
    fletcher_2_native(buf->b_data, buf->b_hdr->b_size, &zc);
    if (!ZIO_CHECKSUM_EQUAL(*buf->b_hdr->b_freeze_cksum, zc))
        panic("buffer modified while frozen!");
    mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);
}
static int
arc_cksum_equal(arc_buf_t *buf)
{
    zio_cksum_t zc;
    int equal;

    mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
    fletcher_2_native(buf->b_data, buf->b_hdr->b_size, &zc);
    equal = ZIO_CHECKSUM_EQUAL(*buf->b_hdr->b_freeze_cksum, zc);
    mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);

    return (equal);
}
static void
arc_cksum_compute(arc_buf_t *buf, boolean_t force)
{
    if (!force && !(zfs_flags & ZFS_DEBUG_MODIFY))
        return;

    mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
    if (buf->b_hdr->b_freeze_cksum != NULL) {
        mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);
        return;
    }
    buf->b_hdr->b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), KM_SLEEP);
    fletcher_2_native(buf->b_data, buf->b_hdr->b_size,
        buf->b_hdr->b_freeze_cksum);
    mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);
}
#ifndef _KERNEL
void
arc_buf_sigsegv(int sig, siginfo_t *si, void *unused)
{
    panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr);
}
#endif

/* ARGSUSED */
static void
arc_buf_unwatch(arc_buf_t *buf)
{
#ifndef _KERNEL
    if (arc_watch) {
        ASSERT0(mprotect(buf->b_data, buf->b_hdr->b_size,
            PROT_READ | PROT_WRITE));
    }
#endif
}

/* ARGSUSED */
static void
arc_buf_watch(arc_buf_t *buf)
{
#ifndef _KERNEL
    if (arc_watch)
        ASSERT0(mprotect(buf->b_data, buf->b_hdr->b_size, PROT_READ));
#endif
}
static arc_buf_contents_t
arc_buf_type(arc_buf_hdr_t *hdr)
{
    if (HDR_ISTYPE_METADATA(hdr)) {
        return (ARC_BUFC_METADATA);
    } else {
        return (ARC_BUFC_DATA);
    }
}

static uint32_t
arc_bufc_to_flags(arc_buf_contents_t type)
{
    switch (type) {
    case ARC_BUFC_DATA:
        /* metadata field is 0 if buffer contains normal data */
        return (0);
    case ARC_BUFC_METADATA:
        return (ARC_FLAG_BUFC_METADATA);
    default:
        break;
    }
    panic("undefined ARC buffer type!");
    return ((uint32_t)-1);
}
void
arc_buf_thaw(arc_buf_t *buf)
{
    if (zfs_flags & ZFS_DEBUG_MODIFY) {
        if (buf->b_hdr->b_l1hdr.b_state != arc_anon)
            panic("modifying non-anon buffer!");
        if (HDR_IO_IN_PROGRESS(buf->b_hdr))
            panic("modifying buffer while i/o in progress!");
        arc_cksum_verify(buf);
    }

    mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
    if (buf->b_hdr->b_freeze_cksum != NULL) {
        kmem_free(buf->b_hdr->b_freeze_cksum, sizeof (zio_cksum_t));
        buf->b_hdr->b_freeze_cksum = NULL;
    }

    mutex_exit(&buf->b_hdr->b_l1hdr.b_freeze_lock);

    arc_buf_unwatch(buf);
}

void
arc_buf_freeze(arc_buf_t *buf)
{
    kmutex_t *hash_lock;

    if (!(zfs_flags & ZFS_DEBUG_MODIFY))
        return;

    hash_lock = HDR_LOCK(buf->b_hdr);
    mutex_enter(hash_lock);

    ASSERT(buf->b_hdr->b_freeze_cksum != NULL ||
        buf->b_hdr->b_l1hdr.b_state == arc_anon);
    arc_cksum_compute(buf, B_FALSE);
    mutex_exit(hash_lock);
}
static void
add_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
{
    arc_state_t *state;

    ASSERT(HDR_HAS_L1HDR(hdr));
    ASSERT(MUTEX_HELD(hash_lock));

    state = hdr->b_l1hdr.b_state;

    if ((refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
        (state != arc_anon)) {
        /* We don't use the L2-only state list. */
        if (state != arc_l2c_only) {
            arc_buf_contents_t type = arc_buf_type(hdr);
            uint64_t delta = hdr->b_size * hdr->b_l1hdr.b_datacnt;
            multilist_t *list = &state->arcs_list[type];
            uint64_t *size = &state->arcs_lsize[type];

            multilist_remove(list, hdr);

            if (GHOST_STATE(state)) {
                ASSERT0(hdr->b_l1hdr.b_datacnt);
                ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
                delta = hdr->b_size;
            }
            ASSERT3U(*size, >=, delta);
            atomic_add_64(size, -delta);
        }
        /* remove the prefetch flag if we get a reference */
        hdr->b_flags &= ~ARC_FLAG_PREFETCH;
    }
}
static int
remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
{
    int cnt;
    arc_state_t *state = hdr->b_l1hdr.b_state;

    ASSERT(HDR_HAS_L1HDR(hdr));
    ASSERT(state == arc_anon || MUTEX_HELD(hash_lock));
    ASSERT(!GHOST_STATE(state));

    /*
     * arc_l2c_only counts as a ghost state so we don't need to explicitly
     * check to prevent usage of the arc_l2c_only list.
     */
    if (((cnt = refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
        (state != arc_anon)) {
        arc_buf_contents_t type = arc_buf_type(hdr);
        multilist_t *list = &state->arcs_list[type];
        uint64_t *size = &state->arcs_lsize[type];

        multilist_insert(list, hdr);

        ASSERT(hdr->b_l1hdr.b_datacnt > 0);
        atomic_add_64(size, hdr->b_size *
            hdr->b_l1hdr.b_datacnt);
    }
    return (cnt);
}
/*
 * Returns detailed information about a specific arc buffer.  When the
 * state_index argument is set the function will calculate the arc header
 * list position for its arc state.  Since this requires a linear traversal
 * callers are strongly encouraged not to do this.  However, it can be helpful
 * for targeted analysis so the functionality is provided.
 */
void
arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
{
    arc_buf_hdr_t *hdr = ab->b_hdr;
    l1arc_buf_hdr_t *l1hdr = NULL;
    l2arc_buf_hdr_t *l2hdr = NULL;
    arc_state_t *state = NULL;

    memset(abi, 0, sizeof (arc_buf_info_t));

    if (hdr == NULL)
        return;

    abi->abi_flags = hdr->b_flags;

    if (HDR_HAS_L1HDR(hdr)) {
        l1hdr = &hdr->b_l1hdr;
        state = l1hdr->b_state;
    }
    if (HDR_HAS_L2HDR(hdr))
        l2hdr = &hdr->b_l2hdr;

    if (l1hdr) {
        abi->abi_datacnt = l1hdr->b_datacnt;
        abi->abi_access = l1hdr->b_arc_access;
        abi->abi_mru_hits = l1hdr->b_mru_hits;
        abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
        abi->abi_mfu_hits = l1hdr->b_mfu_hits;
        abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
        abi->abi_holds = refcount_count(&l1hdr->b_refcnt);
    }

    if (l2hdr) {
        abi->abi_l2arc_dattr = l2hdr->b_daddr;
        abi->abi_l2arc_asize = l2hdr->b_asize;
        abi->abi_l2arc_compress = l2hdr->b_compress;
        abi->abi_l2arc_hits = l2hdr->b_hits;
    }

    abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON;
    abi->abi_state_contents = arc_buf_type(hdr);
    abi->abi_size = hdr->b_size;
}
/*
 * Move the supplied buffer to the indicated state. The hash lock
 * for the buffer must be held by the caller.
 */
static void
arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
    kmutex_t *hash_lock)
{
    arc_state_t *old_state;
    int64_t refcnt;
    uint32_t datacnt;
    uint64_t from_delta, to_delta;
    arc_buf_contents_t buftype = arc_buf_type(hdr);

    /*
     * We almost always have an L1 hdr here, since we call arc_hdr_realloc()
     * in arc_read() when bringing a buffer out of the L2ARC.  However, the
     * L1 hdr doesn't always exist when we change state to arc_anon before
     * destroying a header, in which case reallocating to add the L1 hdr is
     * pointless.
     */
    if (HDR_HAS_L1HDR(hdr)) {
        old_state = hdr->b_l1hdr.b_state;
        refcnt = refcount_count(&hdr->b_l1hdr.b_refcnt);
        datacnt = hdr->b_l1hdr.b_datacnt;
    } else {
        old_state = arc_l2c_only;
        refcnt = 0;
        datacnt = 0;
    }

    ASSERT(MUTEX_HELD(hash_lock));
    ASSERT3P(new_state, !=, old_state);
    ASSERT(refcnt == 0 || datacnt > 0);
    ASSERT(!GHOST_STATE(new_state) || datacnt == 0);
    ASSERT(old_state != arc_anon || datacnt <= 1);

    from_delta = to_delta = datacnt * hdr->b_size;

    /*
     * If this buffer is evictable, transfer it from the
     * old state list to the new state list.
     */
    if (refcnt == 0) {
        if (old_state != arc_anon && old_state != arc_l2c_only) {
            uint64_t *size = &old_state->arcs_lsize[buftype];

            ASSERT(HDR_HAS_L1HDR(hdr));
            multilist_remove(&old_state->arcs_list[buftype], hdr);

            /*
             * If prefetching out of the ghost cache,
             * we will have a non-zero datacnt.
             */
            if (GHOST_STATE(old_state) && datacnt == 0) {
                /* ghost elements have a ghost size */
                ASSERT(hdr->b_l1hdr.b_buf == NULL);
                from_delta = hdr->b_size;
            }
            ASSERT3U(*size, >=, from_delta);
            atomic_add_64(size, -from_delta);
        }
        if (new_state != arc_anon && new_state != arc_l2c_only) {
            uint64_t *size = &new_state->arcs_lsize[buftype];

            /*
             * An L1 header always exists here, since if we're
             * moving to some L1-cached state (i.e. not l2c_only or
             * anonymous), we realloc the header to add an L1hdr
             * beforehand.
             */
            ASSERT(HDR_HAS_L1HDR(hdr));
            multilist_insert(&new_state->arcs_list[buftype], hdr);

            /* ghost elements have a ghost size */
            if (GHOST_STATE(new_state)) {
                ASSERT(hdr->b_l1hdr.b_buf == NULL);
                to_delta = hdr->b_size;
            }
            atomic_add_64(size, to_delta);
        }
    }

    ASSERT(!BUF_EMPTY(hdr));
    if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr))
        buf_hash_remove(hdr);

    /* adjust state sizes (ignore arc_l2c_only) */

    if (to_delta && new_state != arc_l2c_only) {
        ASSERT(HDR_HAS_L1HDR(hdr));
        if (GHOST_STATE(new_state)) {
            ASSERT0(datacnt);

            /*
             * When moving a header to a ghost state, we first
             * remove all arc buffers. Thus, we'll have a
             * datacnt of zero, and no arc buffer to use for
             * the reference. As a result, we use the arc
             * header pointer for the reference.
             */
            (void) refcount_add_many(&new_state->arcs_size,
                hdr->b_size, hdr);
        } else {
            arc_buf_t *buf;
            ASSERT3U(datacnt, !=, 0);

            /*
             * Each individual buffer holds a unique reference,
             * thus we must remove each of these references one
             * at a time.
             */
            for (buf = hdr->b_l1hdr.b_buf; buf != NULL;
                buf = buf->b_next) {
                (void) refcount_add_many(&new_state->arcs_size,
                    hdr->b_size, buf);
            }
        }
    }

    if (from_delta && old_state != arc_l2c_only) {
        ASSERT(HDR_HAS_L1HDR(hdr));
        if (GHOST_STATE(old_state)) {
            /*
             * When moving a header off of a ghost state,
             * there's the possibility for datacnt to be
             * non-zero. This is because we first add the
             * arc buffer to the header prior to changing
             * the header's state. Since we used the header
             * for the reference when putting the header on
             * the ghost state, we must balance that and use
             * the header when removing off the ghost state
             * (even though datacnt is non zero).
             */
            IMPLY(datacnt == 0, new_state == arc_anon ||
                new_state == arc_l2c_only);

            (void) refcount_remove_many(&old_state->arcs_size,
                hdr->b_size, hdr);
        } else {
            arc_buf_t *buf;
            ASSERT3U(datacnt, !=, 0);

            /*
             * Each individual buffer holds a unique reference,
             * thus we must remove each of these references one
             * at a time.
             */
            for (buf = hdr->b_l1hdr.b_buf; buf != NULL;
                buf = buf->b_next) {
                (void) refcount_remove_many(
                    &old_state->arcs_size, hdr->b_size, buf);
            }
        }
    }

    if (HDR_HAS_L1HDR(hdr))
        hdr->b_l1hdr.b_state = new_state;

    /*
     * L2 headers should never be on the L2 state list since they don't
     * have L1 headers allocated.
     */
    ASSERT(multilist_is_empty(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]) &&
        multilist_is_empty(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]));
}
void
arc_space_consume(uint64_t space, arc_space_type_t type)
{
    ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

    switch (type) {
    default:
        break;
    case ARC_SPACE_DATA:
        ARCSTAT_INCR(arcstat_data_size, space);
        break;
    case ARC_SPACE_META:
        ARCSTAT_INCR(arcstat_metadata_size, space);
        break;
    case ARC_SPACE_BONUS:
        ARCSTAT_INCR(arcstat_bonus_size, space);
        break;
    case ARC_SPACE_DNODE:
        ARCSTAT_INCR(arcstat_dnode_size, space);
        break;
    case ARC_SPACE_DBUF:
        ARCSTAT_INCR(arcstat_dbuf_size, space);
        break;
    case ARC_SPACE_HDRS:
        ARCSTAT_INCR(arcstat_hdr_size, space);
        break;
    case ARC_SPACE_L2HDRS:
        ARCSTAT_INCR(arcstat_l2_hdr_size, space);
        break;
    }

    if (type != ARC_SPACE_DATA)
        ARCSTAT_INCR(arcstat_meta_used, space);

    atomic_add_64(&arc_size, space);
}

void
arc_space_return(uint64_t space, arc_space_type_t type)
{
    ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

    switch (type) {
    default:
        break;
    case ARC_SPACE_DATA:
        ARCSTAT_INCR(arcstat_data_size, -space);
        break;
    case ARC_SPACE_META:
        ARCSTAT_INCR(arcstat_metadata_size, -space);
        break;
    case ARC_SPACE_BONUS:
        ARCSTAT_INCR(arcstat_bonus_size, -space);
        break;
    case ARC_SPACE_DNODE:
        ARCSTAT_INCR(arcstat_dnode_size, -space);
        break;
    case ARC_SPACE_DBUF:
        ARCSTAT_INCR(arcstat_dbuf_size, -space);
        break;
    case ARC_SPACE_HDRS:
        ARCSTAT_INCR(arcstat_hdr_size, -space);
        break;
    case ARC_SPACE_L2HDRS:
        ARCSTAT_INCR(arcstat_l2_hdr_size, -space);
        break;
    }

    if (type != ARC_SPACE_DATA) {
        ASSERT(arc_meta_used >= space);
        if (arc_meta_max < arc_meta_used)
            arc_meta_max = arc_meta_used;
        ARCSTAT_INCR(arcstat_meta_used, -space);
    }

    ASSERT(arc_size >= space);
    atomic_add_64(&arc_size, -space);
}

arc_buf_t *
arc_buf_alloc(spa_t *spa, uint64_t size, void *tag, arc_buf_contents_t type)
{
	arc_buf_hdr_t *hdr;
	arc_buf_t *buf;

	VERIFY3U(size, <=, spa_maxblocksize(spa));
	hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
	ASSERT(BUF_EMPTY(hdr));
	ASSERT3P(hdr->b_freeze_cksum, ==, NULL);
	hdr->b_size = size;
	hdr->b_spa = spa_load_guid(spa);
	hdr->b_l1hdr.b_mru_hits = 0;
	hdr->b_l1hdr.b_mru_ghost_hits = 0;
	hdr->b_l1hdr.b_mfu_hits = 0;
	hdr->b_l1hdr.b_mfu_ghost_hits = 0;
	hdr->b_l1hdr.b_l2_hits = 0;

	buf = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
	buf->b_hdr = hdr;
	buf->b_data = NULL;
	buf->b_efunc = NULL;
	buf->b_private = NULL;
	buf->b_next = NULL;

	hdr->b_flags = arc_bufc_to_flags(type);
	hdr->b_flags |= ARC_FLAG_HAS_L1HDR;

	hdr->b_l1hdr.b_buf = buf;
	hdr->b_l1hdr.b_state = arc_anon;
	hdr->b_l1hdr.b_arc_access = 0;
	hdr->b_l1hdr.b_datacnt = 1;
	hdr->b_l1hdr.b_tmp_cdata = NULL;

	arc_get_data_buf(buf);
	ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
	(void) refcount_add(&hdr->b_l1hdr.b_refcnt, tag);

	return (buf);
}
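
/*
 * Illustrative sketch (not part of the original source): a minimal
 * anonymous-buffer lifecycle. The tag passed at allocation holds the
 * reference and must be used to drop it; "my_tag" and "src" are
 * hypothetical names:
 *
 *	arc_buf_t *buf = arc_buf_alloc(spa, size, my_tag, ARC_BUFC_DATA);
 *	bcopy(src, buf->b_data, size);
 *	(void) arc_buf_remove_ref(buf, my_tag);
 */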

static char *arc_onloan_tag = "onloan";

/*
 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
 * flight data by arc_tempreserve_space() until they are "returned". Loaned
 * buffers must be returned to the arc before they can be used by the DMU or
 * freed.
 */
arc_buf_t *
arc_loan_buf(spa_t *spa, uint64_t size)
{
	arc_buf_t *buf;

	buf = arc_buf_alloc(spa, size, arc_onloan_tag, ARC_BUFC_DATA);

	atomic_add_64(&arc_loaned_bytes, size);
	return (buf);
}

/*
 * Return a loaned arc buffer to the arc.
 */
void
arc_return_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(buf->b_data != NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
	(void) refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);

	atomic_add_64(&arc_loaned_bytes, -hdr->b_size);
}

/* Detach an arc_buf from a dbuf (tag) */
void
arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(buf->b_data != NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
	(void) refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);
	buf->b_efunc = NULL;
	buf->b_private = NULL;

	atomic_add_64(&arc_loaned_bytes, hdr->b_size);
}
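
/*
 * Illustrative sketch (not part of the original source): the loan
 * protocol from a consumer's point of view; "my_tag" is a hypothetical
 * tag. While loaned, the buffer is accounted in arc_loaned_bytes and
 * exempt from arc_tempreserve_space():
 *
 *	arc_buf_t *buf = arc_loan_buf(spa, size);
 *	...consumer fills buf->b_data...
 *	arc_return_buf(buf, my_tag);
 */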

static arc_buf_t *
arc_buf_clone(arc_buf_t *from)
{
	arc_buf_t *buf;
	arc_buf_hdr_t *hdr = from->b_hdr;
	uint64_t size = hdr->b_size;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(hdr->b_l1hdr.b_state != arc_anon);

	buf = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
	buf->b_hdr = hdr;
	buf->b_data = NULL;
	buf->b_efunc = NULL;
	buf->b_private = NULL;
	buf->b_next = hdr->b_l1hdr.b_buf;
	hdr->b_l1hdr.b_buf = buf;
	arc_get_data_buf(buf);
	bcopy(from->b_data, buf->b_data, size);

	/*
	 * This buffer already exists in the arc so create a duplicate
	 * copy for the caller. If the buffer is associated with user data
	 * then track the size and number of duplicates. These stats will be
	 * updated as duplicate buffers are created and destroyed.
	 */
	if (HDR_ISTYPE_DATA(hdr)) {
		ARCSTAT_BUMP(arcstat_duplicate_buffers);
		ARCSTAT_INCR(arcstat_duplicate_buffers_size, size);
	}
	hdr->b_l1hdr.b_datacnt += 1;
	return (buf);
}

void
arc_buf_add_ref(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;

	/*
	 * Check to see if this buffer is evicted. Callers
	 * must verify b_data != NULL to know if the add_ref
	 * was successful.
	 */
	mutex_enter(&buf->b_evict_lock);
	if (buf->b_data == NULL) {
		mutex_exit(&buf->b_evict_lock);
		return;
	}
	hash_lock = HDR_LOCK(buf->b_hdr);
	mutex_enter(hash_lock);
	hdr = buf->b_hdr;
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	mutex_exit(&buf->b_evict_lock);

	ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
	    hdr->b_l1hdr.b_state == arc_mfu);

	add_reference(hdr, hash_lock, tag);
	DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
	arc_access(hdr, hash_lock);
	mutex_exit(hash_lock);
	ARCSTAT_BUMP(arcstat_hits);
	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
	    demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
	    data, metadata, hits);
}

static void
arc_buf_free_on_write(void *data, size_t size,
    void (*free_func)(void *, size_t))
{
	l2arc_data_free_t *df;

	df = kmem_alloc(sizeof (*df), KM_SLEEP);
	df->l2df_data = data;
	df->l2df_size = size;
	df->l2df_func = free_func;
	mutex_enter(&l2arc_free_on_write_mtx);
	list_insert_head(l2arc_free_on_write, df);
	mutex_exit(&l2arc_free_on_write_mtx);
}

/*
 * Free the arc data buffer. If it is an l2arc write in progress,
 * the buffer is placed on l2arc_free_on_write to be freed later.
 */
static void
arc_buf_data_free(arc_buf_t *buf, void (*free_func)(void *, size_t))
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	if (HDR_L2_WRITING(hdr)) {
		arc_buf_free_on_write(buf->b_data, hdr->b_size, free_func);
		ARCSTAT_BUMP(arcstat_l2_free_on_write);
	} else {
		free_func(buf->b_data, hdr->b_size);
	}
}

static void
arc_buf_l2_cdata_free(arc_buf_hdr_t *hdr)
{
	ASSERT(HDR_HAS_L2HDR(hdr));
	ASSERT(MUTEX_HELD(&hdr->b_l2hdr.b_dev->l2ad_mtx));

	/*
	 * The b_tmp_cdata field is linked off of the b_l1hdr, so if
	 * that doesn't exist, the header is in the arc_l2c_only state,
	 * and there isn't anything to free (it's already been freed).
	 */
	if (!HDR_HAS_L1HDR(hdr))
		return;

	/*
	 * The header isn't being written to the l2arc device, thus it
	 * shouldn't have a b_tmp_cdata to free.
	 */
	if (!HDR_L2_WRITING(hdr)) {
		ASSERT3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);
		return;
	}

	/*
	 * The header does not have compression enabled. This can be due
	 * to the buffer not being compressible, or because we're
	 * freeing the buffer before the second phase of
	 * l2arc_write_buffers() has started (which does the compression
	 * step). In either case, b_tmp_cdata does not point to a
	 * separately compressed buffer, so there's nothing to free (it
	 * points to the same buffer as the arc_buf_t's b_data field).
	 */
	if (hdr->b_l2hdr.b_compress == ZIO_COMPRESS_OFF) {
		hdr->b_l1hdr.b_tmp_cdata = NULL;
		return;
	}

	/*
	 * There's nothing to free since the buffer was all zero's and
	 * compressed to a zero length buffer.
	 */
	if (hdr->b_l2hdr.b_compress == ZIO_COMPRESS_EMPTY) {
		ASSERT3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);
		return;
	}

	ASSERT(L2ARC_IS_VALID_COMPRESS(hdr->b_l2hdr.b_compress));

	arc_buf_free_on_write(hdr->b_l1hdr.b_tmp_cdata,
	    hdr->b_size, zio_data_buf_free);

	ARCSTAT_BUMP(arcstat_l2_cdata_free_on_write);
	hdr->b_l1hdr.b_tmp_cdata = NULL;
}

/*
 * Free up buf->b_data and if 'remove' is set, then pull the
 * arc_buf_t off of the arc_buf_hdr_t's list and free it.
 */
static void
arc_buf_destroy(arc_buf_t *buf, boolean_t remove)
{
	arc_buf_t **bufp;

	/* free up data associated with the buf */
	if (buf->b_data != NULL) {
		arc_state_t *state = buf->b_hdr->b_l1hdr.b_state;
		uint64_t size = buf->b_hdr->b_size;
		arc_buf_contents_t type = arc_buf_type(buf->b_hdr);

		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		if (type == ARC_BUFC_METADATA) {
			arc_buf_data_free(buf, zio_buf_free);
			arc_space_return(size, ARC_SPACE_META);
		} else {
			ASSERT(type == ARC_BUFC_DATA);
			arc_buf_data_free(buf, zio_data_buf_free);
			arc_space_return(size, ARC_SPACE_DATA);
		}

		/* protected by hash lock, if in the hash table */
		if (multilist_link_active(&buf->b_hdr->b_l1hdr.b_arc_node)) {
			uint64_t *cnt = &state->arcs_lsize[type];

			ASSERT(refcount_is_zero(
			    &buf->b_hdr->b_l1hdr.b_refcnt));
			ASSERT(state != arc_anon && state != arc_l2c_only);

			ASSERT3U(*cnt, >=, size);
			atomic_add_64(cnt, -size);
		}
		buf->b_data = NULL;

		(void) refcount_remove_many(&state->arcs_size, size, buf);

		/*
		 * If we're destroying a duplicate buffer make sure
		 * that the appropriate statistics are updated.
		 */
		if (buf->b_hdr->b_l1hdr.b_datacnt > 1 &&
		    HDR_ISTYPE_DATA(buf->b_hdr)) {
			ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers);
			ARCSTAT_INCR(arcstat_duplicate_buffers_size, -size);
		}
		ASSERT(buf->b_hdr->b_l1hdr.b_datacnt > 0);
		buf->b_hdr->b_l1hdr.b_datacnt -= 1;
	}

	/* only remove the buf if requested */
	if (!remove)
		return;

	/* remove the buf from the hdr list */
	for (bufp = &buf->b_hdr->b_l1hdr.b_buf; *bufp != buf;
	    bufp = &(*bufp)->b_next)
		continue;
	*bufp = buf->b_next;
	buf->b_next = NULL;

	ASSERT(buf->b_efunc == NULL);

	/* clean up the buf */
	buf->b_hdr = NULL;
	kmem_cache_free(buf_cache, buf);
}

static void
arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
{
	l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr;
	l2arc_dev_t *dev = l2hdr->b_dev;

	ASSERT(MUTEX_HELD(&dev->l2ad_mtx));
	ASSERT(HDR_HAS_L2HDR(hdr));

	list_remove(&dev->l2ad_buflist, hdr);

	/*
	 * We don't want to leak the b_tmp_cdata buffer that was
	 * allocated in l2arc_write_buffers()
	 */
	arc_buf_l2_cdata_free(hdr);

	/*
	 * If the l2hdr's b_daddr is equal to L2ARC_ADDR_UNSET, then
	 * this header is being processed by l2arc_write_buffers() (i.e.
	 * it's in the first stage of l2arc_write_buffers()).
	 * Re-affirming that truth here, just to serve as a reminder. If
	 * b_daddr does not equal L2ARC_ADDR_UNSET, then the header may or
	 * may not have its HDR_L2_WRITING flag set. (the write may have
	 * completed, in which case HDR_L2_WRITING will be false and the
	 * b_daddr field will point to the address of the buffer on disk).
	 */
	IMPLY(l2hdr->b_daddr == L2ARC_ADDR_UNSET, HDR_L2_WRITING(hdr));

	/*
	 * If b_daddr is equal to L2ARC_ADDR_UNSET, we're racing with
	 * l2arc_write_buffers(). Since we've just removed this header
	 * from the l2arc buffer list, this header will never reach the
	 * second stage of l2arc_write_buffers(), which increments the
	 * accounting stats for this header. Thus, we must be careful
	 * not to decrement them for this header either.
	 */
	if (l2hdr->b_daddr != L2ARC_ADDR_UNSET) {
		ARCSTAT_INCR(arcstat_l2_asize, -l2hdr->b_asize);
		ARCSTAT_INCR(arcstat_l2_size, -hdr->b_size);

		vdev_space_update(dev->l2ad_vdev,
		    -l2hdr->b_asize, 0, 0);

		(void) refcount_remove_many(&dev->l2ad_alloc,
		    l2hdr->b_asize, hdr);
	}

	hdr->b_flags &= ~ARC_FLAG_HAS_L2HDR;
}

static void
arc_hdr_destroy(arc_buf_hdr_t *hdr)
{
	if (HDR_HAS_L1HDR(hdr)) {
		ASSERT(hdr->b_l1hdr.b_buf == NULL ||
		    hdr->b_l1hdr.b_datacnt > 0);
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	}
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));
	ASSERT(!HDR_IN_HASH_TABLE(hdr));

	if (HDR_HAS_L2HDR(hdr)) {
		l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
		boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx);

		if (!buflist_held)
			mutex_enter(&dev->l2ad_mtx);

		/*
		 * Even though we checked this conditional above, we
		 * need to check this again now that we have the
		 * l2ad_mtx. This is because we could be racing with
		 * another thread calling l2arc_evict() which might have
		 * destroyed this header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx. If that happens, we don't
		 * want to re-destroy the header's L2 portion.
		 */
		if (HDR_HAS_L2HDR(hdr))
			arc_hdr_l2hdr_destroy(hdr);

		if (!buflist_held)
			mutex_exit(&dev->l2ad_mtx);
	}

	if (!BUF_EMPTY(hdr))
		buf_discard_identity(hdr);

	if (hdr->b_freeze_cksum != NULL) {
		kmem_free(hdr->b_freeze_cksum, sizeof (zio_cksum_t));
		hdr->b_freeze_cksum = NULL;
	}

	if (HDR_HAS_L1HDR(hdr)) {
		while (hdr->b_l1hdr.b_buf) {
			arc_buf_t *buf = hdr->b_l1hdr.b_buf;

			if (buf->b_efunc != NULL) {
				mutex_enter(&arc_user_evicts_lock);
				mutex_enter(&buf->b_evict_lock);
				ASSERT(buf->b_hdr != NULL);
				arc_buf_destroy(hdr->b_l1hdr.b_buf, FALSE);
				hdr->b_l1hdr.b_buf = buf->b_next;
				buf->b_hdr = &arc_eviction_hdr;
				buf->b_next = arc_eviction_list;
				arc_eviction_list = buf;
				mutex_exit(&buf->b_evict_lock);
				cv_signal(&arc_user_evicts_cv);
				mutex_exit(&arc_user_evicts_lock);
			} else {
				arc_buf_destroy(hdr->b_l1hdr.b_buf, TRUE);
			}
		}
	}

	ASSERT3P(hdr->b_hash_next, ==, NULL);
	if (HDR_HAS_L1HDR(hdr)) {
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
		kmem_cache_free(hdr_full_cache, hdr);
	} else {
		kmem_cache_free(hdr_l2only_cache, hdr);
	}
}

void
arc_buf_free(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	int hashed = hdr->b_l1hdr.b_state != arc_anon;

	ASSERT(buf->b_efunc == NULL);
	ASSERT(buf->b_data != NULL);

	if (hashed) {
		kmutex_t *hash_lock = HDR_LOCK(hdr);

		mutex_enter(hash_lock);
		hdr = buf->b_hdr;
		ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));

		(void) remove_reference(hdr, hash_lock, tag);
		if (hdr->b_l1hdr.b_datacnt > 1) {
			arc_buf_destroy(buf, TRUE);
		} else {
			ASSERT(buf == hdr->b_l1hdr.b_buf);
			ASSERT(buf->b_efunc == NULL);
			hdr->b_flags |= ARC_FLAG_BUF_AVAILABLE;
		}
		mutex_exit(hash_lock);
	} else if (HDR_IO_IN_PROGRESS(hdr)) {
		int destroy_hdr;
		/*
		 * We are in the middle of an async write. Don't destroy
		 * this buffer unless the write completes before we finish
		 * decrementing the reference count.
		 */
		mutex_enter(&arc_user_evicts_lock);
		(void) remove_reference(hdr, NULL, tag);
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		destroy_hdr = !HDR_IO_IN_PROGRESS(hdr);
		mutex_exit(&arc_user_evicts_lock);
		if (destroy_hdr)
			arc_hdr_destroy(hdr);
	} else {
		if (remove_reference(hdr, NULL, tag) > 0)
			arc_buf_destroy(buf, TRUE);
		else
			arc_hdr_destroy(hdr);
	}
}

boolean_t
arc_buf_remove_ref(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	kmutex_t *hash_lock = HDR_LOCK(hdr);
	boolean_t no_callback = (buf->b_efunc == NULL);

	if (hdr->b_l1hdr.b_state == arc_anon) {
		ASSERT(hdr->b_l1hdr.b_datacnt == 1);
		arc_buf_free(buf, tag);
		return (no_callback);
	}

	mutex_enter(hash_lock);
	hdr = buf->b_hdr;
	ASSERT(hdr->b_l1hdr.b_datacnt > 0);
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	ASSERT(hdr->b_l1hdr.b_state != arc_anon);
	ASSERT(buf->b_data != NULL);

	(void) remove_reference(hdr, hash_lock, tag);
	if (hdr->b_l1hdr.b_datacnt > 1) {
		if (no_callback)
			arc_buf_destroy(buf, TRUE);
	} else if (no_callback) {
		ASSERT(hdr->b_l1hdr.b_buf == buf && buf->b_next == NULL);
		ASSERT(buf->b_efunc == NULL);
		hdr->b_flags |= ARC_FLAG_BUF_AVAILABLE;
	}
	ASSERT(no_callback || hdr->b_l1hdr.b_datacnt > 1 ||
	    refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
	mutex_exit(hash_lock);
	return (no_callback);
}

int
arc_buf_size(arc_buf_t *buf)
{
	return (buf->b_hdr->b_size);
}

/*
 * Called from the DMU to determine if the current buffer should be
 * evicted. In order to ensure proper locking, the eviction must be initiated
 * from the DMU. Return true if the buffer is associated with user data and
 * duplicate buffers still exist.
 */
boolean_t
arc_buf_eviction_needed(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr;
	boolean_t evict_needed = B_FALSE;

	if (zfs_disable_dup_eviction)
		return (B_FALSE);

	mutex_enter(&buf->b_evict_lock);
	hdr = buf->b_hdr;
	if (hdr == NULL) {
		/*
		 * We are in arc_do_user_evicts(); let that function
		 * perform the eviction.
		 */
		ASSERT(buf->b_data == NULL);
		mutex_exit(&buf->b_evict_lock);
		return (B_FALSE);
	} else if (buf->b_data == NULL) {
		/*
		 * We have already been added to the arc eviction list;
		 * recommend eviction.
		 */
		ASSERT3P(hdr, ==, &arc_eviction_hdr);
		mutex_exit(&buf->b_evict_lock);
		return (B_TRUE);
	}

	if (hdr->b_l1hdr.b_datacnt > 1 && HDR_ISTYPE_DATA(hdr))
		evict_needed = B_TRUE;

	mutex_exit(&buf->b_evict_lock);
	return (evict_needed);
}

/*
 * Evict the arc_buf_hdr that is provided as a parameter. The resultant
 * state of the header is dependent on its state prior to entering this
 * function. The following transitions are possible:
 *
 *    - arc_mru -> arc_mru_ghost
 *    - arc_mfu -> arc_mfu_ghost
 *    - arc_mru_ghost -> arc_l2c_only
 *    - arc_mru_ghost -> deleted
 *    - arc_mfu_ghost -> arc_l2c_only
 *    - arc_mfu_ghost -> deleted
 */
static int64_t
arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
	arc_state_t *evicted_state, *state;
	int64_t bytes_evicted = 0;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT(HDR_HAS_L1HDR(hdr));

	state = hdr->b_l1hdr.b_state;
	if (GHOST_STATE(state)) {
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		ASSERT(hdr->b_l1hdr.b_buf == NULL);

		/*
		 * l2arc_write_buffers() relies on a header's L1 portion
		 * (i.e. its b_tmp_cdata field) during its write phase.
		 * Thus, we cannot push a header onto the arc_l2c_only
		 * state (removing its L1 piece) until the header is
		 * done being written to the l2arc.
		 */
		if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) {
			ARCSTAT_BUMP(arcstat_evict_l2_skip);
			return (bytes_evicted);
		}

		ARCSTAT_BUMP(arcstat_deleted);
		bytes_evicted += hdr->b_size;

		DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr);

		if (HDR_HAS_L2HDR(hdr)) {
			/*
			 * This buffer is cached on the 2nd Level ARC;
			 * don't destroy the header.
			 */
			arc_change_state(arc_l2c_only, hdr, hash_lock);
			/*
			 * dropping from L1+L2 cached to L2-only,
			 * realloc to remove the L1 header.
			 */
			hdr = arc_hdr_realloc(hdr, hdr_full_cache,
			    hdr_l2only_cache);
		} else {
			arc_change_state(arc_anon, hdr, hash_lock);
			arc_hdr_destroy(hdr);
		}
		return (bytes_evicted);
	}

	ASSERT(state == arc_mru || state == arc_mfu);
	evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost;

	/* prefetch buffers have a minimum lifespan */
	if (HDR_IO_IN_PROGRESS(hdr) ||
	    ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) &&
	    ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access <
	    arc_min_prefetch_lifespan)) {
		ARCSTAT_BUMP(arcstat_evict_skip);
		return (bytes_evicted);
	}

	ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
	ASSERT3U(hdr->b_l1hdr.b_datacnt, >, 0);
	while (hdr->b_l1hdr.b_buf) {
		arc_buf_t *buf = hdr->b_l1hdr.b_buf;
		if (!mutex_tryenter(&buf->b_evict_lock)) {
			ARCSTAT_BUMP(arcstat_mutex_miss);
			break;
		}
		if (buf->b_data != NULL)
			bytes_evicted += hdr->b_size;
		if (buf->b_efunc != NULL) {
			mutex_enter(&arc_user_evicts_lock);
			arc_buf_destroy(buf, FALSE);
			hdr->b_l1hdr.b_buf = buf->b_next;
			buf->b_hdr = &arc_eviction_hdr;
			buf->b_next = arc_eviction_list;
			arc_eviction_list = buf;
			cv_signal(&arc_user_evicts_cv);
			mutex_exit(&arc_user_evicts_lock);
			mutex_exit(&buf->b_evict_lock);
		} else {
			mutex_exit(&buf->b_evict_lock);
			arc_buf_destroy(buf, TRUE);
		}
	}

	if (HDR_HAS_L2HDR(hdr)) {
		ARCSTAT_INCR(arcstat_evict_l2_cached, hdr->b_size);
	} else {
		if (l2arc_write_eligible(hdr->b_spa, hdr))
			ARCSTAT_INCR(arcstat_evict_l2_eligible, hdr->b_size);
		else
			ARCSTAT_INCR(arcstat_evict_l2_ineligible, hdr->b_size);
	}

	if (hdr->b_l1hdr.b_datacnt == 0) {
		arc_change_state(evicted_state, hdr, hash_lock);
		ASSERT(HDR_IN_HASH_TABLE(hdr));
		hdr->b_flags |= ARC_FLAG_IN_HASH_TABLE;
		hdr->b_flags &= ~ARC_FLAG_BUF_AVAILABLE;
		DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr);
	}

	return (bytes_evicted);
}
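
/*
 * Illustrative sketch (not part of the original source): the calling
 * pattern used by arc_evict_state_impl() below. The hash lock is taken
 * with mutex_tryenter() rather than mutex_enter(), per the no-sleep
 * rule documented above arc_reclaim_thread():
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *	if (mutex_tryenter(hash_lock)) {
 *		uint64_t evicted = arc_evict_hdr(hdr, hash_lock);
 *		mutex_exit(hash_lock);
 *		bytes_evicted += evicted;
 *	} else {
 *		ARCSTAT_BUMP(arcstat_mutex_miss);
 *	}
 */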

static uint64_t
arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
    uint64_t spa, int64_t bytes)
{
	multilist_sublist_t *mls;
	uint64_t bytes_evicted = 0;
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	int evict_count = 0;

	ASSERT3P(marker, !=, NULL);
	IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

	mls = multilist_sublist_lock(ml, idx);

	for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL;
	    hdr = multilist_sublist_prev(mls, marker)) {
		if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) ||
		    (evict_count >= zfs_arc_evict_batch_limit))
			break;

		/*
		 * To keep our iteration location, move the marker
		 * forward. Since we're not holding hdr's hash lock, we
		 * must be very careful and not remove 'hdr' from the
		 * sublist. Otherwise, other consumers might mistake the
		 * 'hdr' as not being on a sublist when they call the
		 * multilist_link_active() function (they all rely on
		 * the hash lock protecting concurrent insertions and
		 * removals). multilist_sublist_move_forward() was
		 * specifically implemented to ensure this is the case
		 * (only 'marker' will be removed and re-inserted).
		 */
		multilist_sublist_move_forward(mls, marker);

		/*
		 * The only case where the b_spa field should ever be
		 * zero, is the marker headers inserted by
		 * arc_evict_state(). It's possible for multiple threads
		 * to be calling arc_evict_state() concurrently (e.g.
		 * dsl_pool_close() and zio_inject_fault()), so we must
		 * skip any markers we see from these other threads.
		 */
		if (hdr->b_spa == 0)
			continue;

		/* we're only interested in evicting buffers of a certain spa */
		if (spa != 0 && hdr->b_spa != spa) {
			ARCSTAT_BUMP(arcstat_evict_skip);
			continue;
		}

		hash_lock = HDR_LOCK(hdr);

		/*
		 * We aren't calling this function from any code path
		 * that would already be holding a hash lock, so we're
		 * asserting on this assumption to be defensive in case
		 * this ever changes. Without this check, it would be
		 * possible to incorrectly increment arcstat_mutex_miss
		 * below (e.g. if the code changed such that we called
		 * this function with a hash lock held).
		 */
		ASSERT(!MUTEX_HELD(hash_lock));

		if (mutex_tryenter(hash_lock)) {
			uint64_t evicted = arc_evict_hdr(hdr, hash_lock);
			mutex_exit(hash_lock);

			bytes_evicted += evicted;

			/*
			 * If evicted is zero, arc_evict_hdr() must have
			 * decided to skip this header, don't increment
			 * evict_count in this case.
			 */
			if (evicted != 0)
				evict_count++;

			/*
			 * If arc_size isn't overflowing, signal any
			 * threads that might happen to be waiting.
			 *
			 * For each header evicted, we wake up a single
			 * thread. If we used cv_broadcast, we could
			 * wake up "too many" threads causing arc_size
			 * to significantly overflow arc_c; since
			 * arc_get_data_buf() doesn't check for overflow
			 * when it's woken up (it doesn't because it's
			 * possible for the ARC to be overflowing while
			 * full of un-evictable buffers, and the
			 * function should proceed in this case).
			 *
			 * If threads are left sleeping, due to not
			 * using cv_broadcast, they will be woken up
			 * just before arc_reclaim_thread() sleeps.
			 */
			mutex_enter(&arc_reclaim_lock);
			if (!arc_is_overflowing())
				cv_signal(&arc_reclaim_waiters_cv);
			mutex_exit(&arc_reclaim_lock);
		} else {
			ARCSTAT_BUMP(arcstat_mutex_miss);
		}
	}

	multilist_sublist_unlock(mls);

	return (bytes_evicted);
}

/*
 * Evict buffers from the given arc state, until we've removed the
 * specified number of bytes. Move the removed buffers to the
 * appropriate evict state.
 *
 * This function makes a "best effort". It skips over any buffers
 * it can't get a hash_lock on, and so, may not catch all candidates.
 * It may also return without evicting as much space as requested.
 *
 * If bytes is specified using the special value ARC_EVICT_ALL, this
 * will evict all available (i.e. unlocked and evictable) buffers from
 * the given arc state; which is used by arc_flush().
 */
static uint64_t
arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
	uint64_t total_evicted = 0;
	multilist_t *ml = &state->arcs_list[type];
	int num_sublists;
	arc_buf_hdr_t **markers;
	int i;

	IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

	num_sublists = multilist_get_num_sublists(ml);

	/*
	 * If we've tried to evict from each sublist, made some
	 * progress, but still have not hit the target number of bytes
	 * to evict, we want to keep trying. The markers allow us to
	 * pick up where we left off for each individual sublist, rather
	 * than starting from the tail each time.
	 */
	markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP);
	for (i = 0; i < num_sublists; i++) {
		multilist_sublist_t *mls;

		markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP);

		/*
		 * A b_spa of 0 is used to indicate that this header is
		 * a marker. This fact is used in arc_adjust_type() and
		 * arc_evict_state_impl().
		 */
		markers[i]->b_spa = 0;

		mls = multilist_sublist_lock(ml, i);
		multilist_sublist_insert_tail(mls, markers[i]);
		multilist_sublist_unlock(mls);
	}

	/*
	 * While we haven't hit our target number of bytes to evict, or
	 * we're evicting all available buffers.
	 */
	while (total_evicted < bytes || bytes == ARC_EVICT_ALL) {
		int sublist_idx = multilist_get_random_index(ml);
		uint64_t scan_evicted = 0;

		/*
		 * Try to reduce pinned dnodes with a floor of arc_dnode_limit.
		 * Request that 10% of the LRUs be scanned by the superblock
		 * shrinker.
		 */
		if (type == ARC_BUFC_DATA && arc_dnode_size > arc_dnode_limit)
			arc_prune_async((arc_dnode_size - arc_dnode_limit) /
			    sizeof (dnode_t) / zfs_arc_dnode_reduce_percent);

		/*
		 * Start eviction using a randomly selected sublist,
		 * this is to try and evenly balance eviction across all
		 * sublists. Always starting at the same sublist
		 * (e.g. index 0) would cause evictions to favor certain
		 * sublists over others.
		 */
		for (i = 0; i < num_sublists; i++) {
			uint64_t bytes_remaining;
			uint64_t bytes_evicted;

			if (bytes == ARC_EVICT_ALL)
				bytes_remaining = ARC_EVICT_ALL;
			else if (total_evicted < bytes)
				bytes_remaining = bytes - total_evicted;
			else
				break;

			bytes_evicted = arc_evict_state_impl(ml, sublist_idx,
			    markers[sublist_idx], spa, bytes_remaining);

			scan_evicted += bytes_evicted;
			total_evicted += bytes_evicted;

			/* we've reached the end, wrap to the beginning */
			if (++sublist_idx >= num_sublists)
				sublist_idx = 0;
		}

		/*
		 * If we didn't evict anything during this scan, we have
		 * no reason to believe we'll evict more during another
		 * scan, so break the loop.
		 */
		if (scan_evicted == 0) {
			/* This isn't possible, let's make that obvious */
			ASSERT3S(bytes, !=, 0);

			/*
			 * When bytes is ARC_EVICT_ALL, the only way to
			 * break the loop is when scan_evicted is zero.
			 * In that case, we actually have evicted enough,
			 * so we don't want to increment the kstat.
			 */
			if (bytes != ARC_EVICT_ALL) {
				ASSERT3S(total_evicted, <, bytes);
				ARCSTAT_BUMP(arcstat_evict_not_enough);
			}

			break;
		}
	}

	for (i = 0; i < num_sublists; i++) {
		multilist_sublist_t *mls = multilist_sublist_lock(ml, i);
		multilist_sublist_remove(mls, markers[i]);
		multilist_sublist_unlock(mls);

		kmem_cache_free(hdr_full_cache, markers[i]);
	}
	kmem_free(markers, sizeof (*markers) * num_sublists);

	return (total_evicted);
}

/*
 * Flush all "evictable" data of the given type from the arc state
 * specified. This will not evict any "active" buffers (i.e. referenced).
 *
 * When 'retry' is set to FALSE, the function will make a single pass
 * over the state and evict any buffers that it can. Since it doesn't
 * continually retry the eviction, it might end up leaving some buffers
 * in the ARC due to lock misses.
 *
 * When 'retry' is set to TRUE, the function will continually retry the
 * eviction until *all* evictable buffers have been removed from the
 * state. As a result, if concurrent insertions into the state are
 * allowed (e.g. if the ARC isn't shutting down), this function might
 * wind up in an infinite loop, continually trying to evict buffers.
 */
static uint64_t
arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
    boolean_t retry)
{
	uint64_t evicted = 0;

	while (state->arcs_lsize[type] != 0) {
		evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type);

		if (!retry)
			break;
	}

	return (evicted);
}

/*
 * Helper function for arc_prune_async(); it is responsible for safely
 * handling the execution of a registered arc_prune_func_t.
 */
static void
arc_prune_task(void *ptr)
{
	arc_prune_t *ap = (arc_prune_t *)ptr;
	arc_prune_func_t *func = ap->p_pfunc;

	if (func != NULL)
		func(ap->p_adjust, ap->p_private);

	refcount_remove(&ap->p_refcnt, func);
}

/*
 * Notify registered consumers they must drop holds on a portion of the ARC
 * buffers they reference. This provides a mechanism to ensure the ARC can
 * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
 * is analogous to dnlc_reduce_cache() but more generic.
 *
 * This operation is performed asynchronously so it may be safely called
 * in the context of the arc_reclaim_thread(). A reference is taken here
 * for each registered arc_prune_t and the arc_prune_task() is responsible
 * for releasing it once the registered arc_prune_func_t has completed.
 */
static void
arc_prune_async(int64_t adjust)
{
	arc_prune_t *ap;

	mutex_enter(&arc_prune_mtx);
	for (ap = list_head(&arc_prune_list); ap != NULL;
	    ap = list_next(&arc_prune_list, ap)) {

		if (refcount_count(&ap->p_refcnt) >= 2)
			continue;

		refcount_add(&ap->p_refcnt, ap->p_pfunc);
		ap->p_adjust = adjust;
		taskq_dispatch(arc_prune_taskq, arc_prune_task, ap, TQ_SLEEP);
		ARCSTAT_BUMP(arcstat_prune);
	}
	mutex_exit(&arc_prune_mtx);
}
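
/*
 * Illustrative sketch (not part of the original source, and assuming
 * the arc_add_prune_callback()/arc_remove_prune_callback() registration
 * helpers declared elsewhere in this file): how a consumer plugs into
 * this mechanism. The names my_prune_func and my_private are
 * hypothetical.
 *
 *	static void
 *	my_prune_func(int64_t nr_to_scan, void *private)
 *	{
 *		// drop up to nr_to_scan holds on ARC meta buffers
 *	}
 *
 *	arc_prune_t *ap = arc_add_prune_callback(my_prune_func, my_private);
 *	...
 *	arc_remove_prune_callback(ap);
 */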

/*
 * Evict the specified number of bytes from the state specified,
 * restricting eviction to the spa and type given. This function
 * prevents us from trying to evict more from a state's list than
 * is "evictable", and to skip evicting altogether when passed a
 * negative value for "bytes". In contrast, arc_evict_state() will
 * evict everything it can, when passed a negative value for "bytes".
 */
static uint64_t
arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
	int64_t delta;

	if (bytes > 0 && state->arcs_lsize[type] > 0) {
		delta = MIN(state->arcs_lsize[type], bytes);
		return (arc_evict_state(state, spa, delta, type));
	}

	return (0);
}
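
/*
 * Worked example (not part of the original source): if
 * state->arcs_lsize[type] is 100MB and "bytes" is 150MB, delta is
 * clamped to the 100MB that is actually evictable before calling
 * arc_evict_state(); a zero or negative "bytes" skips eviction
 * entirely and returns 0.
 */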

/*
 * The goal of this function is to evict enough meta data buffers from the
 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly
 * more complicated than it appears because it is common for data buffers
 * to have holds on meta data buffers. In addition, dnode meta data buffers
 * will be held by the dnodes in the block preventing them from being freed.
 * This means we can't simply traverse the ARC and expect to always find
 * enough unheld meta data buffers to release.
 *
 * Therefore, this function has been updated to make alternating passes
 * over the ARC releasing data buffers and then newly unheld meta data
 * buffers. This ensures forward progress is maintained and arc_meta_used
 * will decrease. Normally this is sufficient, but if required the ARC
 * will call the registered prune callbacks causing dentries and inodes to
 * be dropped from the VFS cache. This will make dnode meta data buffers
 * available for reclaim.
 */
static uint64_t
arc_adjust_meta_balanced(void)
{
	int64_t adjustmnt, delta, prune = 0;
	uint64_t total_evicted = 0;
	arc_buf_contents_t type = ARC_BUFC_DATA;
	int restarts = MAX(zfs_arc_meta_adjust_restarts, 0);

restart:
	/*
	 * This slightly differs from the way we evict from the mru in
	 * arc_adjust because we don't have a "target" value (i.e. no
	 * "meta" arc_p). As a result, I think we can completely
	 * cannibalize the metadata in the MRU before we evict the
	 * metadata from the MFU. I think we probably need to implement a
	 * "metadata arc_p" value to do this properly.
	 */
	adjustmnt = arc_meta_used - arc_meta_limit;

	if (adjustmnt > 0 && arc_mru->arcs_lsize[type] > 0) {
		delta = MIN(arc_mru->arcs_lsize[type], adjustmnt);
		total_evicted += arc_adjust_impl(arc_mru, 0, delta, type);
		adjustmnt -= delta;
	}

	/*
	 * We can't afford to recalculate adjustmnt here. If we do,
	 * new metadata buffers can sneak into the MRU or ANON lists,
	 * thus penalize the MFU metadata. Although the fudge factor is
	 * small, it has been empirically shown to be significant for
	 * certain workloads (e.g. creating many empty directories). As
	 * such, we use the original calculation for adjustmnt, and
	 * simply decrement the amount of data evicted from the MRU.
	 */

	if (adjustmnt > 0 && arc_mfu->arcs_lsize[type] > 0) {
		delta = MIN(arc_mfu->arcs_lsize[type], adjustmnt);
		total_evicted += arc_adjust_impl(arc_mfu, 0, delta, type);
	}

	adjustmnt = arc_meta_used - arc_meta_limit;

	if (adjustmnt > 0 && arc_mru_ghost->arcs_lsize[type] > 0) {
		delta = MIN(adjustmnt,
		    arc_mru_ghost->arcs_lsize[type]);
		total_evicted += arc_adjust_impl(arc_mru_ghost, 0, delta, type);
		adjustmnt -= delta;
	}

	if (adjustmnt > 0 && arc_mfu_ghost->arcs_lsize[type] > 0) {
		delta = MIN(adjustmnt,
		    arc_mfu_ghost->arcs_lsize[type]);
		total_evicted += arc_adjust_impl(arc_mfu_ghost, 0, delta, type);
	}

	/*
	 * If after attempting to make the requested adjustment to the ARC
	 * the meta limit is still being exceeded then request that the
	 * higher layers drop some cached objects which have holds on ARC
	 * meta buffers. Requests to the upper layers will be made with
	 * increasingly large scan sizes until the ARC is below the limit.
	 */
	if (arc_meta_used > arc_meta_limit) {
		if (type == ARC_BUFC_DATA) {
			type = ARC_BUFC_METADATA;
		} else {
			type = ARC_BUFC_DATA;

			if (zfs_arc_meta_prune) {
				prune += zfs_arc_meta_prune;
				arc_prune_async(prune);
			}
		}

		if (restarts > 0) {
			restarts--;
			goto restart;
		}
	}
	return (total_evicted);
}

/*
 * Evict metadata buffers from the cache, such that arc_meta_used is
 * capped by the arc_meta_limit tunable.
 */
static uint64_t
arc_adjust_meta_only(void)
{
	uint64_t total_evicted = 0;
	int64_t target;

	/*
	 * If we're over the meta limit, we want to evict enough
	 * metadata to get back under the meta limit. We don't want to
	 * evict so much that we drop the MRU below arc_p, though. If
	 * we're over the meta limit more than we're over arc_p, we
	 * evict some from the MRU here, and some from the MFU below.
	 */
	target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
	    (int64_t)(refcount_count(&arc_anon->arcs_size) +
	    refcount_count(&arc_mru->arcs_size) - arc_p));

	total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);

	/*
	 * Similar to the above, we want to evict enough bytes to get us
	 * below the meta limit, but not so much as to drop us below the
	 * space allotted to the MFU (which is defined as arc_c - arc_p).
	 */
	target = MIN((int64_t)(arc_meta_used - arc_meta_limit),
	    (int64_t)(refcount_count(&arc_mfu->arcs_size) - (arc_c - arc_p)));

	total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);

	return (total_evicted);
}

static uint64_t
arc_adjust_meta(void)
{
	if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY)
		return (arc_adjust_meta_only());
	else
		return (arc_adjust_meta_balanced());
}

/*
 * Return the type of the oldest buffer in the given arc state
 *
 * This function will select a random sublist of type ARC_BUFC_DATA and
 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist
 * is compared, and the type which contains the "older" buffer will be
 * returned.
 */
static arc_buf_contents_t
arc_adjust_type(arc_state_t *state)
{
	multilist_t *data_ml = &state->arcs_list[ARC_BUFC_DATA];
	multilist_t *meta_ml = &state->arcs_list[ARC_BUFC_METADATA];
	int data_idx = multilist_get_random_index(data_ml);
	int meta_idx = multilist_get_random_index(meta_ml);
	multilist_sublist_t *data_mls;
	multilist_sublist_t *meta_mls;
	arc_buf_contents_t type;
	arc_buf_hdr_t *data_hdr;
	arc_buf_hdr_t *meta_hdr;

	/*
	 * We keep the sublist lock until we're finished, to prevent
	 * the headers from being destroyed via arc_evict_state().
	 */
	data_mls = multilist_sublist_lock(data_ml, data_idx);
	meta_mls = multilist_sublist_lock(meta_ml, meta_idx);

	/*
	 * These two loops are to ensure we skip any markers that
	 * might be at the tail of the lists due to arc_evict_state().
	 */

	for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL;
	    data_hdr = multilist_sublist_prev(data_mls, data_hdr)) {
		if (data_hdr->b_spa != 0)
			break;
	}

	for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL;
	    meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) {
		if (meta_hdr->b_spa != 0)
			break;
	}

	if (data_hdr == NULL && meta_hdr == NULL) {
		type = ARC_BUFC_DATA;
	} else if (data_hdr == NULL) {
		ASSERT3P(meta_hdr, !=, NULL);
		type = ARC_BUFC_METADATA;
	} else if (meta_hdr == NULL) {
		ASSERT3P(data_hdr, !=, NULL);
		type = ARC_BUFC_DATA;
	} else {
		ASSERT3P(data_hdr, !=, NULL);
		ASSERT3P(meta_hdr, !=, NULL);

		/* The headers can't be on the sublist without an L1 header */
		ASSERT(HDR_HAS_L1HDR(data_hdr));
		ASSERT(HDR_HAS_L1HDR(meta_hdr));

		if (data_hdr->b_l1hdr.b_arc_access <
		    meta_hdr->b_l1hdr.b_arc_access) {
			type = ARC_BUFC_DATA;
		} else {
			type = ARC_BUFC_METADATA;
		}
	}

	multilist_sublist_unlock(meta_mls);
	multilist_sublist_unlock(data_mls);

	return (type);
}

/*
 * Evict buffers from the cache, such that arc_size is capped by arc_c.
 */
static uint64_t
arc_adjust(void)
{
	uint64_t total_evicted = 0;
	uint64_t bytes;
	int64_t target;

	/*
	 * If we're over arc_meta_limit, we want to correct that before
	 * potentially evicting data buffers below.
	 */
	total_evicted += arc_adjust_meta();

	/*
	 * Adjust MRU size
	 *
	 * If we're over the target cache size, we want to evict enough
	 * from the list to get back to our target size. We don't want
	 * to evict too much from the MRU, such that it drops below
	 * arc_p. So, if we're over our target cache size more than
	 * the MRU is over arc_p, we'll evict enough to get back to
	 * arc_p here, and then evict more from the MFU below.
	 */
	target = MIN((int64_t)(arc_size - arc_c),
	    (int64_t)(refcount_count(&arc_anon->arcs_size) +
	    refcount_count(&arc_mru->arcs_size) + arc_meta_used - arc_p));

	/*
	 * If we're below arc_meta_min, always prefer to evict data.
	 * Otherwise, try to satisfy the requested number of bytes to
	 * evict from the type which contains older buffers; in an
	 * effort to keep newer buffers in the cache regardless of their
	 * type. If we cannot satisfy the number of bytes from this
	 * type, spill over into the next type.
	 */
	if (arc_adjust_type(arc_mru) == ARC_BUFC_METADATA &&
	    arc_meta_used > arc_meta_min) {
		bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * metadata, we try to get the rest from data.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
	} else {
		bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * data, we try to get the rest from metadata.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
	}

	/*
	 * Adjust MFU size
	 *
	 * Now that we've tried to evict enough from the MRU to get its
	 * size back to arc_p, if we're still above the target cache
	 * size, we evict the rest from the MFU.
	 */
	target = arc_size - arc_c;

	if (arc_adjust_type(arc_mfu) == ARC_BUFC_METADATA &&
	    arc_meta_used > arc_meta_min) {
		bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * metadata, we try to get the rest from data.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
	} else {
		bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * data, we try to get the rest from metadata.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
	}

	/*
	 * Adjust ghost lists
	 *
	 * In addition to the above, the ARC also defines target values
	 * for the ghost lists. The sum of the mru list and mru ghost
	 * list should never exceed the target size of the cache, and
	 * the sum of the mru list, mfu list, mru ghost list, and mfu
	 * ghost list should never exceed twice the target size of the
	 * cache. The following logic enforces these limits on the ghost
	 * caches, and evicts from them as needed.
	 */
	target = refcount_count(&arc_mru->arcs_size) +
	    refcount_count(&arc_mru_ghost->arcs_size) - arc_c;

	bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA);
	total_evicted += bytes;

	target -= bytes;

	total_evicted +=
	    arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA);

	/*
	 * We assume the sum of the mru list and mfu list is less than
	 * or equal to arc_c (we enforced this above), which means we
	 * can use the simpler of the two equations below:
	 *
	 *	mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
	 *	    mru ghost + mfu ghost <= arc_c
	 */
	target = refcount_count(&arc_mru_ghost->arcs_size) +
	    refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;

	bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA);
	total_evicted += bytes;

	target -= bytes;

	total_evicted +=
	    arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA);

	return (total_evicted);
}
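
/*
 * Worked example (not part of the original source) for the ghost list
 * adjustment above: with arc_c == 8GB, mru == 5GB, and mru ghost ==
 * 6GB, the first ghost target is 5 + 6 - 8 = 3GB, so up to 3GB is
 * evicted from the mru ghost list (data first, then metadata for the
 * remainder); the mfu ghost pass then enforces
 * mru ghost + mfu ghost <= arc_c the same way.
 */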

static void
arc_do_user_evicts(void)
{
	mutex_enter(&arc_user_evicts_lock);
	while (arc_eviction_list != NULL) {
		arc_buf_t *buf = arc_eviction_list;
		arc_eviction_list = buf->b_next;
		mutex_enter(&buf->b_evict_lock);
		buf->b_hdr = NULL;
		mutex_exit(&buf->b_evict_lock);
		mutex_exit(&arc_user_evicts_lock);

		if (buf->b_efunc != NULL)
			VERIFY0(buf->b_efunc(buf->b_private));

		buf->b_efunc = NULL;
		buf->b_private = NULL;
		kmem_cache_free(buf_cache, buf);
		mutex_enter(&arc_user_evicts_lock);
	}
	mutex_exit(&arc_user_evicts_lock);
}

void
arc_flush(spa_t *spa, boolean_t retry)
{
	uint64_t guid = 0;

	/*
	 * If retry is TRUE, a spa must not be specified since we have
	 * no good way to determine if all of a spa's buffers have been
	 * evicted from an arc state.
	 */
	ASSERT(!retry || spa == 0);

	if (spa != NULL)
		guid = spa_load_guid(spa);

	(void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);

	arc_do_user_evicts();
	ASSERT(spa || arc_eviction_list == NULL);
}
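
/*
 * Illustrative sketch (not part of the original source): the two
 * calling patterns permitted by the ASSERT above. A per-pool flush
 * must be best-effort, while a global flush may retry:
 *
 *	arc_flush(spa, B_FALSE);	// single pool, single pass
 *	arc_flush(NULL, B_TRUE);	// all pools, retry until empty
 */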

void
arc_shrink(int64_t to_free)
{
	uint64_t c = arc_c;

	if (c > to_free && c - to_free > arc_c_min) {
		arc_c = c - to_free;
		atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift));
		if (arc_c > arc_size)
			arc_c = MAX(arc_size, arc_c_min);
		if (arc_p > arc_c)
			arc_p = (arc_c >> 1);
		ASSERT(arc_c >= arc_c_min);
		ASSERT((int64_t)arc_p >= 0);
	} else {
		arc_c = arc_c_min;
	}

	if (arc_size > arc_c)
		(void) arc_adjust();
}
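
/*
 * Worked example (not part of the original source): with arc_c == 8GB,
 * arc_c_min == 2GB and to_free == 1GB, the guard (8GB > 1GB and
 * 8GB - 1GB > 2GB) holds, so arc_c becomes 7GB and arc_p is reduced by
 * arc_p >> arc_shrink_shift; a to_free of 7GB would fail the guard and
 * clamp arc_c straight to arc_c_min instead.
 */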

typedef enum free_memory_reason_t {
	FMR_UNKNOWN,
	FMR_NEEDFREE,
	FMR_LOTSFREE,
	FMR_SWAPFS_MINFREE,
	FMR_PAGES_PP_MAXIMUM,
	FMR_HEAP_ARENA,
	FMR_ZIO_ARENA,
} free_memory_reason_t;

int64_t last_free_memory;
free_memory_reason_t last_free_reason;

#ifdef _KERNEL
/*
 * Additional reserve of pages for pp_reserve.
 */
int64_t arc_pages_pp_reserve = 64;

/*
 * Additional reserve of pages for swapfs.
 */
int64_t arc_swapfs_reserve = 64;
#endif /* _KERNEL */

/*
 * Return the amount of memory that can be consumed before reclaim will be
 * needed. Positive if there is sufficient free memory, negative indicates
 * the amount of memory that needs to be freed up.
 */
static int64_t
arc_available_memory(void)
{
	int64_t lowest = INT64_MAX;
	free_memory_reason_t r = FMR_UNKNOWN;
#ifdef _KERNEL
	int64_t n;
#ifdef __linux__
	pgcnt_t needfree = btop(arc_need_free);
	pgcnt_t lotsfree = btop(arc_sys_free);
	pgcnt_t desfree = 0;
#endif

	if (needfree > 0) {
		n = PAGESIZE * (-needfree);
		if (n < lowest) {
			lowest = n;
			r = FMR_NEEDFREE;
		}
	}

	/*
	 * check that we're out of range of the pageout scanner. It starts to
	 * schedule paging if freemem is less than lotsfree and needfree.
	 * lotsfree is the high-water mark for pageout, and needfree is the
	 * number of needed free pages. We add extra pages here to make sure
	 * the scanner doesn't start up while we're freeing memory.
	 */
	n = PAGESIZE * (freemem - lotsfree - needfree - desfree);
	if (n < lowest) {
		lowest = n;
		r = FMR_LOTSFREE;
	}

#ifndef __linux__
	/*
	 * check to make sure that swapfs has enough space so that anon
	 * reservations can still succeed. anon_resvmem() checks that the
	 * availrmem is greater than swapfs_minfree, and the number of reserved
	 * swap pages. We also add a bit of extra here just to prevent
	 * circumstances from getting really dire.
	 */
	n = PAGESIZE * (availrmem - swapfs_minfree - swapfs_reserve -
	    desfree - arc_swapfs_reserve);
	if (n < lowest) {
		lowest = n;
		r = FMR_SWAPFS_MINFREE;
	}

	/*
	 * Check that we have enough availrmem that memory locking (e.g., via
	 * mlock(3C) or memcntl(2)) can still succeed. (pages_pp_maximum
	 * stores the number of pages that cannot be locked; when availrmem
	 * drops below pages_pp_maximum, page locking mechanisms such as
	 * page_pp_lock() will fail.)
	 */
	n = PAGESIZE * (availrmem - pages_pp_maximum -
	    arc_pages_pp_reserve);
	if (n < lowest) {
		lowest = n;
		r = FMR_PAGES_PP_MAXIMUM;
	}
#endif

#if defined(__i386)
	/*
	 * If we're on an i386 platform, it's possible that we'll exhaust the
	 * kernel heap space before we ever run out of available physical
	 * memory. Most checks of the size of the heap_area compare against
	 * tune.t_minarmem, which is the minimum available real memory that we
	 * can have in the system. However, this is generally fixed at 25 pages
	 * which is so low that it's useless. In this comparison, we seek to
	 * calculate the total heap-size, and reclaim if more than 3/4ths of the
	 * heap is allocated. (Or, in the calculation, if less than 1/4th is
	 * free)
	 */
	n = vmem_size(heap_arena, VMEM_FREE) -
	    (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2);
	if (n < lowest) {
		lowest = n;
		r = FMR_HEAP_ARENA;
	}
#endif

	/*
	 * If zio data pages are being allocated out of a separate heap segment,
	 * then enforce that the size of available vmem for this arena remains
	 * above about 1/16th free.
	 *
	 * Note: The 1/16th arena free requirement was put in place
	 * to aggressively evict memory from the arc in order to avoid
	 * memory fragmentation issues.
	 */
	if (zio_arena != NULL) {
		n = vmem_size(zio_arena, VMEM_FREE) -
		    (vmem_size(zio_arena, VMEM_ALLOC) >> 4);
		if (n < lowest) {
			lowest = n;
			r = FMR_ZIO_ARENA;
		}
	}
#else /* _KERNEL */
	/* Every 100 calls, free a small amount */
	if (spa_get_random(100) == 0)
		lowest = -1024;
#endif /* _KERNEL */

	last_free_memory = lowest;
	last_free_reason = r;

	return (lowest);
}

/*
 * Determine if the system is under memory pressure and is asking
 * to reclaim memory. A return value of TRUE indicates that the system
 * is under memory pressure and that the arc should adjust accordingly.
 */
static boolean_t
arc_reclaim_needed(void)
{
	return (arc_available_memory() < 0);
}
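
/*
 * Illustrative sketch (not part of the original source): how
 * arc_reclaim_thread() below consumes these results. A negative
 * arc_available_memory() value is both the trigger and the magnitude
 * for shrinking, while last_free_reason records which limit fired:
 *
 *	int64_t free_memory = arc_available_memory();
 *	if (free_memory < 0) {
 *		arc_kmem_reap_now();
 *		arc_shrink((arc_c >> arc_shrink_shift) -
 *		    arc_available_memory());
 *	}
 */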

static void
arc_kmem_reap_now(void)
{
	size_t i;
	kmem_cache_t *prev_cache = NULL;
	kmem_cache_t *prev_data_cache = NULL;
	extern kmem_cache_t *zio_buf_cache[];
	extern kmem_cache_t *zio_data_buf_cache[];
	extern kmem_cache_t *range_seg_cache;

	if ((arc_meta_used >= arc_meta_limit) && zfs_arc_meta_prune) {
		/*
		 * We are exceeding our meta-data cache limit.
		 * Prune some entries to release holds on meta-data.
		 */
		arc_prune_async(zfs_arc_meta_prune);
	}

	for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
		/* reach upper limit of cache size on 32-bit */
		if (zio_buf_cache[i] == NULL)
			break;
		if (zio_buf_cache[i] != prev_cache) {
			prev_cache = zio_buf_cache[i];
			kmem_cache_reap_now(zio_buf_cache[i]);
		}
		if (zio_data_buf_cache[i] != prev_data_cache) {
			prev_data_cache = zio_data_buf_cache[i];
			kmem_cache_reap_now(zio_data_buf_cache[i]);
		}
	}
	kmem_cache_reap_now(buf_cache);
	kmem_cache_reap_now(hdr_full_cache);
	kmem_cache_reap_now(hdr_l2only_cache);
	kmem_cache_reap_now(range_seg_cache);

	if (zio_arena != NULL) {
		/*
		 * Ask the vmem arena to reclaim unused memory from its
		 * quantum caches.
		 */
		vmem_qcache_reap(zio_arena);
	}
}

/*
 * Threads can block in arc_get_data_buf() waiting for this thread to evict
 * enough data and signal them to proceed. When this happens, the threads in
 * arc_get_data_buf() are sleeping while holding the hash lock for their
 * particular arc header. Thus, we must be careful to never sleep on a
 * hash lock in this thread. This is to prevent the following deadlock:
 *
 *  - Thread A sleeps on CV in arc_get_data_buf() holding hash lock "L",
 *    waiting for the reclaim thread to signal it.
 *
 *  - arc_reclaim_thread() tries to acquire hash lock "L" using mutex_enter,
 *    fails, and goes to sleep forever.
 *
 * This possible deadlock is avoided by always acquiring a hash lock
 * using mutex_tryenter() from arc_reclaim_thread().
 */
static void
arc_reclaim_thread(void)
{
	fstrans_cookie_t cookie = spl_fstrans_mark();
	hrtime_t growtime = 0;
	callb_cpr_t cpr;

	CALLB_CPR_INIT(&cpr, &arc_reclaim_lock, callb_generic_cpr, FTAG);

	mutex_enter(&arc_reclaim_lock);
	while (!arc_reclaim_thread_exit) {
		int64_t to_free;
		int64_t free_memory = arc_available_memory();
		uint64_t evicted = 0;

		arc_tuning_update();

		mutex_exit(&arc_reclaim_lock);

		if (free_memory < 0) {

			arc_no_grow = B_TRUE;
			arc_warm = B_TRUE;

			/*
			 * Wait at least zfs_grow_retry (default 5) seconds
			 * before considering growing.
			 */
			growtime = gethrtime() + SEC2NSEC(arc_grow_retry);

			arc_kmem_reap_now();

			/*
			 * If we are still low on memory, shrink the ARC
			 * so that we have arc_shrink_min free space.
			 */
			free_memory = arc_available_memory();

			to_free = (arc_c >> arc_shrink_shift) - free_memory;
			if (to_free > 0) {
				to_free = MAX(to_free, arc_need_free);
				arc_shrink(to_free);
			}
		} else if (free_memory < arc_c >> arc_no_grow_shift) {
			arc_no_grow = B_TRUE;
		} else if (gethrtime() >= growtime) {
			arc_no_grow = B_FALSE;
		}

		evicted = arc_adjust();

		mutex_enter(&arc_reclaim_lock);

		/*
		 * If evicted is zero, we couldn't evict anything via
		 * arc_adjust(). This could be due to hash lock
		 * collisions, but more likely due to the majority of
		 * arc buffers being unevictable. Therefore, even if
		 * arc_size is above arc_c, another pass is unlikely to
		 * be helpful and could potentially cause us to enter an
		 * infinite loop.
		 */
		if (arc_size <= arc_c || evicted == 0) {
			/*
			 * We're either no longer overflowing, or we
			 * can't evict anything more, so we should wake
			 * up any threads before we go to sleep and clear
			 * arc_need_free since nothing more can be done.
			 */
			cv_broadcast(&arc_reclaim_waiters_cv);
			arc_need_free = 0;

			/*
			 * Block until signaled, or after one second (we
			 * might need to perform arc_kmem_reap_now()
			 * even if we aren't being signalled)
			 */
			CALLB_CPR_SAFE_BEGIN(&cpr);
			(void) cv_timedwait_sig_hires(&arc_reclaim_thread_cv,
			    &arc_reclaim_lock, SEC2NSEC(1), MSEC2NSEC(1), 0);
			CALLB_CPR_SAFE_END(&cpr, &arc_reclaim_lock);
		}
	}

	arc_reclaim_thread_exit = FALSE;
	cv_broadcast(&arc_reclaim_thread_cv);
	CALLB_CPR_EXIT(&cpr);	/* drops arc_reclaim_lock */
	spl_fstrans_unmark(cookie);
	thread_exit();
}

static void
arc_user_evicts_thread(void)
{
	fstrans_cookie_t cookie = spl_fstrans_mark();
	callb_cpr_t cpr;

	CALLB_CPR_INIT(&cpr, &arc_user_evicts_lock, callb_generic_cpr, FTAG);

	mutex_enter(&arc_user_evicts_lock);
	while (!arc_user_evicts_thread_exit) {
		mutex_exit(&arc_user_evicts_lock);

		arc_do_user_evicts();

		/*
		 * This is necessary in order for the mdb ::arc dcmd to
		 * show up to date information. Since the ::arc command
		 * does not call the kstat's update function, without
		 * this call, the command may show stale stats for the
		 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
		 * with this change, the data might be up to 1 second
		 * out of date; but that should suffice. The arc_state_t
		 * structures can be queried directly if more accurate
		 * information is needed.
		 */
		if (arc_ksp != NULL)
			arc_ksp->ks_update(arc_ksp, KSTAT_READ);

		mutex_enter(&arc_user_evicts_lock);

		/*
		 * Block until signaled, or after one second (we need to
		 * call the arc's kstat update function regularly).
		 */
		CALLB_CPR_SAFE_BEGIN(&cpr);
		(void) cv_timedwait_sig(&arc_user_evicts_cv,
		    &arc_user_evicts_lock, ddi_get_lbolt() + hz);
		CALLB_CPR_SAFE_END(&cpr, &arc_user_evicts_lock);
	}

	arc_user_evicts_thread_exit = FALSE;
	cv_broadcast(&arc_user_evicts_cv);
	CALLB_CPR_EXIT(&cpr);	/* drops arc_user_evicts_lock */
	spl_fstrans_unmark(cookie);
	thread_exit();
}

#ifdef _KERNEL
/*
 * Determine the amount of memory eligible for eviction contained in the
 * ARC. All clean data reported by the ghost lists can always be safely
 * evicted. Due to arc_c_min, the same does not hold for all clean data
 * contained by the regular mru and mfu lists.
 *
 * In the case of the regular mru and mfu lists, we need to report as
 * much clean data as possible, such that evicting that same reported
 * data will not bring arc_size below arc_c_min. Thus, in certain
 * circumstances, the total amount of clean data in the mru and mfu
 * lists might not actually be evictable.
 *
 * The following two distinct cases are accounted for:
 *
 * 1. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is greater than or equal to arc_c_min.
 *    (i.e. amount of dirty data >= arc_c_min)
 *
 *    This is the easy case; all clean data contained by the mru and mfu
 *    lists is evictable. Evicting all clean data can only drop arc_size
 *    to the amount of dirty data, which is greater than arc_c_min.
 *
 * 2. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is less than arc_c_min.
 *    (i.e. arc_c_min > amount of dirty data)
 *
 *    2.1. arc_size is greater than or equal arc_c_min.
 *         (i.e. arc_size >= arc_c_min > amount of dirty data)
 *
 *         In this case, not all clean data from the regular mru and mfu
 *         lists is actually evictable; we must leave enough clean data
 *         to keep arc_size above arc_c_min. Thus, the maximum amount of
 *         evictable data from the two lists combined, is exactly the
 *         difference between arc_size and arc_c_min.
 *
 *    2.2. arc_size is less than arc_c_min
 *         (i.e. arc_c_min > arc_size > amount of dirty data)
 *
 *         In this case, none of the data contained in the mru and mfu
 *         lists is evictable, even if it's clean. Since arc_size is
 *         already below arc_c_min, evicting any more would only
 *         increase this negative difference.
 */
static uint64_t
arc_evictable_memory(void)
{
	uint64_t arc_clean =
	    arc_mru->arcs_lsize[ARC_BUFC_DATA] +
	    arc_mru->arcs_lsize[ARC_BUFC_METADATA] +
	    arc_mfu->arcs_lsize[ARC_BUFC_DATA] +
	    arc_mfu->arcs_lsize[ARC_BUFC_METADATA];
	uint64_t ghost_clean =
	    arc_mru_ghost->arcs_lsize[ARC_BUFC_DATA] +
	    arc_mru_ghost->arcs_lsize[ARC_BUFC_METADATA] +
	    arc_mfu_ghost->arcs_lsize[ARC_BUFC_DATA] +
	    arc_mfu_ghost->arcs_lsize[ARC_BUFC_METADATA];
	uint64_t arc_dirty = MAX((int64_t)arc_size - (int64_t)arc_clean, 0);

	if (arc_dirty >= arc_c_min)
		return (ghost_clean + arc_clean);

	return (ghost_clean + MAX((int64_t)arc_size - (int64_t)arc_c_min, 0));
}
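
/*
 * Worked example (not part of the original source) for case 2.1 above:
 * with arc_size == 6GB, arc_clean == 5GB and arc_c_min == 2GB, the
 * dirty amount is 1GB, which is below arc_c_min, so only
 * arc_size - arc_c_min == 4GB of the clean mru/mfu data is reported,
 * plus all of ghost_clean.
 */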

/*
 * If sc->nr_to_scan is zero, the caller is requesting a query of the
 * number of objects which can potentially be freed. If it is nonzero,
 * the request is to free that many objects.
 *
 * Linux kernels >= 3.12 have the count_objects and scan_objects callbacks
 * in struct shrinker and also require the shrinker to return the number
 * of objects freed.
 *
 * Older kernels require the shrinker to return the number of freeable
 * objects following the freeing of nr_to_free.
 */
static spl_shrinker_t
__arc_shrinker_func(struct shrinker *shrink, struct shrink_control *sc)
{
	int64_t pages;

	/* The arc is considered warm once reclaim has occurred */
	if (unlikely(arc_warm == B_FALSE))
		arc_kmem_reap_now();

	/* Return the potential number of reclaimable pages */
	pages = btop((int64_t)arc_evictable_memory());
	if (sc->nr_to_scan == 0)
		return (pages);

	/* Not allowed to perform filesystem reclaim */
	if (!(sc->gfp_mask & __GFP_FS))
		return (SHRINK_STOP);

	/* Reclaim in progress */
	if (mutex_tryenter(&arc_reclaim_lock) == 0)
		return (SHRINK_STOP);

	mutex_exit(&arc_reclaim_lock);

	/*
	 * Evict the requested number of pages by shrinking arc_c the
	 * requested amount. If there is nothing left to evict just
	 * reap whatever we can from the various arc slabs.
	 */
	if (pages > 0) {
		arc_shrink(ptob(sc->nr_to_scan));
		arc_kmem_reap_now();
#ifdef HAVE_SPLIT_SHRINKER_CALLBACK
		pages = MAX(pages - btop(arc_evictable_memory()), 0);
#else
		pages = btop(arc_evictable_memory());
#endif
	} else {
		arc_kmem_reap_now();
		pages = SHRINK_STOP;
	}

	/*
	 * We've reaped what we can, wake up threads.
	 */
	cv_broadcast(&arc_reclaim_waiters_cv);

	/*
	 * When direct reclaim is observed it usually indicates a rapid
	 * increase in memory pressure. This occurs because the kswapd
	 * threads were unable to asynchronously keep enough free memory
	 * available. In this case set arc_no_grow to briefly pause arc
	 * growth to avoid compounding the memory pressure.
	 */
	if (current_is_kswapd()) {
		ARCSTAT_BUMP(arcstat_memory_indirect_count);
	} else {
		arc_no_grow = B_TRUE;
		arc_need_free = ptob(sc->nr_to_scan);
		ARCSTAT_BUMP(arcstat_memory_direct_count);
	}

	return (pages);
}

SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func);

SPL_SHRINKER_DECLARE(arc_shrinker, arc_shrinker_func, DEFAULT_SEEKS);
#endif /* _KERNEL */
/*
 * Adapt arc info given the number of bytes we are trying to add and
 * the state that we are coming from.  This function is only called
 * when we are adding new content to the cache.
 */
static void
arc_adapt(int bytes, arc_state_t *state)
{
	int mult;
	uint64_t arc_p_min = (arc_c >> arc_p_min_shift);
	int64_t mrug_size = refcount_count(&arc_mru_ghost->arcs_size);
	int64_t mfug_size = refcount_count(&arc_mfu_ghost->arcs_size);

	if (state == arc_l2c_only)
		return;

	/*
	 * Adapt the target size of the MRU list:
	 *	- if we just hit in the MRU ghost list, then increase
	 *	  the target size of the MRU list.
	 *	- if we just hit in the MFU ghost list, then increase
	 *	  the target size of the MFU list by decreasing the
	 *	  target size of the MRU list.
	 */
	if (state == arc_mru_ghost) {
		mult = ((mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size));
		if (!zfs_arc_p_dampener_disable)
			mult = MIN(mult, 10); /* avoid wild arc_p adjustment */

		arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult);
	} else if (state == arc_mfu_ghost) {
		uint64_t delta;

		mult = ((mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size));
		if (!zfs_arc_p_dampener_disable)
			mult = MIN(mult, 10);

		delta = MIN(bytes * mult, arc_p);
		arc_p = MAX(arc_p_min, arc_p - delta);
	}
	ASSERT((int64_t)arc_p >= 0);

	if (arc_reclaim_needed()) {
		cv_signal(&arc_reclaim_thread_cv);
		return;
	}

	if (arc_c >= arc_c_max)
		return;

	/*
	 * If we're within (2 * maxblocksize) bytes of the target
	 * cache size, increment the target cache size
	 */
	ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT);
	if (arc_size >= arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) {
		atomic_add_64(&arc_c, (int64_t)bytes);
		if (arc_c > arc_c_max)
			arc_c = arc_c_max;
		else if (state == arc_anon)
			atomic_add_64(&arc_p, (int64_t)bytes);
		if (arc_p > arc_c)
			arc_p = arc_c;
	}
	ASSERT((int64_t)arc_p >= 0);
}
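/*
 * Worked example (illustrative): with arc_c = 8 GiB and arc_p_min_shift = 4,
 * arc_p_min = arc_c >> 4 = 512 MiB.  A 128K hit in the MRU ghost list while
 * mfug_size / mrug_size = 3 gives mult = 3, so arc_p grows by 3 * 128K = 384K
 * (capped at arc_c - arc_p_min).  A hit in the MFU ghost list shrinks arc_p
 * by the same dampened amount, but never below arc_p_min.
 */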
/*
 * Check if arc_size has grown past our upper threshold, determined by
 * zfs_arc_overflow_shift.
 */
static boolean_t
arc_is_overflowing(void)
{
	/* Always allow at least one block of overflow */
	uint64_t overflow = MAX(SPA_MAXBLOCKSIZE,
	    arc_c >> zfs_arc_overflow_shift);

	return (arc_size >= arc_c + overflow);
}
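/*
 * Example (illustrative): with arc_c = 4 GiB and zfs_arc_overflow_shift = 8,
 * overflow = MAX(SPA_MAXBLOCKSIZE, 4 GiB >> 8) = MAX(16 MiB, 16 MiB) = 16 MiB,
 * so the ARC is considered overflowing once arc_size reaches 4 GiB + 16 MiB.
 */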
/*
 * The buffer, supplied as the first argument, needs a data block. If we
 * are hitting the hard limit for the cache size, we must sleep, waiting
 * for the eviction thread to catch up. If we're past the target size
 * but below the hard limit, we'll only signal the reclaim thread and
 * continue on.
 */
static void
arc_get_data_buf(arc_buf_t *buf)
{
	arc_state_t		*state = buf->b_hdr->b_l1hdr.b_state;
	uint64_t		size = buf->b_hdr->b_size;
	arc_buf_contents_t	type = arc_buf_type(buf->b_hdr);

	arc_adapt(size, state);

	/*
	 * If arc_size is currently overflowing, and has grown past our
	 * upper limit, we must be adding data faster than the evict
	 * thread can evict. Thus, to ensure we don't compound the
	 * problem by adding more data and forcing arc_size to grow even
	 * further past its target size, we halt and wait for the
	 * eviction thread to catch up.
	 *
	 * It's also possible that the reclaim thread is unable to evict
	 * enough buffers to get arc_size below the overflow limit (e.g.
	 * due to buffers being un-evictable, or hash lock collisions).
	 * In this case, we want to proceed regardless if we're
	 * overflowing; thus we don't use a while loop here.
	 */
	if (arc_is_overflowing()) {
		mutex_enter(&arc_reclaim_lock);

		/*
		 * Now that we've acquired the lock, we may no longer be
		 * over the overflow limit, let's check.
		 *
		 * We're ignoring the case of spurious wake ups. If that
		 * were to happen, it'd let this thread consume an ARC
		 * buffer before it should have (i.e. before we're under
		 * the overflow limit and were signalled by the reclaim
		 * thread). As long as that is a rare occurrence, it
		 * shouldn't cause any harm.
		 */
		if (arc_is_overflowing()) {
			cv_signal(&arc_reclaim_thread_cv);
			cv_wait(&arc_reclaim_waiters_cv, &arc_reclaim_lock);
		}

		mutex_exit(&arc_reclaim_lock);
	}

	if (type == ARC_BUFC_METADATA) {
		buf->b_data = zio_buf_alloc(size);
		arc_space_consume(size, ARC_SPACE_META);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		buf->b_data = zio_data_buf_alloc(size);
		arc_space_consume(size, ARC_SPACE_DATA);
	}

	/*
	 * Update the state size. Note that ghost states have a
	 * "ghost size" and so don't need to be updated.
	 */
	if (!GHOST_STATE(buf->b_hdr->b_l1hdr.b_state)) {
		arc_buf_hdr_t *hdr = buf->b_hdr;
		arc_state_t *state = hdr->b_l1hdr.b_state;

		(void) refcount_add_many(&state->arcs_size, size, buf);

		/*
		 * If this is reached via arc_read, the link is
		 * protected by the hash lock. If reached via
		 * arc_buf_alloc, the header should not be accessed by
		 * any other thread. And, if reached via arc_read_done,
		 * the hash lock will protect it if it's found in the
		 * hash table; otherwise no other thread should be
		 * trying to [add|remove]_reference it.
		 */
		if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
			ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
			atomic_add_64(&hdr->b_l1hdr.b_state->arcs_lsize[type],
			    size);
		}
		/*
		 * If we are growing the cache, and we are adding anonymous
		 * data, and we have outgrown arc_p, update arc_p
		 */
		if (arc_size < arc_c && hdr->b_l1hdr.b_state == arc_anon &&
		    (refcount_count(&arc_anon->arcs_size) +
		    refcount_count(&arc_mru->arcs_size) > arc_p))
			arc_p = MIN(arc_c, arc_p + size);
	}
}
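/*
 * The wait above pairs with the reclaim thread; in a sketch of the
 * protocol (not a verbatim copy of arc_reclaim_thread()), the other side
 * looks roughly like:
 *
 *	mutex_enter(&arc_reclaim_lock);
 *	... evict until !arc_is_overflowing() ...
 *	cv_broadcast(&arc_reclaim_waiters_cv);	// release blocked allocators
 *	mutex_exit(&arc_reclaim_lock);
 *
 * The cv_signal(&arc_reclaim_thread_cv) above nudges that thread ahead of
 * its normal timeout.
 */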
/*
 * This routine is called whenever a buffer is accessed.
 * NOTE: the hash lock is dropped in this function.
 */
static void
arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
	clock_t now;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT(HDR_HAS_L1HDR(hdr));

	if (hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * This buffer is not in the cache, and does not
		 * appear in our "ghost" list.  Add the new buffer
		 * to the MRU state.
		 */

		ASSERT0(hdr->b_l1hdr.b_arc_access);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mru, hdr, hash_lock);

	} else if (hdr->b_l1hdr.b_state == arc_mru) {
		now = ddi_get_lbolt();

		/*
		 * If this buffer is here because of a prefetch, then either:
		 * - clear the flag if this is a "referencing" read
		 *   (any subsequent access will bump this into the MFU state).
		 * or
		 * - move the buffer to the head of the list if this is
		 *   another prefetch (to make it less likely to be evicted).
		 */
		if (HDR_PREFETCH(hdr)) {
			if (refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
				/* link protected by hash lock */
				ASSERT(multilist_link_active(
				    &hdr->b_l1hdr.b_arc_node));
			} else {
				hdr->b_flags &= ~ARC_FLAG_PREFETCH;
				atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
				ARCSTAT_BUMP(arcstat_mru_hits);
			}
			hdr->b_l1hdr.b_arc_access = now;
			return;
		}

		/*
		 * This buffer has been "accessed" only once so far,
		 * but it is still in the cache. Move it to the MFU
		 * state if it is getting accessed more than once.
		 */
		if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access +
		    ARC_MINTIME)) {
			/*
			 * More than 125ms have passed since we
			 * instantiated this buffer. Move it to the
			 * most frequently used state.
			 */
			hdr->b_l1hdr.b_arc_access = now;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
			arc_change_state(arc_mfu, hdr, hash_lock);
		}
		atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
		ARCSTAT_BUMP(arcstat_mru_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mru_ghost) {
		arc_state_t	*new_state;
		/*
		 * This buffer has been "accessed" recently, but
		 * was evicted from the cache.  Move it to the
		 * MFU state.
		 */

		if (HDR_PREFETCH(hdr)) {
			new_state = arc_mru;
			if (refcount_count(&hdr->b_l1hdr.b_refcnt) > 0)
				hdr->b_flags &= ~ARC_FLAG_PREFETCH;
			DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		} else {
			new_state = arc_mfu;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits);
		ARCSTAT_BUMP(arcstat_mru_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mfu) {
		/*
		 * This buffer has been accessed more than once and is
		 * still in the cache.  Keep it in the MFU state.
		 *
		 * NOTE: an add_reference() that occurred when we did
		 * the arc_read() will have kicked this off the list.
		 * If it was a prefetch, we will explicitly move it to
		 * the head of the list now.
		 */
		if ((HDR_PREFETCH(hdr)) != 0) {
			ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
			/* link protected by hash_lock */
			ASSERT(multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		}
		atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits);
		ARCSTAT_BUMP(arcstat_mfu_hits);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
	} else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) {
		arc_state_t	*new_state = arc_mfu;
		/*
		 * This buffer has been accessed more than once but has
		 * been evicted from the cache.  Move it back to the
		 * MFU state.
		 */

		if (HDR_PREFETCH(hdr)) {
			/*
			 * This is a prefetch access...
			 * move this block back to the MRU state.
			 */
			ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
			new_state = arc_mru;
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits);
		ARCSTAT_BUMP(arcstat_mfu_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_l2c_only) {
		/*
		 * This buffer is on the 2nd Level ARC.
		 */

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mfu, hdr, hash_lock);
	} else {
		cmn_err(CE_PANIC, "invalid arc state 0x%p",
		    hdr->b_l1hdr.b_state);
	}
}
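/*
 * Summary of the transitions performed above (derived from the branches
 * in arc_access(); see each branch for the prefetch special cases):
 *
 *	anon      -> mru	first access, buffer enters the cache
 *	mru       -> mfu	re-accessed more than ARC_MINTIME later
 *	mru_ghost -> mfu	recently evicted buffer re-read (mru if prefetch)
 *	mfu       -> mfu	stays in the frequently used state
 *	mfu_ghost -> mfu	frequently used buffer re-read (mru if prefetch)
 *	l2c_only  -> mfu	L2-only header regains an L1 presence
 */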
/* a generic arc_done_func_t which you can use */
void
arc_bcopy_func(zio_t *zio, arc_buf_t *buf, void *arg)
{
	if (zio == NULL || zio->io_error == 0)
		bcopy(buf->b_data, arg, buf->b_hdr->b_size);
	VERIFY(arc_buf_remove_ref(buf, arg));
}

/* a generic arc_done_func_t */
void
arc_getbuf_func(zio_t *zio, arc_buf_t *buf, void *arg)
{
	arc_buf_t **bufp = arg;
	if (zio && zio->io_error) {
		VERIFY(arc_buf_remove_ref(buf, arg));
		*bufp = NULL;
	} else {
		*bufp = buf;
		ASSERT(buf->b_data);
	}
}
static void
arc_read_done(zio_t *zio)
{
	arc_buf_hdr_t	*hdr;
	arc_buf_t	*buf;
	arc_buf_t	*abuf;	/* buffer we're assigning to callback */
	kmutex_t	*hash_lock = NULL;
	arc_callback_t	*callback_list, *acb;
	int		freeable = FALSE;

	buf = zio->io_private;
	hdr = buf->b_hdr;

	/*
	 * The hdr was inserted into hash-table and removed from lists
	 * prior to starting I/O. We should find this header, since
	 * it's in the hash table, and it should be legit since it's
	 * not possible to evict it during the I/O. The only possible
	 * reason for it not to be found is if we were freed during the
	 * read.
	 */
	if (HDR_IN_HASH_TABLE(hdr)) {
		arc_buf_hdr_t *found;

		ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp));
		ASSERT3U(hdr->b_dva.dva_word[0], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[0]);
		ASSERT3U(hdr->b_dva.dva_word[1], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[1]);

		found = buf_hash_find(hdr->b_spa, zio->io_bp,
		    &hash_lock);

		ASSERT((found == NULL && HDR_FREED_IN_READ(hdr) &&
		    hash_lock == NULL) ||
		    (found == hdr &&
		    DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) ||
		    (found == hdr && HDR_L2_READING(hdr)));
	}

	hdr->b_flags &= ~ARC_FLAG_L2_EVICTED;
	if (l2arc_noprefetch && HDR_PREFETCH(hdr))
		hdr->b_flags &= ~ARC_FLAG_L2CACHE;

	/* byteswap if necessary */
	callback_list = hdr->b_l1hdr.b_acb;
	ASSERT(callback_list != NULL);
	if (BP_SHOULD_BYTESWAP(zio->io_bp) && zio->io_error == 0) {
		dmu_object_byteswap_t bswap =
		    DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp));
		if (BP_GET_LEVEL(zio->io_bp) > 0)
			byteswap_uint64_array(buf->b_data, hdr->b_size);
		else
			dmu_ot_byteswap[bswap].ob_func(buf->b_data, hdr->b_size);
	}

	arc_cksum_compute(buf, B_FALSE);

	if (hash_lock && zio->io_error == 0 &&
	    hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * Only call arc_access on anonymous buffers. This is because
		 * if we've issued an I/O for an evicted buffer, we've already
		 * called arc_access (to prevent any simultaneous readers from
		 * getting confused).
		 */
		arc_access(hdr, hash_lock);
	}

	/* create copies of the data buffer for the callers */
	abuf = buf;
	for (acb = callback_list; acb; acb = acb->acb_next) {
		if (acb->acb_done) {
			if (abuf == NULL) {
				ARCSTAT_BUMP(arcstat_duplicate_reads);
				abuf = arc_buf_clone(buf);
			}
			acb->acb_buf = abuf;
			abuf = NULL;
		}
	}
	hdr->b_l1hdr.b_acb = NULL;
	hdr->b_flags &= ~ARC_FLAG_IO_IN_PROGRESS;
	ASSERT(!HDR_BUF_AVAILABLE(hdr));
	if (abuf == buf) {
		ASSERT(buf->b_efunc == NULL);
		ASSERT(hdr->b_l1hdr.b_datacnt == 1);
		hdr->b_flags |= ARC_FLAG_BUF_AVAILABLE;
	}

	ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
	    callback_list != NULL);

	if (zio->io_error != 0) {
		hdr->b_flags |= ARC_FLAG_IO_ERROR;
		if (hdr->b_l1hdr.b_state != arc_anon)
			arc_change_state(arc_anon, hdr, hash_lock);
		if (HDR_IN_HASH_TABLE(hdr))
			buf_hash_remove(hdr);
		freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/*
	 * Broadcast before we drop the hash_lock to avoid the possibility
	 * that the hdr (and hence the cv) might be freed before we get to
	 * the cv_broadcast().
	 */
	cv_broadcast(&hdr->b_l1hdr.b_cv);

	if (hash_lock != NULL) {
		mutex_exit(hash_lock);
	} else {
		/*
		 * This block was freed while we waited for the read to
		 * complete. It has been removed from the hash table and
		 * moved to the anonymous state (so that it won't show up
		 * in the cache).
		 */
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
		freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/* execute each callback and free its structure */
	while ((acb = callback_list) != NULL) {
		if (acb->acb_done)
			acb->acb_done(zio, acb->acb_buf, acb->acb_private);

		if (acb->acb_zio_dummy != NULL) {
			acb->acb_zio_dummy->io_error = zio->io_error;
			zio_nowait(acb->acb_zio_dummy);
		}

		callback_list = acb->acb_next;
		kmem_free(acb, sizeof (arc_callback_t));
	}

	if (freeable)
		arc_hdr_destroy(hdr);
}
4243 * "Read" the block at the specified DVA (in bp) via the
4244 * cache. If the block is found in the cache, invoke the provided
4245 * callback immediately and return. Note that the `zio' parameter
4246 * in the callback will be NULL in this case, since no IO was
4247 * required. If the block is not in the cache pass the read request
4248 * on to the spa with a substitute callback function, so that the
4249 * requested block will be added to the cache.
4251 * If a read request arrives for a block that has a read in-progress,
4252 * either wait for the in-progress read to complete (and return the
4253 * results); or, if this is a read with a "done" func, add a record
4254 * to the read to invoke the "done" func when the read completes,
4255 * and return; or just return.
4257 * arc_read_done() will invoke all the requested "done" functions
4258 * for readers of this block.
int
arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, arc_done_func_t *done,
    void *private, zio_priority_t priority, int zio_flags,
    arc_flags_t *arc_flags, const zbookmark_phys_t *zb)
{
	arc_buf_hdr_t *hdr = NULL;
	arc_buf_t *buf = NULL;
	kmutex_t *hash_lock = NULL;
	zio_t *rzio;
	uint64_t guid = spa_load_guid(spa);
	int rc = 0;

	ASSERT(!BP_IS_EMBEDDED(bp) ||
	    BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA);

top:
	if (!BP_IS_EMBEDDED(bp)) {
		/*
		 * Embedded BP's have no DVA and require no I/O to "read".
		 * Create an anonymous arc buf to back it.
		 */
		hdr = buf_hash_find(guid, bp, &hash_lock);
	}

	if (hdr != NULL && HDR_HAS_L1HDR(hdr) && hdr->b_l1hdr.b_datacnt > 0) {

		*arc_flags |= ARC_FLAG_CACHED;

		if (HDR_IO_IN_PROGRESS(hdr)) {

			if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) &&
			    priority == ZIO_PRIORITY_SYNC_READ) {
				/*
				 * This sync read must wait for an
				 * in-progress async read (e.g. a predictive
				 * prefetch).  Async reads are queued
				 * separately at the vdev_queue layer, so
				 * this is a form of priority inversion.
				 * Ideally, we would "inherit" the demand
				 * i/o's priority by moving the i/o from
				 * the async queue to the synchronous queue,
				 * but there is currently no mechanism to do
				 * so.  Track this so that we can evaluate
				 * the magnitude of this potential performance
				 * problem.
				 *
				 * Note that if the prefetch i/o is already
				 * active (has been issued to the device),
				 * the prefetch improved performance, because
				 * we issued it sooner than we would have
				 * without the prefetch.
				 */
				DTRACE_PROBE1(arc__sync__wait__for__async,
				    arc_buf_hdr_t *, hdr);
				ARCSTAT_BUMP(arcstat_sync_wait_for_async);
			}
			if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) {
				hdr->b_flags &= ~ARC_FLAG_PREDICTIVE_PREFETCH;
			}

			if (*arc_flags & ARC_FLAG_WAIT) {
				cv_wait(&hdr->b_l1hdr.b_cv, hash_lock);
				mutex_exit(hash_lock);
				goto top;
			}
			ASSERT(*arc_flags & ARC_FLAG_NOWAIT);

			if (done) {
				arc_callback_t *acb = NULL;

				acb = kmem_zalloc(sizeof (arc_callback_t),
				    KM_SLEEP);
				acb->acb_done = done;
				acb->acb_private = private;
				if (pio != NULL)
					acb->acb_zio_dummy = zio_null(pio,
					    spa, NULL, NULL, NULL, zio_flags);

				ASSERT(acb->acb_done != NULL);
				acb->acb_next = hdr->b_l1hdr.b_acb;
				hdr->b_l1hdr.b_acb = acb;
				add_reference(hdr, hash_lock, private);
				mutex_exit(hash_lock);
				goto out;
			}
			mutex_exit(hash_lock);
			goto out;
		}

		ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
		    hdr->b_l1hdr.b_state == arc_mfu);

		if (done) {
			if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) {
				/*
				 * This is a demand read which does not have to
				 * wait for i/o because we did a predictive
				 * prefetch i/o for it, which has completed.
				 */
				DTRACE_PROBE1(
				    arc__demand__hit__predictive__prefetch,
				    arc_buf_hdr_t *, hdr);
				ARCSTAT_BUMP(
				    arcstat_demand_hit_predictive_prefetch);
				hdr->b_flags &= ~ARC_FLAG_PREDICTIVE_PREFETCH;
			}
			add_reference(hdr, hash_lock, private);
			/*
			 * If this block is already in use, create a new
			 * copy of the data so that we will be guaranteed
			 * that arc_release() will always succeed.
			 */
			buf = hdr->b_l1hdr.b_buf;
			ASSERT(buf);
			ASSERT(buf->b_data);
			if (HDR_BUF_AVAILABLE(hdr)) {
				ASSERT(buf->b_efunc == NULL);
				hdr->b_flags &= ~ARC_FLAG_BUF_AVAILABLE;
			} else {
				buf = arc_buf_clone(buf);
			}

		} else if (*arc_flags & ARC_FLAG_PREFETCH &&
		    refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
			hdr->b_flags |= ARC_FLAG_PREFETCH;
		}
		DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
		arc_access(hdr, hash_lock);
		if (*arc_flags & ARC_FLAG_L2CACHE)
			hdr->b_flags |= ARC_FLAG_L2CACHE;
		if (*arc_flags & ARC_FLAG_L2COMPRESS)
			hdr->b_flags |= ARC_FLAG_L2COMPRESS;
		mutex_exit(hash_lock);
		ARCSTAT_BUMP(arcstat_hits);
		ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
		    demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
		    data, metadata, hits);

		if (done)
			done(NULL, buf, private);
	} else {
		uint64_t size = BP_GET_LSIZE(bp);
		arc_callback_t *acb;
		vdev_t *vd = NULL;
		uint64_t addr = 0;
		boolean_t devw = B_FALSE;
		enum zio_compress b_compress = ZIO_COMPRESS_OFF;
		int32_t b_asize = 0;

		/*
		 * Gracefully handle a damaged logical block size as a
		 * checksum error.
		 */
		if (size > spa_maxblocksize(spa)) {
			ASSERT3P(buf, ==, NULL);
			rc = SET_ERROR(ECKSUM);
			goto out;
		}

		if (hdr == NULL) {
			/* this block is not in the cache */
			arc_buf_hdr_t *exists = NULL;
			arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp);
			buf = arc_buf_alloc(spa, size, private, type);
			hdr = buf->b_hdr;
			if (!BP_IS_EMBEDDED(bp)) {
				hdr->b_dva = *BP_IDENTITY(bp);
				hdr->b_birth = BP_PHYSICAL_BIRTH(bp);
				exists = buf_hash_insert(hdr, &hash_lock);
			}
			if (exists != NULL) {
				/* somebody beat us to the hash insert */
				mutex_exit(hash_lock);
				buf_discard_identity(hdr);
				(void) arc_buf_remove_ref(buf, private);
				goto top; /* restart the IO request */
			}

			/*
			 * If there is a callback, we pass our reference to
			 * it; otherwise we remove our reference.
			 */
			if (done == NULL) {
				(void) remove_reference(hdr, hash_lock,
				    private);
			}
			if (*arc_flags & ARC_FLAG_PREFETCH)
				hdr->b_flags |= ARC_FLAG_PREFETCH;
			if (*arc_flags & ARC_FLAG_L2CACHE)
				hdr->b_flags |= ARC_FLAG_L2CACHE;
			if (*arc_flags & ARC_FLAG_L2COMPRESS)
				hdr->b_flags |= ARC_FLAG_L2COMPRESS;
			if (BP_GET_LEVEL(bp) > 0)
				hdr->b_flags |= ARC_FLAG_INDIRECT;
		} else {
			/*
			 * This block is in the ghost cache. If it was L2-only
			 * (and thus didn't have an L1 hdr), we realloc the
			 * header to add an L1 hdr.
			 */
			if (!HDR_HAS_L1HDR(hdr)) {
				hdr = arc_hdr_realloc(hdr, hdr_l2only_cache,
				    hdr_full_cache);
			}

			ASSERT(GHOST_STATE(hdr->b_l1hdr.b_state));
			ASSERT(!HDR_IO_IN_PROGRESS(hdr));
			ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
			ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);

			/*
			 * If there is a callback, we pass a reference to it.
			 */
			if (done != NULL)
				add_reference(hdr, hash_lock, private);
			if (*arc_flags & ARC_FLAG_PREFETCH)
				hdr->b_flags |= ARC_FLAG_PREFETCH;
			if (*arc_flags & ARC_FLAG_L2CACHE)
				hdr->b_flags |= ARC_FLAG_L2CACHE;
			if (*arc_flags & ARC_FLAG_L2COMPRESS)
				hdr->b_flags |= ARC_FLAG_L2COMPRESS;
			buf = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
			buf->b_hdr = hdr;
			buf->b_data = NULL;
			buf->b_efunc = NULL;
			buf->b_private = NULL;
			buf->b_next = NULL;
			hdr->b_l1hdr.b_buf = buf;
			ASSERT0(hdr->b_l1hdr.b_datacnt);
			hdr->b_l1hdr.b_datacnt = 1;
			arc_get_data_buf(buf);
			arc_access(hdr, hash_lock);
		}

		if (*arc_flags & ARC_FLAG_PREDICTIVE_PREFETCH)
			hdr->b_flags |= ARC_FLAG_PREDICTIVE_PREFETCH;
		ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state));

		acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP);
		acb->acb_done = done;
		acb->acb_private = private;

		ASSERT(hdr->b_l1hdr.b_acb == NULL);
		hdr->b_l1hdr.b_acb = acb;
		hdr->b_flags |= ARC_FLAG_IO_IN_PROGRESS;

		if (HDR_HAS_L2HDR(hdr) &&
		    (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) {
			devw = hdr->b_l2hdr.b_dev->l2ad_writing;
			addr = hdr->b_l2hdr.b_daddr;
			b_compress = hdr->b_l2hdr.b_compress;
			b_asize = hdr->b_l2hdr.b_asize;
			/*
			 * Lock out device removal.
			 */
			if (vdev_is_dead(vd) ||
			    !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER))
				vd = NULL;
		}

		if (hash_lock != NULL)
			mutex_exit(hash_lock);

		/*
		 * At this point, we have a level 1 cache miss.  Try again in
		 * L2ARC if possible.
		 */
		ASSERT3U(hdr->b_size, ==, size);
		DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, blkptr_t *, bp,
		    uint64_t, size, zbookmark_phys_t *, zb);
		ARCSTAT_BUMP(arcstat_misses);
		ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr),
		    demand, prefetch, !HDR_ISTYPE_METADATA(hdr),
		    data, metadata, misses);

		if (priority == ZIO_PRIORITY_ASYNC_READ)
			hdr->b_flags |= ARC_FLAG_PRIO_ASYNC_READ;
		else
			hdr->b_flags &= ~ARC_FLAG_PRIO_ASYNC_READ;

		if (vd != NULL && l2arc_ndev != 0 && !(l2arc_norw && devw)) {
			/*
			 * Read from the L2ARC if the following are true:
			 * 1. The L2ARC vdev was previously cached.
			 * 2. This buffer still has L2ARC metadata.
			 * 3. This buffer isn't currently writing to the L2ARC.
			 * 4. The L2ARC entry wasn't evicted, which may
			 *    also have invalidated the vdev.
			 * 5. This isn't prefetch and l2arc_noprefetch is set.
			 */
			if (HDR_HAS_L2HDR(hdr) &&
			    !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) &&
			    !(l2arc_noprefetch && HDR_PREFETCH(hdr))) {
				l2arc_read_callback_t *cb;

				DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr);
				ARCSTAT_BUMP(arcstat_l2_hits);
				atomic_inc_32(&hdr->b_l2hdr.b_hits);

				cb = kmem_zalloc(sizeof (l2arc_read_callback_t),
				    KM_SLEEP);
				cb->l2rcb_buf = buf;
				cb->l2rcb_spa = spa;
				cb->l2rcb_bp = *bp;
				cb->l2rcb_zb = *zb;
				cb->l2rcb_flags = zio_flags;
				cb->l2rcb_compress = b_compress;

				ASSERT(addr >= VDEV_LABEL_START_SIZE &&
				    addr + size < vd->vdev_psize -
				    VDEV_LABEL_END_SIZE);

				/*
				 * l2arc read.  The SCL_L2ARC lock will be
				 * released by l2arc_read_done().
				 * Issue a null zio if the underlying buffer
				 * was squashed to zero size by compression.
				 */
				if (b_compress == ZIO_COMPRESS_EMPTY) {
					rzio = zio_null(pio, spa, vd,
					    l2arc_read_done, cb,
					    zio_flags | ZIO_FLAG_DONT_CACHE |
					    ZIO_FLAG_CANFAIL |
					    ZIO_FLAG_DONT_PROPAGATE |
					    ZIO_FLAG_DONT_RETRY);
				} else {
					rzio = zio_read_phys(pio, vd, addr,
					    b_asize, buf->b_data,
					    ZIO_CHECKSUM_OFF,
					    l2arc_read_done, cb, priority,
					    zio_flags | ZIO_FLAG_DONT_CACHE |
					    ZIO_FLAG_CANFAIL |
					    ZIO_FLAG_DONT_PROPAGATE |
					    ZIO_FLAG_DONT_RETRY, B_FALSE);
				}
				DTRACE_PROBE2(l2arc__read, vdev_t *, vd,
				    zio_t *, rzio);
				ARCSTAT_INCR(arcstat_l2_read_bytes, b_asize);

				if (*arc_flags & ARC_FLAG_NOWAIT) {
					zio_nowait(rzio);
					goto out;
				}

				ASSERT(*arc_flags & ARC_FLAG_WAIT);
				if (zio_wait(rzio) == 0)
					goto out;

				/* l2arc read error; goto zio_read() */
			} else {
				DTRACE_PROBE1(l2arc__miss,
				    arc_buf_hdr_t *, hdr);
				ARCSTAT_BUMP(arcstat_l2_misses);
				if (HDR_L2_WRITING(hdr))
					ARCSTAT_BUMP(arcstat_l2_rw_clash);
				spa_config_exit(spa, SCL_L2ARC, vd);
			}
		} else {
			if (vd != NULL)
				spa_config_exit(spa, SCL_L2ARC, vd);
			if (l2arc_ndev != 0) {
				DTRACE_PROBE1(l2arc__miss,
				    arc_buf_hdr_t *, hdr);
				ARCSTAT_BUMP(arcstat_l2_misses);
			}
		}

		rzio = zio_read(pio, spa, bp, buf->b_data, size,
		    arc_read_done, buf, priority, zio_flags, zb);

		if (*arc_flags & ARC_FLAG_WAIT) {
			rc = zio_wait(rzio);
			goto out;
		}

		ASSERT(*arc_flags & ARC_FLAG_NOWAIT);
		zio_nowait(rzio);
	}

out:
	spa_read_history_add(spa, zb, *arc_flags);
	return (rc);
}
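/*
 * Example caller (illustrative sketch, not a copy of any dmu/dbuf call
 * site; objset/object/level/blkid are stand-in values): a synchronous
 * cached read using the generic arc_getbuf_func callback defined earlier
 * in this file.
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *	zbookmark_phys_t zb;
 *
 *	SET_BOOKMARK(&zb, objset, object, level, blkid);
 *	err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, &zb);
 *	if (err == 0 && abuf != NULL) {
 *		// ... use abuf->b_data ...
 *		(void) arc_buf_remove_ref(abuf, &abuf);
 *	}
 */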
arc_prune_t *
arc_add_prune_callback(arc_prune_func_t *func, void *private)
{
	arc_prune_t *p;

	p = kmem_alloc(sizeof (*p), KM_SLEEP);
	p->p_pfunc = func;
	p->p_private = private;
	list_link_init(&p->p_node);
	refcount_create(&p->p_refcnt);

	mutex_enter(&arc_prune_mtx);
	refcount_add(&p->p_refcnt, &arc_prune_list);
	list_insert_head(&arc_prune_list, p);
	mutex_exit(&arc_prune_mtx);

	return (p);
}

void
arc_remove_prune_callback(arc_prune_t *p)
{
	boolean_t wait = B_FALSE;
	mutex_enter(&arc_prune_mtx);
	list_remove(&arc_prune_list, p);
	if (refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
		wait = B_TRUE;
	mutex_exit(&arc_prune_mtx);

	/* wait for arc_prune_task to finish */
	if (wait)
		taskq_wait_outstanding(arc_prune_taskq, 0);
	ASSERT0(refcount_count(&p->p_refcnt));
	refcount_destroy(&p->p_refcnt);
	kmem_free(p, sizeof (*p));
}
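/*
 * Typical usage (illustrative; zpl_prune_sb and the z_arc_prune field are
 * stand-ins for a real consumer): register a callback at mount time and
 * unregister it at unmount.
 *
 *	zsb->z_arc_prune = arc_add_prune_callback(zpl_prune_sb, sb);
 *	...
 *	arc_remove_prune_callback(zsb->z_arc_prune);
 *
 * The callback runs from the arc_prune_taskq when the ARC needs the
 * filesystem layer to release metadata buffers.
 */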
void
arc_set_callback(arc_buf_t *buf, arc_evict_func_t *func, void *private)
{
	ASSERT(buf->b_hdr != NULL);
	ASSERT(buf->b_hdr->b_l1hdr.b_state != arc_anon);
	ASSERT(!refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt) ||
	    func == NULL);
	ASSERT(buf->b_efunc == NULL);
	ASSERT(!HDR_BUF_AVAILABLE(buf->b_hdr));

	buf->b_efunc = func;
	buf->b_private = private;
}
/*
 * Notify the arc that a block was freed, and thus will never be used again.
 */
void
arc_freed(spa_t *spa, const blkptr_t *bp)
{
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	uint64_t guid = spa_load_guid(spa);

	ASSERT(!BP_IS_EMBEDDED(bp));

	hdr = buf_hash_find(guid, bp, &hash_lock);
	if (hdr == NULL)
		return;
	if (HDR_BUF_AVAILABLE(hdr)) {
		arc_buf_t *buf = hdr->b_l1hdr.b_buf;
		add_reference(hdr, hash_lock, FTAG);
		hdr->b_flags &= ~ARC_FLAG_BUF_AVAILABLE;
		mutex_exit(hash_lock);

		arc_release(buf, FTAG);
		(void) arc_buf_remove_ref(buf, FTAG);
	} else {
		mutex_exit(hash_lock);
	}
}
/*
 * Clear the user eviction callback set by arc_set_callback(), first calling
 * it if it exists.  Because the presence of a callback keeps an arc_buf cached
 * until its eviction, clearing the callback may result in the arc_buf being
 * destroyed.  However, it will not result in the *last* arc_buf being
 * destroyed, hence the data will remain cached in the ARC.  We make a copy
 * of the arc buffer here so that we can process the callback without holding
 * any locks.
 *
 * It's possible that the callback is already in the process of being cleared
 * by another thread.  In this case we can not clear the callback.
 *
 * Returns B_TRUE if the callback was successfully called and cleared.
 */
boolean_t
arc_clear_callback(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	arc_evict_func_t *efunc = buf->b_efunc;
	void *private = buf->b_private;

	mutex_enter(&buf->b_evict_lock);
	hdr = buf->b_hdr;
	if (hdr == NULL) {
		/*
		 * We are in arc_do_user_evicts().
		 */
		ASSERT(buf->b_data == NULL);
		mutex_exit(&buf->b_evict_lock);
		return (B_FALSE);
	} else if (buf->b_data == NULL) {
		/*
		 * We are on the eviction list; process this buffer now
		 * but let arc_do_user_evicts() do the reaping.
		 */
		buf->b_efunc = NULL;
		mutex_exit(&buf->b_evict_lock);
		VERIFY0(efunc(private));
		return (B_TRUE);
	}
	hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);
	hdr = buf->b_hdr;
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));

	ASSERT3U(refcount_count(&hdr->b_l1hdr.b_refcnt), <,
	    hdr->b_l1hdr.b_datacnt);
	ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
	    hdr->b_l1hdr.b_state == arc_mfu);

	buf->b_efunc = NULL;
	buf->b_private = NULL;

	if (hdr->b_l1hdr.b_datacnt > 1) {
		mutex_exit(&buf->b_evict_lock);
		arc_buf_destroy(buf, TRUE);
	} else {
		ASSERT(buf == hdr->b_l1hdr.b_buf);
		hdr->b_flags |= ARC_FLAG_BUF_AVAILABLE;
		mutex_exit(&buf->b_evict_lock);
	}

	mutex_exit(hash_lock);
	VERIFY0(efunc(private));
	return (B_TRUE);
}
/*
 * Release this buffer from the cache, making it an anonymous buffer.  This
 * must be done after a read and prior to modifying the buffer contents.
 * If the buffer has more than one reference, we must make
 * a new hdr for the buffer.
 */
void
arc_release(arc_buf_t *buf, void *tag)
{
	kmutex_t *hash_lock;
	arc_state_t *state;
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * It would be nice to assert that if its DMU metadata (level >
	 * 0 || it's the dnode file), then it must be syncing context.
	 * But we don't know that information at this level.
	 */

	mutex_enter(&buf->b_evict_lock);

	ASSERT(HDR_HAS_L1HDR(hdr));

	/*
	 * We don't grab the hash lock prior to this check, because if
	 * the buffer's header is in the arc_anon state, it won't be
	 * linked into the hash table.
	 */
	if (hdr->b_l1hdr.b_state == arc_anon) {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		ASSERT(!HDR_IN_HASH_TABLE(hdr));
		ASSERT(!HDR_HAS_L2HDR(hdr));
		ASSERT(BUF_EMPTY(hdr));

		ASSERT3U(hdr->b_l1hdr.b_datacnt, ==, 1);
		ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
		ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node));

		ASSERT3P(buf->b_efunc, ==, NULL);
		ASSERT3P(buf->b_private, ==, NULL);

		hdr->b_l1hdr.b_arc_access = 0;
		arc_buf_thaw(buf);

		return;
	}

	hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);

	/*
	 * This assignment is only valid as long as the hash_lock is
	 * held, we must be careful not to reference state or the
	 * b_state field after dropping the lock.
	 */
	state = hdr->b_l1hdr.b_state;
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	ASSERT3P(state, !=, arc_anon);

	/* this buffer is not on any list */
	ASSERT(refcount_count(&hdr->b_l1hdr.b_refcnt) > 0);

	if (HDR_HAS_L2HDR(hdr)) {
		mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);

		/*
		 * We have to recheck this conditional again now that
		 * we're holding the l2ad_mtx to prevent a race with
		 * another thread which might be concurrently calling
		 * l2arc_evict(). In that case, l2arc_evict() might have
		 * destroyed the header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx.
		 */
		if (HDR_HAS_L2HDR(hdr))
			arc_hdr_l2hdr_destroy(hdr);

		mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx);
	}

	/*
	 * Do we have more than one buf?
	 */
	if (hdr->b_l1hdr.b_datacnt > 1) {
		arc_buf_hdr_t *nhdr;
		arc_buf_t **bufp;
		uint64_t blksz = hdr->b_size;
		uint64_t spa = hdr->b_spa;
		arc_buf_contents_t type = arc_buf_type(hdr);
		uint32_t flags = hdr->b_flags;

		ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL);
		/*
		 * Pull the data off of this hdr and attach it to
		 * a new anonymous hdr.
		 */
		(void) remove_reference(hdr, hash_lock, tag);
		bufp = &hdr->b_l1hdr.b_buf;
		while (*bufp != buf)
			bufp = &(*bufp)->b_next;
		*bufp = buf->b_next;
		buf->b_next = NULL;

		ASSERT3P(state, !=, arc_l2c_only);

		(void) refcount_remove_many(
		    &state->arcs_size, hdr->b_size, buf);

		if (refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
			uint64_t *size;

			ASSERT3P(state, !=, arc_l2c_only);
			size = &state->arcs_lsize[type];
			ASSERT3U(*size, >=, hdr->b_size);
			atomic_add_64(size, -hdr->b_size);
		}

		/*
		 * We're releasing a duplicate user data buffer, update
		 * our statistics accordingly.
		 */
		if (HDR_ISTYPE_DATA(hdr)) {
			ARCSTAT_BUMPDOWN(arcstat_duplicate_buffers);
			ARCSTAT_INCR(arcstat_duplicate_buffers_size,
			    -hdr->b_size);
		}
		hdr->b_l1hdr.b_datacnt -= 1;
		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		mutex_exit(hash_lock);

		nhdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
		nhdr->b_size = blksz;
		nhdr->b_spa = spa;

		nhdr->b_l1hdr.b_mru_hits = 0;
		nhdr->b_l1hdr.b_mru_ghost_hits = 0;
		nhdr->b_l1hdr.b_mfu_hits = 0;
		nhdr->b_l1hdr.b_mfu_ghost_hits = 0;
		nhdr->b_l1hdr.b_l2_hits = 0;
		nhdr->b_flags = flags & ARC_FLAG_L2_WRITING;
		nhdr->b_flags |= arc_bufc_to_flags(type);
		nhdr->b_flags |= ARC_FLAG_HAS_L1HDR;

		nhdr->b_l1hdr.b_buf = buf;
		nhdr->b_l1hdr.b_datacnt = 1;
		nhdr->b_l1hdr.b_state = arc_anon;
		nhdr->b_l1hdr.b_arc_access = 0;
		nhdr->b_l1hdr.b_tmp_cdata = NULL;
		nhdr->b_freeze_cksum = NULL;

		(void) refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
		buf->b_hdr = nhdr;
		mutex_exit(&buf->b_evict_lock);
		(void) refcount_add_many(&arc_anon->arcs_size, blksz, buf);
	} else {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
		/* protected by hash lock, or hdr is on arc_anon */
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		hdr->b_l1hdr.b_mru_hits = 0;
		hdr->b_l1hdr.b_mru_ghost_hits = 0;
		hdr->b_l1hdr.b_mfu_hits = 0;
		hdr->b_l1hdr.b_mfu_ghost_hits = 0;
		hdr->b_l1hdr.b_l2_hits = 0;
		arc_change_state(arc_anon, hdr, hash_lock);
		hdr->b_l1hdr.b_arc_access = 0;
		mutex_exit(hash_lock);

		buf_discard_identity(hdr);
		arc_buf_thaw(buf);
	}
	buf->b_efunc = NULL;
	buf->b_private = NULL;
}
int
arc_released(arc_buf_t *buf)
{
	int released;

	mutex_enter(&buf->b_evict_lock);
	released = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_state == arc_anon);
	mutex_exit(&buf->b_evict_lock);
	return (released);
}

int
arc_referenced(arc_buf_t *buf)
{
	int referenced;

	mutex_enter(&buf->b_evict_lock);
	referenced = (refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
	mutex_exit(&buf->b_evict_lock);
	return (referenced);
}
static void
arc_write_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
	ASSERT(hdr->b_l1hdr.b_datacnt > 0);
	callback->awcb_ready(zio, buf, callback->awcb_private);

	/*
	 * If the IO is already in progress, then this is a re-write
	 * attempt, so we need to thaw and re-compute the cksum.
	 * It is the responsibility of the callback to handle the
	 * accounting for any re-write attempt.
	 */
	if (HDR_IO_IN_PROGRESS(hdr)) {
		mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
		if (hdr->b_freeze_cksum != NULL) {
			kmem_free(hdr->b_freeze_cksum, sizeof (zio_cksum_t));
			hdr->b_freeze_cksum = NULL;
		}
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
	}
	arc_cksum_compute(buf, B_FALSE);
	hdr->b_flags |= ARC_FLAG_IO_IN_PROGRESS;
}

static void
arc_write_children_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;

	callback->awcb_children_ready(zio, buf, callback->awcb_private);
}
/*
 * The SPA calls this callback for each physical write that happens on behalf
 * of a logical write.  See the comment in dbuf_write_physdone() for details.
 */
static void
arc_write_physdone(zio_t *zio)
{
	arc_write_callback_t *cb = zio->io_private;
	if (cb->awcb_physdone != NULL)
		cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private);
}
static void
arc_write_done(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(hdr->b_l1hdr.b_acb == NULL);

	if (zio->io_error == 0) {
		if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
			buf_discard_identity(hdr);
		} else {
			hdr->b_dva = *BP_IDENTITY(zio->io_bp);
			hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp);
		}
	} else {
		ASSERT(BUF_EMPTY(hdr));
	}

	/*
	 * If the block to be written was all-zero or compressed enough to be
	 * embedded in the BP, no write was performed so there will be no
	 * dva/birth/checksum.  The buffer must therefore remain anonymous
	 * (and uncached).
	 */
	if (!BUF_EMPTY(hdr)) {
		arc_buf_hdr_t *exists;
		kmutex_t *hash_lock;

		ASSERT(zio->io_error == 0);

		arc_cksum_verify(buf);

		exists = buf_hash_insert(hdr, &hash_lock);
		if (exists != NULL) {
			/*
			 * This can only happen if we overwrite for
			 * sync-to-convergence, because we remove
			 * buffers from the hash table when we arc_free().
			 */
			if (zio->io_flags & ZIO_FLAG_IO_REWRITE) {
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad overwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
				ASSERT(refcount_is_zero(
				    &exists->b_l1hdr.b_refcnt));
				arc_change_state(arc_anon, exists, hash_lock);
				mutex_exit(hash_lock);
				arc_hdr_destroy(exists);
				exists = buf_hash_insert(hdr, &hash_lock);
				ASSERT3P(exists, ==, NULL);
			} else if (zio->io_flags & ZIO_FLAG_NOPWRITE) {
				/* nopwrite */
				ASSERT(zio->io_prop.zp_nopwrite);
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad nopwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
			} else {
				/* Dedup */
				ASSERT(hdr->b_l1hdr.b_datacnt == 1);
				ASSERT(hdr->b_l1hdr.b_state == arc_anon);
				ASSERT(BP_GET_DEDUP(zio->io_bp));
				ASSERT(BP_GET_LEVEL(zio->io_bp) == 0);
			}
		}
		hdr->b_flags &= ~ARC_FLAG_IO_IN_PROGRESS;
		/* if it's not anon, we are doing a scrub */
		if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon)
			arc_access(hdr, hash_lock);
		mutex_exit(hash_lock);
	} else {
		hdr->b_flags &= ~ARC_FLAG_IO_IN_PROGRESS;
	}

	ASSERT(!refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
	callback->awcb_done(zio, buf, callback->awcb_private);

	kmem_free(callback, sizeof (arc_write_callback_t));
}
zio_t *
arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
    blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc, boolean_t l2arc_compress,
    const zio_prop_t *zp, arc_done_func_t *ready,
    arc_done_func_t *children_ready, arc_done_func_t *physdone,
    arc_done_func_t *done, void *private, zio_priority_t priority,
    int zio_flags, const zbookmark_phys_t *zb)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	arc_write_callback_t *callback;
	zio_t *zio;

	ASSERT(ready != NULL);
	ASSERT(done != NULL);
	ASSERT(!HDR_IO_ERROR(hdr));
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));
	ASSERT(hdr->b_l1hdr.b_acb == NULL);
	ASSERT(hdr->b_l1hdr.b_datacnt > 0);
	if (l2arc)
		hdr->b_flags |= ARC_FLAG_L2CACHE;
	if (l2arc_compress)
		hdr->b_flags |= ARC_FLAG_L2COMPRESS;
	callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
	callback->awcb_ready = ready;
	callback->awcb_children_ready = children_ready;
	callback->awcb_physdone = physdone;
	callback->awcb_done = done;
	callback->awcb_private = private;
	callback->awcb_buf = buf;

	zio = zio_write(pio, spa, txg, bp, buf->b_data, hdr->b_size, zp,
	    arc_write_ready,
	    (children_ready != NULL) ? arc_write_children_ready : NULL,
	    arc_write_physdone, arc_write_done, callback,
	    priority, zio_flags, zb);

	return (zio);
}
static int
arc_memory_throttle(uint64_t reserve, uint64_t txg)
{
#ifdef _KERNEL
	uint64_t available_memory = ptob(freemem);
	static uint64_t page_load = 0;
	static uint64_t last_txg = 0;
	pgcnt_t minfree = btop(arc_sys_free / 4);

	if (freemem > physmem * arc_lotsfree_percent / 100)
		return (0);

	if (txg > last_txg) {
		last_txg = txg;
		page_load = 0;
	}

	/*
	 * If we are in pageout, we know that memory is already tight,
	 * the arc is already going to be evicting, so we just want to
	 * continue to let page writes occur as quickly as possible.
	 */
	if (current_is_kswapd()) {
		if (page_load > MAX(ptob(minfree), available_memory) / 4) {
			DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
			return (SET_ERROR(ERESTART));
		}
		/* Note: reserve is inflated, so we deflate */
		page_load += reserve / 8;
		return (0);
	} else if (page_load > 0 && arc_reclaim_needed()) {
		/* memory is low, delay before restarting */
		ARCSTAT_INCR(arcstat_memory_throttle_count, 1);
		DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
		return (SET_ERROR(EAGAIN));
	}
	page_load = 0;
#endif
	return (0);
}
void
arc_tempreserve_clear(uint64_t reserve)
{
	atomic_add_64(&arc_tempreserve, -reserve);
	ASSERT((int64_t)arc_tempreserve >= 0);
}

int
arc_tempreserve_space(uint64_t reserve, uint64_t txg)
{
	int error;
	uint64_t anon_size;

	if (!arc_no_grow &&
	    reserve > arc_c/4 &&
	    reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
		arc_c = MIN(arc_c_max, reserve * 4);

	/*
	 * Throttle when the calculated memory footprint for the TXG
	 * exceeds the target ARC size.
	 */
	if (reserve > arc_c) {
		DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
		return (SET_ERROR(ERESTART));
	}

	/*
	 * Don't count loaned bufs as in flight dirty data to prevent long
	 * network delays from blocking transactions that are ready to be
	 * assigned to a txg.
	 */
	anon_size = MAX((int64_t)(refcount_count(&arc_anon->arcs_size) -
	    arc_loaned_bytes), 0);

	/*
	 * Writes will, almost always, require additional memory allocations
	 * in order to compress/encrypt/etc the data.  We therefore need to
	 * make sure that there is sufficient available memory for this.
	 */
	error = arc_memory_throttle(reserve, txg);
	if (error != 0)
		return (error);

	/*
	 * Throttle writes when the amount of dirty data in the cache
	 * gets too large.  We try to keep the cache less than half full
	 * of dirty blocks so that our sync times don't grow too large.
	 * Note: if two requests come in concurrently, we might let them
	 * both succeed, when one of them should fail.  Not a huge deal.
	 */
	if (reserve + arc_tempreserve + anon_size > arc_c / 2 &&
	    anon_size > arc_c / 4) {
		dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
		    "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
		    arc_tempreserve >> 10,
		    arc_anon->arcs_lsize[ARC_BUFC_METADATA] >> 10,
		    arc_anon->arcs_lsize[ARC_BUFC_DATA] >> 10,
		    reserve >> 10, arc_c >> 10);
		DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
		return (SET_ERROR(ERESTART));
	}
	atomic_add_64(&arc_tempreserve, reserve);
	return (0);
}
static void
arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
    kstat_named_t *evict_data, kstat_named_t *evict_metadata)
{
	size->value.ui64 = refcount_count(&state->arcs_size);
	evict_data->value.ui64 = state->arcs_lsize[ARC_BUFC_DATA];
	evict_metadata->value.ui64 = state->arcs_lsize[ARC_BUFC_METADATA];
}

static int
arc_kstat_update(kstat_t *ksp, int rw)
{
	arc_stats_t *as = ksp->ks_data;

	if (rw == KSTAT_WRITE) {
		return (EACCES);
	} else {
		arc_kstat_update_state(arc_anon,
		    &as->arcstat_anon_size,
		    &as->arcstat_anon_evictable_data,
		    &as->arcstat_anon_evictable_metadata);
		arc_kstat_update_state(arc_mru,
		    &as->arcstat_mru_size,
		    &as->arcstat_mru_evictable_data,
		    &as->arcstat_mru_evictable_metadata);
		arc_kstat_update_state(arc_mru_ghost,
		    &as->arcstat_mru_ghost_size,
		    &as->arcstat_mru_ghost_evictable_data,
		    &as->arcstat_mru_ghost_evictable_metadata);
		arc_kstat_update_state(arc_mfu,
		    &as->arcstat_mfu_size,
		    &as->arcstat_mfu_evictable_data,
		    &as->arcstat_mfu_evictable_metadata);
		arc_kstat_update_state(arc_mfu_ghost,
		    &as->arcstat_mfu_ghost_size,
		    &as->arcstat_mfu_ghost_evictable_data,
		    &as->arcstat_mfu_ghost_evictable_metadata);
	}

	return (0);
}
/*
 * This function *must* return indices evenly distributed between all
 * sublists of the multilist. This is needed due to how the ARC eviction
 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
 * distributed between all sublists and uses this assumption when
 * deciding which sublist to evict from and how much to evict from it.
 */
unsigned int
arc_state_multilist_index_func(multilist_t *ml, void *obj)
{
	arc_buf_hdr_t *hdr = obj;

	/*
	 * We rely on b_dva to generate evenly distributed index
	 * numbers using buf_hash below. So, as an added precaution,
	 * let's make sure we never add empty buffers to the arc lists.
	 */
	ASSERT(!BUF_EMPTY(hdr));

	/*
	 * The assumption here, is the hash value for a given
	 * arc_buf_hdr_t will remain constant throughout its lifetime
	 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
	 * Thus, we don't need to store the header's sublist index
	 * on insertion, as this index can be recalculated on removal.
	 *
	 * Also, the low order bits of the hash value are thought to be
	 * distributed evenly. Otherwise, in the case that the multilist
	 * has a power of two number of sublists, each sublist's usage
	 * would not be evenly distributed.
	 */
	return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
	    multilist_get_num_sublists(ml));
}
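/*
 * Example (illustrative): with 8 sublists, a header whose
 * buf_hash(b_spa, &b_dva, b_birth) value is 0x2f maps to sublist
 * 0x2f % 8 = 7.  Because the identity fields are immutable for the life
 * of the header, the same header always maps to the same sublist.
 */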
/*
 * Called during module initialization and periodically thereafter to
 * apply reasonable changes to the exposed performance tunings.  Non-zero
 * zfs_* values which differ from the currently set values will be applied.
 */
static void
arc_tuning_update(void)
{
	/* Valid range: 64M - <all physical memory> */
	if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) &&
	    (zfs_arc_max > 64 << 20) && (zfs_arc_max < ptob(physmem)) &&
	    (zfs_arc_max > arc_c_min)) {
		arc_c_max = zfs_arc_max;
		arc_c = arc_c_max;
		arc_p = (arc_c >> 1);
		arc_meta_limit = MIN(arc_meta_limit, (3 * arc_c_max) / 4);
		arc_dnode_limit = arc_meta_limit / 10;
	}

	/* Valid range: 32M - <arc_c_max> */
	if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) &&
	    (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) &&
	    (zfs_arc_min <= arc_c_max)) {
		arc_c_min = zfs_arc_min;
		arc_c = MAX(arc_c, arc_c_min);
	}

	/* Valid range: 16M - <arc_c_max> */
	if ((zfs_arc_meta_min) && (zfs_arc_meta_min != arc_meta_min) &&
	    (zfs_arc_meta_min >= 1ULL << SPA_MAXBLOCKSHIFT) &&
	    (zfs_arc_meta_min <= arc_c_max)) {
		arc_meta_min = zfs_arc_meta_min;
		arc_meta_limit = MAX(arc_meta_limit, arc_meta_min);
		arc_dnode_limit = arc_meta_limit / 10;
	}

	/* Valid range: <arc_meta_min> - <arc_c_max> */
	if ((zfs_arc_meta_limit) && (zfs_arc_meta_limit != arc_meta_limit) &&
	    (zfs_arc_meta_limit >= zfs_arc_meta_min) &&
	    (zfs_arc_meta_limit <= arc_c_max))
		arc_meta_limit = zfs_arc_meta_limit;

	/* Valid range: <arc_meta_min> - <arc_c_max> */
	if ((zfs_arc_dnode_limit) && (zfs_arc_dnode_limit != arc_dnode_limit) &&
	    (zfs_arc_dnode_limit >= zfs_arc_meta_min) &&
	    (zfs_arc_dnode_limit <= arc_c_max))
		arc_dnode_limit = zfs_arc_dnode_limit;

	/* Valid range: 1 - N */
	if (zfs_arc_grow_retry)
		arc_grow_retry = zfs_arc_grow_retry;

	/* Valid range: 1 - N */
	if (zfs_arc_shrink_shift) {
		arc_shrink_shift = zfs_arc_shrink_shift;
		arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1);
	}

	/* Valid range: 1 - N */
	if (zfs_arc_p_min_shift)
		arc_p_min_shift = zfs_arc_p_min_shift;

	/* Valid range: 1 - N ticks */
	if (zfs_arc_min_prefetch_lifespan)
		arc_min_prefetch_lifespan = zfs_arc_min_prefetch_lifespan;

	/* Valid range: 0 - 100 */
	if ((zfs_arc_lotsfree_percent >= 0) &&
	    (zfs_arc_lotsfree_percent <= 100))
		arc_lotsfree_percent = zfs_arc_lotsfree_percent;

	/* Valid range: 0 - <all physical memory> */
	if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free))
		arc_sys_free = MIN(MAX(zfs_arc_sys_free, 0), ptob(physmem));
}
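/*
 * Because this is re-evaluated after module load, the tunables can be
 * adjusted at runtime through the module parameters; for example
 * (illustrative, values in bytes):
 *
 *	# echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max
 *	# echo 1073741824 > /sys/module/zfs/parameters/zfs_arc_min
 *
 * Note that out-of-range values are ignored by the checks above rather
 * than clamped.
 */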
void
arc_init(void)
{
	/*
	 * allmem is "all memory that we could possibly use".
	 */
#ifdef _KERNEL
	uint64_t allmem = ptob(physmem);
#else
	uint64_t allmem = (physmem * PAGESIZE) / 2;
#endif

	mutex_init(&arc_reclaim_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&arc_reclaim_thread_cv, NULL, CV_DEFAULT, NULL);
	cv_init(&arc_reclaim_waiters_cv, NULL, CV_DEFAULT, NULL);

	mutex_init(&arc_user_evicts_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&arc_user_evicts_cv, NULL, CV_DEFAULT, NULL);

	/* Convert seconds to clock ticks */
	arc_min_prefetch_lifespan = 1 * hz;

	/* Start out with 1/8 of all memory */
	arc_c = allmem / 8;

#ifdef _KERNEL
	/*
	 * On architectures where the physical memory can be larger
	 * than the addressable space (intel in 32-bit mode), we may
	 * need to limit the cache to 1/8 of VM size.
	 */
	arc_c = MIN(arc_c, vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 8);

	/*
	 * Register a shrinker to support synchronous (direct) memory
	 * reclaim from the arc.  This is done to prevent kswapd from
	 * swapping out pages when it is preferable to shrink the arc.
	 */
	spl_register_shrinker(&arc_shrinker);

	/* Set to 1/64 of all memory or a minimum of 512K */
	arc_sys_free = MAX(ptob(physmem / 64), (512 * 1024));
#endif

	/* Set max to 1/2 of all memory */
	arc_c_max = allmem / 2;

	/*
	 * In userland, there's only the memory pressure that we artificially
	 * create (see arc_available_memory()).  Don't let arc_c get too
	 * small, because it can cause transactions to be larger than
	 * arc_c, causing arc_tempreserve_space() to fail.
	 */
#ifndef _KERNEL
	arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT);
#else
	arc_c_min = 2ULL << SPA_MAXBLOCKSHIFT;
#endif

	arc_c = arc_c_max;
	arc_p = (arc_c >> 1);

	/* Set min to 1/2 of arc_c_min */
	arc_meta_min = 1ULL << SPA_MAXBLOCKSHIFT;
	/* Initialize maximum observed usage to zero */
	arc_meta_max = 0;
	/* Set limit to 3/4 of arc_c_max with a floor of arc_meta_min */
	arc_meta_limit = MAX((3 * arc_c_max) / 4, arc_meta_min);
	/* Default dnode limit is 10% of overall meta limit */
	arc_dnode_limit = arc_meta_limit / 10;

	/* Apply user specified tunings */
	arc_tuning_update();

	if (zfs_arc_num_sublists_per_state < 1)
		zfs_arc_num_sublists_per_state = MAX(boot_ncpus, 1);

	/* if kmem_flags are set, let's try to use less memory */
	if (kmem_debugging())
		arc_c = arc_c / 2;
	if (arc_c < arc_c_min)
		arc_c = arc_c_min;

	arc_anon = &ARC_anon;
	arc_mru = &ARC_mru;
	arc_mru_ghost = &ARC_mru_ghost;
	arc_mfu = &ARC_mfu;
	arc_mfu_ghost = &ARC_mfu_ghost;
	arc_l2c_only = &ARC_l2c_only;

	multilist_create(&arc_mru->arcs_list[ARC_BUFC_METADATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mru->arcs_list[ARC_BUFC_DATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mfu->arcs_list[ARC_BUFC_METADATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mfu->arcs_list[ARC_BUFC_DATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);
	multilist_create(&arc_l2c_only->arcs_list[ARC_BUFC_DATA],
	    sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node),
	    zfs_arc_num_sublists_per_state, arc_state_multilist_index_func);

	arc_anon->arcs_state = ARC_STATE_ANON;
	arc_mru->arcs_state = ARC_STATE_MRU;
	arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST;
	arc_mfu->arcs_state = ARC_STATE_MFU;
	arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST;
	arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY;

	refcount_create(&arc_anon->arcs_size);
	refcount_create(&arc_mru->arcs_size);
	refcount_create(&arc_mru_ghost->arcs_size);
	refcount_create(&arc_mfu->arcs_size);
	refcount_create(&arc_mfu_ghost->arcs_size);
	refcount_create(&arc_l2c_only->arcs_size);

	arc_reclaim_thread_exit = FALSE;
	arc_user_evicts_thread_exit = FALSE;
	list_create(&arc_prune_list, sizeof (arc_prune_t),
	    offsetof(arc_prune_t, p_node));
	arc_eviction_list = NULL;
	mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL);
	bzero(&arc_eviction_hdr, sizeof (arc_buf_hdr_t));

	arc_prune_taskq = taskq_create("arc_prune", max_ncpus, defclsyspri,
	    max_ncpus, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC);

	arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED,
	    sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);

	if (arc_ksp != NULL) {
		arc_ksp->ks_data = &arc_stats;
		arc_ksp->ks_update = arc_kstat_update;
		kstat_install(arc_ksp);
	}

	(void) thread_create(NULL, 0, arc_reclaim_thread, NULL, 0, &p0,
	    TS_RUN, defclsyspri);

	(void) thread_create(NULL, 0, arc_user_evicts_thread, NULL, 0, &p0,
	    TS_RUN, defclsyspri);

	/*
	 * Calculate maximum amount of dirty data per pool.
	 *
	 * If it has been set by a module parameter, take that.
	 * Otherwise, use a percentage of physical memory defined by
	 * zfs_dirty_data_max_percent (default 10%) with a cap at
	 * zfs_dirty_data_max_max (default 25% of physical memory).
	 */
	if (zfs_dirty_data_max_max == 0)
		zfs_dirty_data_max_max = (uint64_t)physmem * PAGESIZE *
		    zfs_dirty_data_max_max_percent / 100;

	if (zfs_dirty_data_max == 0) {
		zfs_dirty_data_max = (uint64_t)physmem * PAGESIZE *
		    zfs_dirty_data_max_percent / 100;
		zfs_dirty_data_max = MIN(zfs_dirty_data_max,
		    zfs_dirty_data_max_max);
	}
}
5633 spl_unregister_shrinker(&arc_shrinker
);
5634 #endif /* _KERNEL */
5636 mutex_enter(&arc_reclaim_lock
);
5637 arc_reclaim_thread_exit
= TRUE
;
5639 * The reclaim thread will set arc_reclaim_thread_exit back to
5640 * FALSE when it is finished exiting; we're waiting for that.
5642 while (arc_reclaim_thread_exit
) {
5643 cv_signal(&arc_reclaim_thread_cv
);
5644 cv_wait(&arc_reclaim_thread_cv
, &arc_reclaim_lock
);
5646 mutex_exit(&arc_reclaim_lock
);
5648 mutex_enter(&arc_user_evicts_lock
);
5649 arc_user_evicts_thread_exit
= TRUE
;
5651 * The user evicts thread will set arc_user_evicts_thread_exit
5652 * to FALSE when it is finished exiting; we're waiting for that.
5654 while (arc_user_evicts_thread_exit
) {
5655 cv_signal(&arc_user_evicts_cv
);
5656 cv_wait(&arc_user_evicts_cv
, &arc_user_evicts_lock
);
5658 mutex_exit(&arc_user_evicts_lock
);
5660 /* Use TRUE to ensure *all* buffers are evicted */
5661 arc_flush(NULL
, TRUE
);
5665 if (arc_ksp
!= NULL
) {
5666 kstat_delete(arc_ksp
);
5670 taskq_wait(arc_prune_taskq
);
5671 taskq_destroy(arc_prune_taskq
);
5673 mutex_enter(&arc_prune_mtx
);
5674 while ((p
= list_head(&arc_prune_list
)) != NULL
) {
5675 list_remove(&arc_prune_list
, p
);
5676 refcount_remove(&p
->p_refcnt
, &arc_prune_list
);
5677 refcount_destroy(&p
->p_refcnt
);
5678 kmem_free(p
, sizeof (*p
));
5680 mutex_exit(&arc_prune_mtx
);
5682 list_destroy(&arc_prune_list
);
5683 mutex_destroy(&arc_prune_mtx
);
5684 mutex_destroy(&arc_reclaim_lock
);
5685 cv_destroy(&arc_reclaim_thread_cv
);
5686 cv_destroy(&arc_reclaim_waiters_cv
);
5688 mutex_destroy(&arc_user_evicts_lock
);
5689 cv_destroy(&arc_user_evicts_cv
);
5691 refcount_destroy(&arc_anon
->arcs_size
);
5692 refcount_destroy(&arc_mru
->arcs_size
);
5693 refcount_destroy(&arc_mru_ghost
->arcs_size
);
5694 refcount_destroy(&arc_mfu
->arcs_size
);
5695 refcount_destroy(&arc_mfu_ghost
->arcs_size
);
5696 refcount_destroy(&arc_l2c_only
->arcs_size
);
5698 multilist_destroy(&arc_mru
->arcs_list
[ARC_BUFC_METADATA
]);
5699 multilist_destroy(&arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
5700 multilist_destroy(&arc_mfu
->arcs_list
[ARC_BUFC_METADATA
]);
5701 multilist_destroy(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
5702 multilist_destroy(&arc_mru
->arcs_list
[ARC_BUFC_DATA
]);
5703 multilist_destroy(&arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
]);
5704 multilist_destroy(&arc_mfu
->arcs_list
[ARC_BUFC_DATA
]);
5705 multilist_destroy(&arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
]);
5706 multilist_destroy(&arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
]);
5707 multilist_destroy(&arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
]);
5711 ASSERT0(arc_loaned_bytes
);
/*
 * Level 2 ARC
 *
 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
 * It uses dedicated storage devices to hold cached data, which are populated
 * using large infrequent writes.  The main role of this cache is to boost
 * the performance of random read workloads.  The intended L2ARC devices
 * include short-stroked disks, solid state disks, and other media with
 * substantially faster read latency than disk.
 *
 *                 +-----------------------+
 *                 |         ARC           |
 *                 +-----------------------+
 *                    |         ^     ^
 *                    |         |     |
 *      l2arc_feed_thread()    arc_read()
 *                    |         |     |
 *                    |  l2arc read   |
 *                    V         |     |
 *               +---------------+    |
 *               |     L2ARC     |    |
 *               +---------------+    |
 *                   |    ^           |
 *          l2arc_write() |           |
 *                   |    |           |
 *                   V    |           |
 *                 +-------+      +-------+
 *                 | vdev  |      | vdev  |
 *                 | cache |      | cache |
 *                 +-------+      +-------+
 *                 +=========+     .-----.
 *                 :  L2ARC  :    |-_____-|
 *                 : devices :    | Disks |
 *                 +=========+    `-_____-'
 *
 * Read requests are satisfied from the following sources, in order:
 *
 *	1) ARC
 *	2) vdev cache of L2ARC devices
 *	3) L2ARC devices
 *	4) vdev cache of disks
 *	5) disks
 *
 * Some L2ARC device types exhibit extremely slow write performance.
 * To accommodate this there are some significant differences between
 * the L2ARC and traditional cache design:
 *
 * 1. There is no eviction path from the ARC to the L2ARC.  Evictions from
 * the ARC behave as usual, freeing buffers and placing headers on ghost
 * lists.  The ARC does not send buffers to the L2ARC during eviction as
 * this would add inflated write latencies for all ARC memory pressure.
 *
 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
 * It does this by periodically scanning buffers from the eviction-end of
 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
 * not already there.  It scans until a headroom of buffers is satisfied,
 * which itself is a buffer for ARC eviction.  If a compressible buffer is
 * found during scanning and selected for writing to an L2ARC device, we
 * temporarily boost scanning headroom during the next scan cycle to make
 * sure we adapt to compression effects (which might significantly reduce
 * the data volume we write to L2ARC).  The thread that does this is
 * l2arc_feed_thread(), illustrated below; example sizes are included to
 * provide a better sense of ratio than this diagram:
 *
 *	       head -->                        tail
 *	        +---------------------+----------+
 *	ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
 *	        +---------------------+----------+   |   o L2ARC eligible
 *	ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   :   ARC buffer
 *	        +---------------------+----------+   |
 *	             15.9 Gbytes      ^ 32 Mbytes    |
 *	                           headroom          |
 *	                                      l2arc_feed_thread()
 *	                                             |
 *	                 l2arc write hand <--[oooo]--'
 *	                         |           8 Mbyte
 *	                         |          write max
 *	                         V
 *	          +==============================+
 *	L2ARC dev |####|#|###|###|    |####| ... |
 *	          +==============================+
 *	                 32 Gbytes
 *
 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
 * evicted, then the L2ARC has cached a buffer much sooner than it probably
 * needed to, potentially wasting L2ARC device bandwidth and storage.  It is
 * safe to say that this is an uncommon case, since buffers at the end of
 * the ARC lists have moved there due to inactivity.
 *
 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
 * then the L2ARC simply misses copying some buffers.  This serves as a
 * pressure valve to prevent heavy read workloads from both stalling the ARC
 * with waits and clogging the L2ARC with writes.  This also helps prevent
 * the potential for the L2ARC to churn if it attempts to cache content too
 * quickly, such as during backups of the entire pool.
 *
 * 5. After system boot and before the ARC has filled main memory, there are
 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
 * lists can remain mostly static.  Instead of searching from the tail of
 * these lists as pictured, the l2arc_feed_thread() will search from the list
 * heads for eligible buffers, greatly increasing its chance of finding them.
 *
 * The L2ARC device write speed is also boosted during this time so that
 * the L2ARC warms up faster.  Since there have been no ARC evictions yet,
 * there are no L2ARC reads, and no fear of degrading read performance
 * through increased writes.
 *
 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
 * the vdev queue can aggregate them into larger and fewer writes.  Each
 * device is written to in a rotor fashion, sweeping writes through
 * available space then repeating.
 *
 * 7. The L2ARC does not store dirty content.  It never needs to flush
 * write buffers back to disk based storage.
 *
 * 8. If an ARC buffer is written (and dirtied) which also exists in the
 * L2ARC, the now stale L2ARC buffer is immediately dropped.
 *
 * The performance of the L2ARC can be tweaked by a number of tunables, which
 * may be necessary for different workloads:
 *
 *	l2arc_write_max		max write bytes per interval
 *	l2arc_write_boost	extra write bytes during device warmup
 *	l2arc_noprefetch	skip caching prefetched buffers
 *	l2arc_nocompress	skip compressing buffers
 *	l2arc_headroom		number of max device writes to precache
 *	l2arc_headroom_boost	when we find compressed buffers during ARC
 *				scanning, we multiply headroom by this
 *				percentage factor for the next scan cycle,
 *				since more compressed buffers are likely to
 *				be present
 *	l2arc_feed_secs		seconds between L2ARC writing
 *
 * Tunables may be removed or added as future performance improvements are
 * integrated, and also may become zpool properties.
 *
 * There are three key functions that control how the L2ARC warms up:
 *
 *	l2arc_write_eligible()	check if a buffer is eligible to cache
 *	l2arc_write_size()	calculate how much to write
 *	l2arc_write_interval()	calculate sleep delay between writes
 *
 * These three functions determine what to write, how much, and how quickly
 * to send writes.
 */
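
/*
 * Illustrative sketch (not part of the original source): a simplified view
 * of how the three functions above combine in a single pass of the feed
 * thread.  Device selection, locking, and error handling are omitted here;
 * see l2arc_feed_thread() below for the real control flow.
 *
 *	size = l2arc_write_size();		 (how much)
 *	l2arc_evict(dev, size, B_FALSE);	 (clear space ahead of the hand)
 *	wrote = l2arc_write_buffers(spa, dev, size, &headroom_boost);
 *						 (what to write, via
 *						  l2arc_write_eligible())
 *	next = l2arc_write_interval(begin, size, wrote);   (how quickly)
 */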
static boolean_t
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
{
	/*
	 * A buffer is *not* eligible for the L2ARC if it:
	 * 1. belongs to a different spa.
	 * 2. is already cached on the L2ARC.
	 * 3. has an I/O in progress (it may be an incomplete read).
	 * 4. is flagged not eligible (zfs property).
	 */
	if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
	    HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
		return (B_FALSE);

	return (B_TRUE);
}
static uint64_t
l2arc_write_size(void)
{
	uint64_t size;

	/*
	 * Make sure our globals have meaningful values in case the user
	 * altered them.
	 */
	size = l2arc_write_max;
	if (size == 0) {
		cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
		    "be greater than zero, resetting it to the default (%d)",
		    L2ARC_WRITE_SIZE);
		size = l2arc_write_max = L2ARC_WRITE_SIZE;
	}

	if (arc_warm == B_FALSE)
		size += l2arc_write_boost;

	return (size);
}
static clock_t
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
{
	clock_t interval, next, now;

	/*
	 * If the ARC lists are busy, increase our write rate; if the
	 * lists are stale, idle back.  This is achieved by checking
	 * how much we previously wrote - if it was more than half of
	 * what we wanted, schedule the next write much sooner.
	 */
	if (l2arc_feed_again && wrote > (wanted / 2))
		interval = (hz * l2arc_feed_min_ms) / 1000;
	else
		interval = hz * l2arc_feed_secs;

	now = ddi_get_lbolt();
	next = MAX(now, MIN(now + interval, began + interval));

	return (next);
}
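
/*
 * Worked example (illustrative, assuming the default tunables
 * l2arc_feed_secs = 1 and l2arc_feed_min_ms = 200): a feed cycle that
 * wrote 6 MB of a wanted 8 MB satisfies wrote > wanted / 2, so the next
 * write is scheduled (hz * 200) / 1000 ticks (200 ms) out instead of the
 * idle interval of hz ticks (1 second).  The MAX/MIN clamp guarantees the
 * result is never in the past and never more than one interval after
 * 'began'.
 */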
/*
 * Cycle through L2ARC devices.  This is how L2ARC load balances.
 * If a device is returned, this also returns holding the spa config lock.
 */
static l2arc_dev_t *
l2arc_dev_get_next(void)
{
	l2arc_dev_t *first, *next = NULL;

	/*
	 * Lock out the removal of spas (spa_namespace_lock), then removal
	 * of cache devices (l2arc_dev_mtx).  Once a device has been selected,
	 * both locks will be dropped and a spa config lock held instead.
	 */
	mutex_enter(&spa_namespace_lock);
	mutex_enter(&l2arc_dev_mtx);

	/* if there are no vdevs, there is nothing to do */
	if (l2arc_ndev == 0)
		goto out;

	first = NULL;
	next = l2arc_dev_last;
	do {
		/* loop around the list looking for a non-faulted vdev */
		if (next == NULL) {
			next = list_head(l2arc_dev_list);
		} else {
			next = list_next(l2arc_dev_list, next);
			if (next == NULL)
				next = list_head(l2arc_dev_list);
		}

		/* if we have come back to the start, bail out */
		if (first == NULL)
			first = next;
		else if (next == first)
			goto out;

	} while (vdev_is_dead(next->l2ad_vdev));

	/* if we were unable to find any usable vdevs, return NULL */
	if (vdev_is_dead(next->l2ad_vdev))
		next = NULL;

	l2arc_dev_last = next;

out:
	mutex_exit(&l2arc_dev_mtx);

	/*
	 * Grab the config lock to prevent the 'next' device from being
	 * removed while we are writing to it.
	 */
	if (next != NULL)
		spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER);
	mutex_exit(&spa_namespace_lock);

	return (next);
}
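
/*
 * Example (illustrative, not from the original source): with two healthy
 * cache devices A and B on l2arc_dev_list and l2arc_dev_last == A, the
 * next call returns B, the one after that A, and so on; any device whose
 * vdev_is_dead() is true is skipped, so feed cycles rotate evenly across
 * all usable cache devices.
 */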
/*
 * Free buffers that were tagged for destruction.
 */
static void
l2arc_do_free_on_write(void)
{
	list_t *buflist;
	l2arc_data_free_t *df, *df_prev;

	mutex_enter(&l2arc_free_on_write_mtx);
	buflist = l2arc_free_on_write;

	for (df = list_tail(buflist); df; df = df_prev) {
		df_prev = list_prev(buflist, df);
		ASSERT(df->l2df_data != NULL);
		ASSERT(df->l2df_func != NULL);
		df->l2df_func(df->l2df_data, df->l2df_size);
		list_remove(buflist, df);
		kmem_free(df, sizeof (l2arc_data_free_t));
	}

	mutex_exit(&l2arc_free_on_write_mtx);
}
/*
 * A write to a cache device has completed.  Update all headers to allow
 * reads from these buffers to begin.
 */
static void
l2arc_write_done(zio_t *zio)
{
	l2arc_write_callback_t *cb;
	l2arc_dev_t *dev;
	list_t *buflist;
	arc_buf_hdr_t *head, *hdr, *hdr_prev;
	kmutex_t *hash_lock;
	int64_t bytes_dropped = 0;

	cb = zio->io_private;
	ASSERT(cb != NULL);
	dev = cb->l2wcb_dev;
	ASSERT(dev != NULL);
	head = cb->l2wcb_head;
	ASSERT(head != NULL);
	buflist = &dev->l2ad_buflist;
	ASSERT(buflist != NULL);
	DTRACE_PROBE2(l2arc__iodone, zio_t *, zio,
	    l2arc_write_callback_t *, cb);

	if (zio->io_error != 0)
		ARCSTAT_BUMP(arcstat_l2_writes_error);

	/*
	 * All writes completed, or an error was hit.
	 */
top:
	mutex_enter(&dev->l2ad_mtx);
	for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) {
		hdr_prev = list_prev(buflist, hdr);

		hash_lock = HDR_LOCK(hdr);

		/*
		 * We cannot use mutex_enter or else we can deadlock
		 * with l2arc_write_buffers (due to swapping the order
		 * the hash lock and l2ad_mtx are taken).
		 */
		if (!mutex_tryenter(hash_lock)) {
			/*
			 * Missed the hash lock.  We must retry so we
			 * don't leave the ARC_FLAG_L2_WRITING bit set.
			 */
			ARCSTAT_BUMP(arcstat_l2_writes_lock_retry);

			/*
			 * We don't want to rescan the headers we've
			 * already marked as having been written out, so
			 * we reinsert the head node so we can pick up
			 * where we left off.
			 */
			list_remove(buflist, head);
			list_insert_after(buflist, hdr, head);

			mutex_exit(&dev->l2ad_mtx);

			/*
			 * We wait for the hash lock to become available
			 * to try and prevent busy waiting, and increase
			 * the chance we'll be able to acquire the lock
			 * the next time around.
			 */
			mutex_enter(hash_lock);
			mutex_exit(hash_lock);
			goto top;
		}

		/*
		 * We could not have been moved into the arc_l2c_only
		 * state while in-flight due to our ARC_FLAG_L2_WRITING
		 * bit being set.  Let's just ensure that's being enforced.
		 */
		ASSERT(HDR_HAS_L1HDR(hdr));

		/*
		 * We may have allocated a buffer for L2ARC compression,
		 * we must release it to avoid leaking this data.
		 */
		l2arc_release_cdata_buf(hdr);

		/*
		 * Skipped - drop L2ARC entry and mark the header as no
		 * longer L2 eligible.
		 */
		if (hdr->b_l2hdr.b_daddr == L2ARC_ADDR_UNSET) {
			list_remove(buflist, hdr);
			hdr->b_flags &= ~ARC_FLAG_HAS_L2HDR;
			hdr->b_flags &= ~ARC_FLAG_L2CACHE;

			ARCSTAT_BUMP(arcstat_l2_writes_skip_toobig);

			(void) refcount_remove_many(&dev->l2ad_alloc,
			    hdr->b_l2hdr.b_asize, hdr);
		} else if (zio->io_error != 0) {
			/*
			 * Error - drop L2ARC entry.
			 */
			list_remove(buflist, hdr);
			hdr->b_flags &= ~ARC_FLAG_HAS_L2HDR;

			ARCSTAT_INCR(arcstat_l2_asize, -hdr->b_l2hdr.b_asize);
			ARCSTAT_INCR(arcstat_l2_size, -hdr->b_size);

			bytes_dropped += hdr->b_l2hdr.b_asize;
			(void) refcount_remove_many(&dev->l2ad_alloc,
			    hdr->b_l2hdr.b_asize, hdr);
		}

		/*
		 * Allow ARC to begin reads and ghost list evictions to
		 * this L2ARC entry.
		 */
		hdr->b_flags &= ~ARC_FLAG_L2_WRITING;

		mutex_exit(hash_lock);
	}

	atomic_inc_64(&l2arc_writes_done);
	list_remove(buflist, head);
	ASSERT(!HDR_HAS_L1HDR(head));
	kmem_cache_free(hdr_l2only_cache, head);
	mutex_exit(&dev->l2ad_mtx);

	vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0);

	l2arc_do_free_on_write();

	kmem_free(cb, sizeof (l2arc_write_callback_t));
}
/*
 * A read to a cache device completed.  Validate buffer contents before
 * handing over to the regular ARC routines.
 */
static void
l2arc_read_done(zio_t *zio)
{
	l2arc_read_callback_t *cb;
	arc_buf_hdr_t *hdr;
	arc_buf_t *buf;
	kmutex_t *hash_lock;
	int equal;

	ASSERT(zio->io_vd != NULL);
	ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE);

	spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd);

	cb = zio->io_private;
	ASSERT(cb != NULL);
	buf = cb->l2rcb_buf;
	ASSERT(buf != NULL);

	hash_lock = HDR_LOCK(buf->b_hdr);
	mutex_enter(hash_lock);
	hdr = buf->b_hdr;
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));

	/*
	 * If the buffer was compressed, decompress it first.
	 */
	if (cb->l2rcb_compress != ZIO_COMPRESS_OFF)
		l2arc_decompress_zio(zio, hdr, cb->l2rcb_compress);
	ASSERT(zio->io_data != NULL);
	ASSERT3U(zio->io_size, ==, hdr->b_size);
	ASSERT3U(BP_GET_LSIZE(&cb->l2rcb_bp), ==, hdr->b_size);

	/*
	 * Check this survived the L2ARC journey.
	 */
	equal = arc_cksum_equal(buf);
	if (equal && zio->io_error == 0 && !HDR_L2_EVICTED(hdr)) {
		mutex_exit(hash_lock);
		zio->io_private = buf;
		zio->io_bp_copy = cb->l2rcb_bp;	/* XXX fix in L2ARC 2.0	*/
		zio->io_bp = &zio->io_bp_copy;	/* XXX fix in L2ARC 2.0	*/
		arc_read_done(zio);
	} else {
		mutex_exit(hash_lock);
		/*
		 * Buffer didn't survive caching.  Increment stats and
		 * reissue to the original storage device.
		 */
		if (zio->io_error != 0) {
			ARCSTAT_BUMP(arcstat_l2_io_error);
		} else {
			zio->io_error = SET_ERROR(EIO);
		}
		if (!equal)
			ARCSTAT_BUMP(arcstat_l2_cksum_bad);

		/*
		 * If there's no waiter, issue an async i/o to the primary
		 * storage now.  If there *is* a waiter, the caller must
		 * issue the i/o in a context where it's OK to block.
		 */
		if (zio->io_waiter == NULL) {
			zio_t *pio = zio_unique_parent(zio);

			ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL);

			zio_nowait(zio_read(pio, cb->l2rcb_spa, &cb->l2rcb_bp,
			    buf->b_data, hdr->b_size, arc_read_done, buf,
			    zio->io_priority, cb->l2rcb_flags, &cb->l2rcb_zb));
		}
	}

	kmem_free(cb, sizeof (l2arc_read_callback_t));
}
/*
 * This is the list priority from which the L2ARC will search for pages to
 * cache.  This is used within loops (0..3) to cycle through lists in the
 * desired order.  This order can have a significant effect on cache
 * performance.
 *
 * Currently the metadata lists are hit first, MFU then MRU, followed by
 * the data lists.  This function returns a locked list, and also returns
 * the lock pointer.
 */
static multilist_sublist_t *
l2arc_sublist_lock(int list_num)
{
	multilist_t *ml = NULL;
	unsigned int idx;

	ASSERT(list_num >= 0 && list_num <= 3);

	switch (list_num) {
	case 0:
		ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA];
		break;
	case 1:
		ml = &arc_mru->arcs_list[ARC_BUFC_METADATA];
		break;
	case 2:
		ml = &arc_mfu->arcs_list[ARC_BUFC_DATA];
		break;
	case 3:
		ml = &arc_mru->arcs_list[ARC_BUFC_DATA];
		break;
	}

	/*
	 * Return a randomly-selected sublist.  This is acceptable
	 * because the caller feeds only a little bit of data for each
	 * call (8MB).  Subsequent calls will result in different
	 * sublists being selected.
	 */
	idx = multilist_get_random_index(ml);
	return (multilist_sublist_lock(ml, idx));
}
/*
 * Evict buffers from the device write hand to the distance specified in
 * bytes.  This distance may span populated buffers, it may span nothing.
 * This is clearing a region on the L2ARC device ready for writing.
 * If the 'all' boolean is set, every buffer is evicted.
 */
static void
l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all)
{
	list_t *buflist;
	arc_buf_hdr_t *hdr, *hdr_prev;
	kmutex_t *hash_lock;
	uint64_t taddr;

	buflist = &dev->l2ad_buflist;

	if (!all && dev->l2ad_first) {
		/*
		 * This is the first sweep through the device.  There is
		 * nothing to evict.
		 */
		return;
	}

	if (dev->l2ad_hand >= (dev->l2ad_end - (2 * distance))) {
		/*
		 * When nearing the end of the device, evict to the end
		 * before the device write hand jumps to the start.
		 */
		taddr = dev->l2ad_end;
	} else {
		taddr = dev->l2ad_hand + distance;
	}

	DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist,
	    uint64_t, taddr, boolean_t, all);

top:
	mutex_enter(&dev->l2ad_mtx);
	for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) {
		hdr_prev = list_prev(buflist, hdr);

		hash_lock = HDR_LOCK(hdr);

		/*
		 * We cannot use mutex_enter or else we can deadlock
		 * with l2arc_write_buffers (due to swapping the order
		 * the hash lock and l2ad_mtx are taken).
		 */
		if (!mutex_tryenter(hash_lock)) {
			/*
			 * Missed the hash lock.  Retry.
			 */
			ARCSTAT_BUMP(arcstat_l2_evict_lock_retry);
			mutex_exit(&dev->l2ad_mtx);
			mutex_enter(hash_lock);
			mutex_exit(hash_lock);
			goto top;
		}

		if (HDR_L2_WRITE_HEAD(hdr)) {
			/*
			 * We hit a write head node.  Leave it for
			 * l2arc_write_done().
			 */
			list_remove(buflist, hdr);
			mutex_exit(hash_lock);
			continue;
		}

		if (!all && HDR_HAS_L2HDR(hdr) &&
		    (hdr->b_l2hdr.b_daddr > taddr ||
		    hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) {
			/*
			 * We've evicted to the target address,
			 * or the end of the device.
			 */
			mutex_exit(hash_lock);
			break;
		}

		ASSERT(HDR_HAS_L2HDR(hdr));
		if (!HDR_HAS_L1HDR(hdr)) {
			ASSERT(!HDR_L2_READING(hdr));
			/*
			 * This doesn't exist in the ARC.  Destroy.
			 * arc_hdr_destroy() will call list_remove()
			 * and decrement arcstat_l2_size.
			 */
			arc_change_state(arc_anon, hdr, hash_lock);
			arc_hdr_destroy(hdr);
		} else {
			ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only);
			ARCSTAT_BUMP(arcstat_l2_evict_l1cached);
			/*
			 * Invalidate issued or about to be issued
			 * reads, since we may be about to write
			 * over this location.
			 */
			if (HDR_L2_READING(hdr)) {
				ARCSTAT_BUMP(arcstat_l2_evict_reading);
				hdr->b_flags |= ARC_FLAG_L2_EVICTED;
			}

			/* Ensure this header has finished being written */
			ASSERT(!HDR_L2_WRITING(hdr));
			ASSERT3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);

			arc_hdr_l2hdr_destroy(hdr);
		}
		mutex_exit(hash_lock);
	}
	mutex_exit(&dev->l2ad_mtx);
}
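
/*
 * Worked example (illustrative, not from the original source): with
 * l2ad_hand = 900 MB, l2ad_end = 1024 MB and distance = 100 MB, the hand
 * is within 2 * distance of the end (900 >= 1024 - 200), so taddr is set
 * to l2ad_end and the region through the end of the device is cleared
 * before the hand wraps to l2ad_start.  Otherwise taddr would simply be
 * 900 MB + 100 MB = 1000 MB.
 */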
/*
 * Find and write ARC buffers to the L2ARC device.
 *
 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
 * for reading until they have completed writing.
 * The headroom_boost is an in-out parameter used to maintain headroom boost
 * state between calls to this function.
 *
 * Returns the number of bytes actually written (which may be smaller than
 * the delta by which the device hand has changed due to alignment).
 */
static uint64_t
l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz,
    boolean_t *headroom_boost)
{
	arc_buf_hdr_t *hdr, *hdr_prev, *head;
	uint64_t write_asize, write_sz, headroom, buf_compress_minsz,
	    stats_size;
	void *buf_data;
	boolean_t full;
	l2arc_write_callback_t *cb;
	zio_t *pio, *wzio;
	uint64_t guid = spa_load_guid(spa);
	int try;
	const boolean_t do_headroom_boost = *headroom_boost;

	ASSERT(dev->l2ad_vdev != NULL);

	/* Lower the flag now, we might want to raise it again later. */
	*headroom_boost = B_FALSE;

	pio = NULL;
	write_sz = write_asize = 0;
	full = B_FALSE;
	head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE);
	head->b_flags |= ARC_FLAG_L2_WRITE_HEAD;
	head->b_flags |= ARC_FLAG_HAS_L2HDR;

	/*
	 * We will want to try to compress buffers that are at least 2x the
	 * device sector size.
	 */
	buf_compress_minsz = 2 << dev->l2ad_vdev->vdev_ashift;

	/*
	 * Copy buffers for L2ARC writing.
	 */
	for (try = 0; try <= 3; try++) {
		multilist_sublist_t *mls = l2arc_sublist_lock(try);
		uint64_t passed_sz = 0;

		/*
		 * L2ARC fast warmup.
		 *
		 * Until the ARC is warm and starts to evict, read from the
		 * head of the ARC lists rather than the tail.
		 */
		if (arc_warm == B_FALSE)
			hdr = multilist_sublist_head(mls);
		else
			hdr = multilist_sublist_tail(mls);

		headroom = target_sz * l2arc_headroom;
		if (do_headroom_boost)
			headroom = (headroom * l2arc_headroom_boost) / 100;

		for (; hdr; hdr = hdr_prev) {
			kmutex_t *hash_lock;
			uint64_t buf_sz;
			uint64_t buf_a_sz;

			if (arc_warm == B_FALSE)
				hdr_prev = multilist_sublist_next(mls, hdr);
			else
				hdr_prev = multilist_sublist_prev(mls, hdr);

			hash_lock = HDR_LOCK(hdr);
			if (!mutex_tryenter(hash_lock)) {
				/*
				 * Skip this buffer rather than waiting.
				 */
				continue;
			}

			passed_sz += hdr->b_size;
			if (passed_sz > headroom) {
				/*
				 * Searched too far.
				 */
				mutex_exit(hash_lock);
				break;
			}

			if (!l2arc_write_eligible(guid, hdr)) {
				mutex_exit(hash_lock);
				continue;
			}

			/*
			 * Assume that the buffer is not going to be compressed
			 * and could take more space on disk because of a larger
			 * disk block size.
			 */
			buf_sz = hdr->b_size;
			buf_a_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);

			if ((write_asize + buf_a_sz) > target_sz) {
				full = B_TRUE;
				mutex_exit(hash_lock);
				break;
			}

			if (pio == NULL) {
				/*
				 * Insert a dummy header on the buflist so
				 * l2arc_write_done() can find where the
				 * write buffers begin without searching.
				 */
				mutex_enter(&dev->l2ad_mtx);
				list_insert_head(&dev->l2ad_buflist, head);
				mutex_exit(&dev->l2ad_mtx);

				cb = kmem_alloc(
				    sizeof (l2arc_write_callback_t), KM_SLEEP);
				cb->l2wcb_dev = dev;
				cb->l2wcb_head = head;
				pio = zio_root(spa, l2arc_write_done, cb,
				    ZIO_FLAG_CANFAIL);
			}

			/*
			 * Create and add a new L2ARC header.
			 */
			hdr->b_l2hdr.b_dev = dev;
			hdr->b_flags |= ARC_FLAG_L2_WRITING;
			/*
			 * Temporarily stash the data buffer in b_tmp_cdata.
			 * The subsequent write step will pick it up from
			 * there.  This is because we can't access
			 * b_l1hdr.b_buf without holding the hash_lock, which
			 * we in turn can't access without holding the ARC
			 * list locks (which we want to avoid during
			 * compression/writing).
			 */
			hdr->b_l2hdr.b_compress = ZIO_COMPRESS_OFF;
			hdr->b_l2hdr.b_asize = hdr->b_size;
			hdr->b_l2hdr.b_hits = 0;
			hdr->b_l1hdr.b_tmp_cdata = hdr->b_l1hdr.b_buf->b_data;

			/*
			 * Explicitly set the b_daddr field to a known
			 * value which means "invalid address".  This
			 * enables us to differentiate which stage of
			 * l2arc_write_buffers() the particular header
			 * is in (e.g. this loop, or the one below).
			 * ARC_FLAG_L2_WRITING is not enough to make
			 * this distinction, and we need to know in
			 * order to do proper l2arc vdev accounting in
			 * arc_release() and arc_hdr_destroy().
			 *
			 * Note, we can't use a new flag to distinguish
			 * the two stages because we don't hold the
			 * header's hash_lock below, in the second stage
			 * of this function.  Thus, we can't simply
			 * change the b_flags field to denote that the
			 * IO has been sent.  We can change the b_daddr
			 * field of the L2 portion, though, since we'll
			 * be holding the l2ad_mtx; which is why we're
			 * using it to denote the header's state change.
			 */
			hdr->b_l2hdr.b_daddr = L2ARC_ADDR_UNSET;
			hdr->b_flags |= ARC_FLAG_HAS_L2HDR;

			mutex_enter(&dev->l2ad_mtx);
			list_insert_head(&dev->l2ad_buflist, hdr);
			mutex_exit(&dev->l2ad_mtx);

			/*
			 * Compute and store the buffer cksum before
			 * writing.  On debug the cksum is verified first.
			 */
			arc_cksum_verify(hdr->b_l1hdr.b_buf);
			arc_cksum_compute(hdr->b_l1hdr.b_buf, B_TRUE);

			mutex_exit(hash_lock);

			write_sz += buf_sz;
			write_asize += buf_a_sz;
		}

		multilist_sublist_unlock(mls);

		if (full == B_TRUE)
			break;
	}

	/* No buffers selected for writing? */
	if (pio == NULL) {
		ASSERT0(write_sz);
		ASSERT(!HDR_HAS_L1HDR(head));
		kmem_cache_free(hdr_l2only_cache, head);
		return (0);
	}

	mutex_enter(&dev->l2ad_mtx);

	/*
	 * Note that elsewhere in this file arcstat_l2_asize
	 * and the used space on l2ad_vdev are updated using b_asize,
	 * which is not necessarily rounded up to the device block size.
	 * To keep accounting consistent we do the same here as well:
	 * stats_size accumulates the sum of b_asize of the written buffers,
	 * while write_asize accumulates the sum of b_asize rounded up
	 * to the device block size.
	 * The latter sum is used only to validate the correctness of the code.
	 */
	stats_size = 0;
	write_asize = 0;

	/*
	 * Now start writing the buffers.  We're starting at the write head
	 * and work backwards, retracing the course of the buffer selector
	 * loop above.
	 */
	for (hdr = list_prev(&dev->l2ad_buflist, head); hdr;
	    hdr = list_prev(&dev->l2ad_buflist, hdr)) {
		uint64_t buf_sz;
		uint64_t buf_a_sz;

		/*
		 * We rely on the L1 portion of the header below, so
		 * it's invalid for this header to have been evicted out
		 * of the ghost cache, prior to being written out.  The
		 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
		 */
		ASSERT(HDR_HAS_L1HDR(hdr));

		/*
		 * We shouldn't need to lock the buffer here, since we flagged
		 * it as ARC_FLAG_L2_WRITING in the previous step, but we must
		 * take care to only access its L2 cache parameters.  In
		 * particular, hdr->l1hdr.b_buf may be invalid by now due to
		 * ARC eviction.
		 */
		hdr->b_l2hdr.b_daddr = dev->l2ad_hand;

		if ((!l2arc_nocompress && HDR_L2COMPRESS(hdr)) &&
		    hdr->b_l2hdr.b_asize >= buf_compress_minsz) {
			if (l2arc_compress_buf(hdr)) {
				/*
				 * If compression succeeded, enable headroom
				 * boost on the next scan cycle.
				 */
				*headroom_boost = B_TRUE;
			}
		}

		/*
		 * Pick up the buffer data we had previously stashed away
		 * (and now potentially also compressed).
		 */
		buf_data = hdr->b_l1hdr.b_tmp_cdata;
		buf_sz = hdr->b_l2hdr.b_asize;

		/*
		 * We need to do this regardless if buf_sz is zero or
		 * not, otherwise, when this l2hdr is evicted we'll
		 * remove a reference that was never added.
		 */
		(void) refcount_add_many(&dev->l2ad_alloc, buf_sz, hdr);

		/* Compression may have squashed the buffer to zero length. */
		if (buf_sz != 0) {
			/*
			 * Buffers which are larger than l2arc_max_block_size
			 * after compression are skipped and removed from L2
			 * eligibility.
			 */
			if (buf_sz > l2arc_max_block_size) {
				hdr->b_l2hdr.b_daddr = L2ARC_ADDR_UNSET;
				continue;
			}

			wzio = zio_write_phys(pio, dev->l2ad_vdev,
			    dev->l2ad_hand, buf_sz, buf_data, ZIO_CHECKSUM_OFF,
			    NULL, NULL, ZIO_PRIORITY_ASYNC_WRITE,
			    ZIO_FLAG_CANFAIL, B_FALSE);

			DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev,
			    zio_t *, wzio);
			(void) zio_nowait(wzio);

			stats_size += buf_sz;

			/*
			 * Keep the clock hand suitably device-aligned.
			 */
			buf_a_sz = vdev_psize_to_asize(dev->l2ad_vdev, buf_sz);
			write_asize += buf_a_sz;
			dev->l2ad_hand += buf_a_sz;
		}
	}

	mutex_exit(&dev->l2ad_mtx);

	ASSERT3U(write_asize, <=, target_sz);
	ARCSTAT_BUMP(arcstat_l2_writes_sent);
	ARCSTAT_INCR(arcstat_l2_write_bytes, write_asize);
	ARCSTAT_INCR(arcstat_l2_size, write_sz);
	ARCSTAT_INCR(arcstat_l2_asize, stats_size);
	vdev_space_update(dev->l2ad_vdev, stats_size, 0, 0);

	/*
	 * Bump device hand to the device start if it is approaching the end.
	 * l2arc_evict() will already have evicted ahead for this case.
	 */
	if (dev->l2ad_hand >= (dev->l2ad_end - target_sz)) {
		dev->l2ad_hand = dev->l2ad_start;
		dev->l2ad_first = B_FALSE;
	}

	dev->l2ad_writing = B_TRUE;
	(void) zio_wait(pio);
	dev->l2ad_writing = B_FALSE;

	return (write_asize);
}
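
/*
 * Summary sketch (not part of the original source): the function above is
 * two passes over the same headers.  Pass 1 selects eligible buffers under
 * their hash locks and stashes the data pointer in b_tmp_cdata; pass 2
 * walks back from the write head under l2ad_mtx, optionally compresses,
 * and issues the physical writes:
 *
 *	select:  hash_lock -> l2arc_write_eligible() ->
 *	         stash b_data in b_tmp_cdata, b_daddr = L2ARC_ADDR_UNSET
 *	write:   b_daddr = l2ad_hand -> l2arc_compress_buf() ->
 *	         zio_write_phys() -> advance l2ad_hand (asize-aligned)
 */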
/*
 * Compresses an L2ARC buffer.
 * The data to be compressed must be prefilled in l1hdr.b_tmp_cdata and its
 * size in l2hdr->b_asize.  This routine tries to compress the data and
 * depending on the compression result there are three possible outcomes:
 * *) The buffer was incompressible.  The original l2hdr contents were left
 *    untouched and are ready for writing to an L2 device.
 * *) The buffer was all-zeros, so there is no need to write it to an L2
 *    device.  To indicate this situation b_tmp_cdata is NULL'ed, b_asize is
 *    set to zero and b_compress is set to ZIO_COMPRESS_EMPTY.
 * *) Compression succeeded and b_tmp_cdata was replaced with a temporary
 *    data buffer which holds the compressed data to be written, and b_asize
 *    tells us how much data there is.  b_compress is set to the appropriate
 *    compression algorithm.  Once writing is done, invoke
 *    l2arc_release_cdata_buf on this l2hdr to free this temporary buffer.
 *
 * Returns B_TRUE if compression succeeded, or B_FALSE if it didn't (the
 * buffer was incompressible).
 */
static boolean_t
l2arc_compress_buf(arc_buf_hdr_t *hdr)
{
	void *cdata;
	size_t csize, len, rounded;
	l2arc_buf_hdr_t *l2hdr;

	ASSERT(HDR_HAS_L2HDR(hdr));

	l2hdr = &hdr->b_l2hdr;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3U(l2hdr->b_compress, ==, ZIO_COMPRESS_OFF);
	ASSERT(hdr->b_l1hdr.b_tmp_cdata != NULL);

	len = l2hdr->b_asize;
	cdata = zio_data_buf_alloc(len);
	ASSERT3P(cdata, !=, NULL);
	csize = zio_compress_data(ZIO_COMPRESS_LZ4, hdr->b_l1hdr.b_tmp_cdata,
	    cdata, l2hdr->b_asize);

	rounded = P2ROUNDUP(csize, (size_t)SPA_MINBLOCKSIZE);
	if (rounded > csize) {
		bzero((char *)cdata + csize, rounded - csize);
		csize = rounded;
	}

	if (csize == 0) {
		/* zero block, indicate that there's nothing to write */
		zio_data_buf_free(cdata, len);
		l2hdr->b_compress = ZIO_COMPRESS_EMPTY;
		l2hdr->b_asize = 0;
		hdr->b_l1hdr.b_tmp_cdata = NULL;
		ARCSTAT_BUMP(arcstat_l2_compress_zeros);
		return (B_TRUE);
	} else if (csize > 0 && csize < len) {
		/*
		 * Compression succeeded, we'll keep the cdata around for
		 * writing and release it afterwards.
		 */
		l2hdr->b_compress = ZIO_COMPRESS_LZ4;
		l2hdr->b_asize = csize;
		hdr->b_l1hdr.b_tmp_cdata = cdata;
		ARCSTAT_BUMP(arcstat_l2_compress_successes);
		return (B_TRUE);
	} else {
		/*
		 * Compression failed, release the compressed buffer.
		 * l2hdr will be left unmodified.
		 */
		zio_data_buf_free(cdata, len);
		ARCSTAT_BUMP(arcstat_l2_compress_failures);
		return (B_FALSE);
	}
}
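
/*
 * Worked example (illustrative, not from the original source): a 16 KB
 * buffer that LZ4 reduces to 3100 bytes is padded up to
 * P2ROUNDUP(3100, 512) = 3584 bytes, since SPA_MINBLOCKSIZE is 512;
 * csize = 3584 < len = 16384, so the middle branch above keeps the
 * compressed copy.  Had csize rounded up to >= len, the buffer would be
 * treated as incompressible and written uncompressed instead.
 */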
/*
 * Decompresses a zio read back from an l2arc device.  On success, the
 * underlying zio's io_data buffer is overwritten by the uncompressed
 * version.  On decompression error (corrupt compressed stream), the
 * zio->io_error value is set to signal an I/O error.
 *
 * Please note that the compressed data stream is not checksummed, so
 * if the underlying device is experiencing data corruption, we may feed
 * corrupt data to the decompressor, so the decompressor needs to be
 * able to handle this situation (LZ4 does).
 */
static void
l2arc_decompress_zio(zio_t *zio, arc_buf_hdr_t *hdr, enum zio_compress c)
{
	uint64_t csize;
	void *cdata;

	ASSERT(L2ARC_IS_VALID_COMPRESS(c));

	if (zio->io_error != 0) {
		/*
		 * An io error has occurred, just restore the original io
		 * size in preparation for a main pool read.
		 */
		zio->io_orig_size = zio->io_size = hdr->b_size;
		return;
	}

	if (c == ZIO_COMPRESS_EMPTY) {
		/*
		 * An empty buffer results in a null zio, which means we
		 * need to fill its io_data after we're done restoring the
		 * buffer's contents.
		 */
		ASSERT(hdr->b_l1hdr.b_buf != NULL);
		bzero(hdr->b_l1hdr.b_buf->b_data, hdr->b_size);
		zio->io_data = zio->io_orig_data = hdr->b_l1hdr.b_buf->b_data;
	} else {
		ASSERT(zio->io_data != NULL);
		/*
		 * We copy the compressed data from the start of the arc buffer
		 * (the zio_read will have pulled in only what we need, the
		 * rest is garbage which we will overwrite at decompression)
		 * and then decompress back to the ARC data buffer.  This way
		 * we can minimize copying by simply decompressing back over
		 * the original compressed data (rather than decompressing to
		 * an aux buffer and then copying back the uncompressed
		 * buffer, which is likely to be much larger).
		 */
		csize = zio->io_size;
		cdata = zio_data_buf_alloc(csize);
		bcopy(zio->io_data, cdata, csize);
		if (zio_decompress_data(c, cdata, zio->io_data, csize,
		    hdr->b_size) != 0)
			zio->io_error = EIO;
		zio_data_buf_free(cdata, csize);
	}

	/* Restore the expected uncompressed IO size. */
	zio->io_orig_size = zio->io_size = hdr->b_size;
}
/*
 * Releases the temporary b_tmp_cdata buffer in an l2arc header structure.
 * This buffer serves as a temporary holder of compressed data while
 * the buffer entry is being written to an l2arc device.  Once that is
 * done, we can dispose of it.
 */
static void
l2arc_release_cdata_buf(arc_buf_hdr_t *hdr)
{
	enum zio_compress comp;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(HDR_HAS_L2HDR(hdr));
	comp = hdr->b_l2hdr.b_compress;
	ASSERT(comp == ZIO_COMPRESS_OFF || L2ARC_IS_VALID_COMPRESS(comp));

	if (comp == ZIO_COMPRESS_OFF) {
		/*
		 * In this case, b_tmp_cdata points to the same buffer
		 * as the arc_buf_t's b_data field.  We don't want to
		 * free it, since the arc_buf_t will handle that.
		 */
		hdr->b_l1hdr.b_tmp_cdata = NULL;
	} else if (comp == ZIO_COMPRESS_EMPTY) {
		/*
		 * In this case, b_tmp_cdata was compressed to an empty
		 * buffer, thus there's nothing to free and b_tmp_cdata
		 * should have been set to NULL in l2arc_write_buffers().
		 */
		ASSERT3P(hdr->b_l1hdr.b_tmp_cdata, ==, NULL);
	} else {
		/*
		 * If the data was compressed, then we've allocated a
		 * temporary buffer for it, so now we need to release it.
		 */
		ASSERT(hdr->b_l1hdr.b_tmp_cdata != NULL);
		zio_data_buf_free(hdr->b_l1hdr.b_tmp_cdata,
		    hdr->b_size);
		hdr->b_l1hdr.b_tmp_cdata = NULL;
	}
}
/*
 * This thread feeds the L2ARC at regular intervals.  This is the beating
 * heart of the L2ARC.
 */
static void
l2arc_feed_thread(void)
{
	callb_cpr_t cpr;
	l2arc_dev_t *dev;
	spa_t *spa;
	uint64_t size, wrote;
	clock_t begin, next = ddi_get_lbolt();
	boolean_t headroom_boost = B_FALSE;
	fstrans_cookie_t cookie;

	CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG);

	mutex_enter(&l2arc_feed_thr_lock);

	cookie = spl_fstrans_mark();
	while (l2arc_thread_exit == 0) {
		CALLB_CPR_SAFE_BEGIN(&cpr);
		(void) cv_timedwait_sig(&l2arc_feed_thr_cv,
		    &l2arc_feed_thr_lock, next);
		CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock);
		next = ddi_get_lbolt() + hz;

		/*
		 * Quick check for L2ARC devices.
		 */
		mutex_enter(&l2arc_dev_mtx);
		if (l2arc_ndev == 0) {
			mutex_exit(&l2arc_dev_mtx);
			continue;
		}
		mutex_exit(&l2arc_dev_mtx);
		begin = ddi_get_lbolt();

		/*
		 * This selects the next l2arc device to write to, and in
		 * doing so the next spa to feed from: dev->l2ad_spa.  This
		 * will return NULL if there are now no l2arc devices or if
		 * they are all faulted.
		 *
		 * If a device is returned, its spa's config lock is also
		 * held to prevent device removal.  l2arc_dev_get_next()
		 * will grab and release l2arc_dev_mtx.
		 */
		if ((dev = l2arc_dev_get_next()) == NULL)
			continue;

		spa = dev->l2ad_spa;
		ASSERT(spa != NULL);

		/*
		 * If the pool is read-only then force the feed thread to
		 * sleep a little longer.
		 */
		if (!spa_writeable(spa)) {
			next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz;
			spa_config_exit(spa, SCL_L2ARC, dev);
			continue;
		}

		/*
		 * Avoid contributing to memory pressure.
		 */
		if (arc_reclaim_needed()) {
			ARCSTAT_BUMP(arcstat_l2_abort_lowmem);
			spa_config_exit(spa, SCL_L2ARC, dev);
			continue;
		}

		ARCSTAT_BUMP(arcstat_l2_feeds);

		size = l2arc_write_size();

		/*
		 * Evict L2ARC buffers that will be overwritten.
		 */
		l2arc_evict(dev, size, B_FALSE);

		/*
		 * Write ARC buffers.
		 */
		wrote = l2arc_write_buffers(spa, dev, size, &headroom_boost);

		/*
		 * Calculate interval between writes.
		 */
		next = l2arc_write_interval(begin, size, wrote);
		spa_config_exit(spa, SCL_L2ARC, dev);
	}
	spl_fstrans_unmark(cookie);

	l2arc_thread_exit = 0;
	cv_broadcast(&l2arc_feed_thr_cv);
	CALLB_CPR_EXIT(&cpr);		/* drops l2arc_feed_thr_lock */
	thread_exit();
}
boolean_t
l2arc_vdev_present(vdev_t *vd)
{
	l2arc_dev_t *dev;

	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev != NULL;
	    dev = list_next(l2arc_dev_list, dev)) {
		if (dev->l2ad_vdev == vd)
			break;
	}
	mutex_exit(&l2arc_dev_mtx);

	return (dev != NULL);
}
/*
 * Add a vdev for use by the L2ARC.  By this point the spa has already
 * validated the vdev and opened it.
 */
void
l2arc_add_vdev(spa_t *spa, vdev_t *vd)
{
	l2arc_dev_t *adddev;

	ASSERT(!l2arc_vdev_present(vd));

	/*
	 * Create a new l2arc device entry.
	 */
	adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
	adddev->l2ad_spa = spa;
	adddev->l2ad_vdev = vd;
	adddev->l2ad_start = VDEV_LABEL_START_SIZE;
	adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
	adddev->l2ad_hand = adddev->l2ad_start;
	adddev->l2ad_first = B_TRUE;
	adddev->l2ad_writing = B_FALSE;
	list_link_init(&adddev->l2ad_node);

	mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL);
	/*
	 * This is a list of all ARC buffers that are still valid on the
	 * device.
	 */
	list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));

	vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
	refcount_create(&adddev->l2ad_alloc);

	/*
	 * Add device to global list
	 */
	mutex_enter(&l2arc_dev_mtx);
	list_insert_head(l2arc_dev_list, adddev);
	atomic_inc_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);
}
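
/*
 * Illustrative note (not part of the original source): the writable window
 * on a new cache device is [VDEV_LABEL_START_SIZE, VDEV_LABEL_START_SIZE +
 * vdev_get_min_asize(vd)), so l2ad_hand starts just past the front vdev
 * labels and, together with l2arc_evict(), wraps within that window; the
 * vdev_space_update() call above accounts for the full window size up
 * front.
 */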
/*
 * Remove a vdev from the L2ARC.
 */
void
l2arc_remove_vdev(vdev_t *vd)
{
	l2arc_dev_t *dev, *nextdev, *remdev = NULL;

	/*
	 * Find the device by vdev
	 */
	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev; dev = nextdev) {
		nextdev = list_next(l2arc_dev_list, dev);
		if (vd == dev->l2ad_vdev) {
			remdev = dev;
			break;
		}
	}
	ASSERT(remdev != NULL);

	/*
	 * Remove device from global list
	 */
	list_remove(l2arc_dev_list, remdev);
	l2arc_dev_last = NULL;		/* may have been invalidated */
	atomic_dec_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);

	/*
	 * Clear all buflists and ARC references.  L2ARC device flush.
	 */
	l2arc_evict(remdev, 0, B_TRUE);
	list_destroy(&remdev->l2ad_buflist);
	mutex_destroy(&remdev->l2ad_mtx);
	refcount_destroy(&remdev->l2ad_alloc);
	kmem_free(remdev, sizeof (l2arc_dev_t));
}
void
l2arc_init(void)
{
	l2arc_thread_exit = 0;
	l2arc_writes_sent = 0;
	l2arc_writes_done = 0;

	mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL);
	cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL);
	mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL);
	mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL);

	l2arc_dev_list = &L2ARC_dev_list;
	l2arc_free_on_write = &L2ARC_free_on_write;
	list_create(l2arc_dev_list, sizeof (l2arc_dev_t),
	    offsetof(l2arc_dev_t, l2ad_node));
	list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t),
	    offsetof(l2arc_data_free_t, l2df_list_node));
}

void
l2arc_fini(void)
{
	/*
	 * This is called from dmu_fini(), which is called from spa_fini();
	 * Because of this, we can assume that all l2arc devices have
	 * already been removed when the pools themselves were removed.
	 */

	l2arc_do_free_on_write();

	mutex_destroy(&l2arc_feed_thr_lock);
	cv_destroy(&l2arc_feed_thr_cv);
	mutex_destroy(&l2arc_dev_mtx);
	mutex_destroy(&l2arc_free_on_write_mtx);

	list_destroy(l2arc_dev_list);
	list_destroy(l2arc_free_on_write);
}
void
l2arc_start(void)
{
	if (!(spa_mode_global & FWRITE))
		return;

	(void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0,
	    TS_RUN, defclsyspri);
}

void
l2arc_stop(void)
{
	if (!(spa_mode_global & FWRITE))
		return;

	mutex_enter(&l2arc_feed_thr_lock);
	cv_signal(&l2arc_feed_thr_cv);	/* kick thread out of startup */
	l2arc_thread_exit = 1;
	while (l2arc_thread_exit != 0)
		cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
	mutex_exit(&l2arc_feed_thr_lock);
}
#if defined(_KERNEL) && defined(HAVE_SPL)
EXPORT_SYMBOL(arc_buf_size);
EXPORT_SYMBOL(arc_write);
EXPORT_SYMBOL(arc_read);
EXPORT_SYMBOL(arc_buf_remove_ref);
EXPORT_SYMBOL(arc_buf_info);
EXPORT_SYMBOL(arc_getbuf_func);
EXPORT_SYMBOL(arc_add_prune_callback);
EXPORT_SYMBOL(arc_remove_prune_callback);

module_param(zfs_arc_min, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_min, "Min arc size");

module_param(zfs_arc_max, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_max, "Max arc size");

module_param(zfs_arc_meta_limit, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_limit, "Meta limit for arc size");

module_param(zfs_arc_meta_min, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_meta_min, "Min arc metadata");

module_param(zfs_arc_meta_prune, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_prune, "Meta objects to scan for prune");

module_param(zfs_arc_meta_adjust_restarts, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_adjust_restarts,
	"Limit number of restarts in arc_adjust_meta");

module_param(zfs_arc_meta_strategy, int, 0644);
MODULE_PARM_DESC(zfs_arc_meta_strategy, "Meta reclaim strategy");

module_param(zfs_arc_grow_retry, int, 0644);
MODULE_PARM_DESC(zfs_arc_grow_retry, "Seconds before growing arc size");

module_param(zfs_arc_p_aggressive_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_aggressive_disable, "disable aggressive arc_p grow");

module_param(zfs_arc_p_dampener_disable, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_dampener_disable, "disable arc_p adapt dampener");

module_param(zfs_arc_shrink_shift, int, 0644);
MODULE_PARM_DESC(zfs_arc_shrink_shift, "log2(fraction of arc to reclaim)");

module_param(zfs_arc_p_min_shift, int, 0644);
MODULE_PARM_DESC(zfs_arc_p_min_shift, "arc_c shift to calc min/max arc_p");

module_param(zfs_disable_dup_eviction, int, 0644);
MODULE_PARM_DESC(zfs_disable_dup_eviction, "disable duplicate buffer eviction");

module_param(zfs_arc_average_blocksize, int, 0444);
MODULE_PARM_DESC(zfs_arc_average_blocksize, "Target average block size");

module_param(zfs_arc_min_prefetch_lifespan, int, 0644);
MODULE_PARM_DESC(zfs_arc_min_prefetch_lifespan, "Min life of prefetch block");

module_param(zfs_arc_num_sublists_per_state, int, 0644);
MODULE_PARM_DESC(zfs_arc_num_sublists_per_state,
	"Number of sublists used in each of the ARC state lists");

module_param(l2arc_write_max, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_max, "Max write bytes per interval");

module_param(l2arc_write_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_write_boost, "Extra write bytes during device warmup");

module_param(l2arc_headroom, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom, "Number of max device writes to precache");

module_param(l2arc_headroom_boost, ulong, 0644);
MODULE_PARM_DESC(l2arc_headroom_boost, "Compressed l2arc_headroom multiplier");

module_param(l2arc_max_block_size, ulong, 0644);
MODULE_PARM_DESC(l2arc_max_block_size, "Skip L2ARC buffers larger than N");

module_param(l2arc_feed_secs, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_secs, "Seconds between L2ARC writing");

module_param(l2arc_feed_min_ms, ulong, 0644);
MODULE_PARM_DESC(l2arc_feed_min_ms, "Min feed interval in milliseconds");

module_param(l2arc_noprefetch, int, 0644);
MODULE_PARM_DESC(l2arc_noprefetch, "Skip caching prefetched buffers");

module_param(l2arc_nocompress, int, 0644);
MODULE_PARM_DESC(l2arc_nocompress, "Skip compressing L2ARC buffers");

module_param(l2arc_feed_again, int, 0644);
MODULE_PARM_DESC(l2arc_feed_again, "Turbo L2ARC warmup");

module_param(l2arc_norw, int, 0644);
MODULE_PARM_DESC(l2arc_norw, "No reads during writes");

module_param(zfs_arc_lotsfree_percent, int, 0644);
MODULE_PARM_DESC(zfs_arc_lotsfree_percent,
	"System free memory I/O throttle in bytes");

module_param(zfs_arc_sys_free, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_sys_free, "System free memory target size in bytes");

module_param(zfs_arc_dnode_limit, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_dnode_limit, "Minimum bytes of dnodes in arc");

module_param(zfs_arc_dnode_reduce_percent, ulong, 0644);
MODULE_PARM_DESC(zfs_arc_dnode_reduce_percent,
	"Percentage of excess dnodes to try to unpin");
#endif
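
/*
 * Usage note (illustrative, relying on standard Linux module-parameter
 * mechanics rather than anything specific to this file): parameters
 * declared above with mode 0644 can be set at module load time, e.g.
 *
 *	modprobe zfs l2arc_write_max=16777216
 *
 * or changed at runtime through sysfs:
 *
 *	echo 16777216 > /sys/module/zfs/parameters/l2arc_write_max
 *
 * Parameters declared with mode 0444, such as zfs_arc_average_blocksize,
 * are read-only at runtime and can only be set at load time.
 */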