 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 */

/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, Joyent, Inc. All rights reserved.
 * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
 * Copyright (c) 2014 by Saso Kiselkov. All rights reserved.
 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
 */
/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory. This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about. Our cache is not so simple. At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them. Blocks are only evictable
 * when there are no external references active. This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space. In these circumstances we are unable to adjust the cache
 * size. To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss. Our model has a variable sized cache. It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size. So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict. In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes). We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
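 *
 * To make point 3 concrete, here is a hedged sketch (not the routine used
 * later in this file; the real work is done by arc_evict_state() and related
 * functions, which also handle states, locks and reference counts) of
 * evicting a set of variable-sized blocks that approximates the space needed
 * by a miss. evictable() and evict_one() are hypothetical helpers:
 *
 *	static uint64_t
 *	evict_approx(list_t *list, uint64_t needed)
 *	{
 *		uint64_t freed = 0;
 *		arc_buf_hdr_t *hdr, *prev;
 *
 *		for (hdr = list_tail(list); hdr != NULL && freed < needed;
 *		    hdr = prev) {
 *			prev = list_prev(list, hdr);
 *			if (!evictable(hdr))
 *				continue;
 *			freed += HDR_GET_LSIZE(hdr);
 *			evict_one(hdr);
 *		}
 *		return (freed);
 *	}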
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists. The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table. It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock. Also note that
 * the active state mutex must be held before the ghost state mutex.
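 *
 * A hedged sketch of the two locking paths described above (list_lock is a
 * stand-in for the per-state/per-sublist mutex used by the real code):
 *
 *	kmutex_t *hash_lock;
 *	arc_buf_hdr_t *hdr;
 *
 *	(1) lookup path: the hash lock is returned already held
 *	hdr = buf_hash_find(spa, bp, &hash_lock);
 *	if (hdr != NULL) {
 *		... use or take a reference on hdr ...
 *		mutex_exit(hash_lock);
 *	}
 *
 *	(2) list path: never block on a hash lock while holding a list lock
 *	mutex_enter(list_lock);
 *	if (mutex_tryenter(HDR_LOCK(hdr))) {
 *		... evict or move hdr ...
 *		mutex_exit(HDR_LOCK(hdr));
 *	} else {
 *		ARCSTAT_BUMP(arcstat_mutex_miss);
 *	}
 *	mutex_exit(list_lock);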
 *
 * It is also possible to register a callback which is run when the
 * arc_meta_limit is reached and no buffers can be safely evicted. In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed and the arc_meta_limit honored. For example,
 * when using the ZPL each dentry holds a reference on a znode. These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 *	- L2ARC buflist creation
 *	- L2ARC buflist eviction
 *	- L2ARC write completion, which walks L2ARC buflists
 *	- ARC header destruction, as it removes from L2ARC buflists
 *	- ARC header release, as it removes from L2ARC buflists
 */
/*
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot have its data accessed directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
 *
 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
 * ability to store the physical data (b_pabd) associated with the DVA of the
 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
 * it will match its on-disk compression characteristics. This behavior can be
 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
 * compressed ARC functionality is disabled, the b_pabd will point to an
 * uncompressed version of the on-disk data.
 *
 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
 * consumer. The ARC will provide references to this data and will keep it
 * cached until it is no longer in use. The ARC caches only the L1ARC's physical
 * data block and will evict any arc_buf_t that is no longer referenced. The
 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the
 * "overhead_size" kstat.
 *
 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
 * compressed form. The typical case is that consumers will want uncompressed
 * data, and when that happens a new data buffer is allocated where the data is
 * decompressed for them to use. Currently the only consumer who wants
 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
 * with the arc_buf_hdr_t.
 *
 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The
 * first one is owned by a compressed send consumer (and therefore references
 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be
 * used by any other consumer (and has its own uncompressed copy of the data
 * buffer):
 *
 * [ASCII diagram elided: the arc_buf_hdr_t's b_buf points to the compressed,
 *  shared arc_buf_t (b_comp = T), whose b_next points to an uncompressed
 *  arc_buf_t (b_comp = F, b_next = NULL); the hdr's b_pabd and the first
 *  buf's b_data both reference the shared compressed data, while the second
 *  buf's b_data points at its own uncompressed copy.]
 *
 * When a consumer reads a block, the ARC must first look to see if the
 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
 * arc_buf_t and either copies uncompressed data into a new data buffer from an
 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
 * hdr is compressed and the desired compression characteristics of the
 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
 * the last buffer in the hdr's b_buf list; however, a shared compressed buf can
 * be anywhere in the hdr's list.
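 *
 * In outline, that decision looks like the following hedged sketch (the
 * actual allocation and fill logic appears later in this file and also
 * handles encrypted and shared-buffer corner cases):
 *
 *	if (compressed_wanted &&
 *	    HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF) {
 *		... share the hdr's b_pabd with the new arc_buf_t ...
 *	} else if (an uncompressed arc_buf_t already exists) {
 *		... bcopy its b_data into the new buffer ...
 *	} else if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF) {
 *		... decompress b_pabd into the new buffer ...
 *	} else {
 *		... share or copy the already-uncompressed b_pabd ...
 *	}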
 *
 * The diagram below shows an example of an uncompressed ARC hdr that is
 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
 * the last element in the buf list):
 *
 * [ASCII diagram elided: the arc_buf_hdr_t's b_buf list ends in a shared
 *  arc_buf_t whose b_data points at the same uncompressed bytes as the hdr's
 *  b_pabd; the preceding arc_buf_t in the list keeps its own private copy.]
 *
 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
 * since the physical block is about to be rewritten. The new data contents
 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
 * it may compress the data before writing it to disk. The ARC will be called
 * with the transformed data and will bcopy the transformed on-disk block into
 * a newly allocated b_pabd. Writes are always done into buffers which have
 * either been loaned (and hence are new and don't have other readers) or
 * buffers which have been released (and hence have their own hdr, if there
 * were originally other readers of the buf's original hdr). This ensures that
 * the ARC only needs to update a single buf and its hdr after a write occurs.
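 *
 * From a consumer's point of view the ordering is roughly the following
 * hedged sketch (arc_loan_buf() and arc_write() are the public entry points;
 * the write callbacks invoked by the zio pipeline perform the b_pabd copy
 * described above):
 *
 *	buf = arc_loan_buf(spa, B_FALSE, size);	new buf, no other readers
 *	... fill buf->b_data with the new contents ...
 *	zio = arc_write(pio, spa, txg, bp, buf, ...);
 *	the pipeline may compress/encrypt the data, after which the ARC
 *	bcopies the transformed block into a newly allocated b_pabd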
 *
 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
 * that when compressed ARC is enabled, the L2ARC blocks are identical
 * to the on-disk blocks in the main data pool. This provides a significant
 * advantage since the ARC can leverage the bp's checksum when reading from the
 * L2ARC to determine if the contents are valid. However, if the compressed
 * ARC is disabled, then the L2ARC's block must be transformed to look
 * like the physical block in the main data pool before comparing the
 * checksum and determining its validity.
 *
 * The L1ARC has a slightly different system for storing encrypted data.
 * Raw (encrypted + possibly compressed) data has a few subtle differences from
 * data that is just compressed. The biggest difference is that it is not
 * possible to decrypt encrypted data (or vice versa) if the keys aren't loaded.
 * The other difference is that encryption cannot be treated as a suggestion.
 * If a caller would prefer compressed data, but they actually wind up with
 * uncompressed data, the worst thing that could happen is a performance hit.
 * If the caller requests encrypted data, however, we must be
 * sure they actually get it or else secret information could be leaked. Raw
 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore,
 * may have both an encrypted version and a decrypted version of its data at
 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is
 * copied out of this header. To avoid complications with b_pabd, raw buffers
 * cannot be shared.
 */
#include <sys/spa_impl.h>
#include <sys/zio_compress.h>
#include <sys/zio_checksum.h>
#include <sys/zfs_context.h>
#include <sys/refcount.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/dsl_pool.h>
#include <sys/zio_checksum.h>
#include <sys/multilist.h>
#include <sys/fm/fs/zfs.h>
#include <sys/shrinker.h>
#include <sys/vmsystm.h>
#include <linux/page_compat.h>
#include <sys/callb.h>
#include <sys/kstat.h>
#include <sys/dmu_tx.h>
#include <zfs_fletcher.h>
#include <sys/arc_impl.h>
#include <sys/trace_arc.h>
#include <sys/aggsum.h>
#include <sys/cityhash.h>
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
boolean_t arc_watch = B_FALSE;

static kmutex_t arc_reclaim_lock;
static kcondvar_t arc_reclaim_thread_cv;
static boolean_t arc_reclaim_thread_exit;
static kcondvar_t arc_reclaim_waiters_cv;
/*
 * The number of headers to evict in arc_evict_state_impl() before
 * dropping the sublist lock and evicting from another sublist. A lower
 * value means we're more likely to evict the "correct" header (i.e. the
 * oldest header in the arc state), but comes with higher overhead
 * (i.e. more invocations of arc_evict_state_impl()).
 */
int zfs_arc_evict_batch_limit = 10;
/* number of seconds before growing cache again */
static int arc_grow_retry = 5;

/* shift of arc_c for calculating overflow limit in arc_get_data_impl */
int zfs_arc_overflow_shift = 8;

/* shift of arc_c for calculating both min and max arc_p */
static int arc_p_min_shift = 4;

/* log2(fraction of arc to reclaim) */
static int arc_shrink_shift = 7;

/* percent of pagecache to reclaim arc to */
static uint_t zfs_arc_pc_percent = 0;

/*
 * log2(fraction of ARC which must be free to allow growing).
 * I.e. If there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
 *
 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
 * we will still not allow it to grow.
 */
int arc_no_grow_shift = 5;
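
/*
 * Illustrative arithmetic with hypothetical numbers (not from the original
 * sources): with arc_c at 4 GiB, new reads may grow the ARC only while at
 * least
 *
 *	arc_c >> arc_no_grow_shift	= 4 GiB >> 5 = 128 MiB
 *
 * of memory remains free; a shrink request, by contrast, releases
 *
 *	arc_c >> arc_shrink_shift	= 4 GiB >> 7 =  32 MiB
 *
 * which is below the 128 MiB growth threshold, so shrinking cannot by itself
 * re-enable growth -- the reason arc_no_grow_shift must stay smaller than
 * arc_shrink_shift.
 */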
/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
static int arc_min_prefetch_ms;
static int arc_min_prescient_prefetch_ms;

/*
 * If this percent of memory is free, don't throttle.
 */
int arc_lotsfree_percent = 10;

/*
 * The arc has filled available memory and has now warmed up.
 */
static boolean_t arc_warm;

/*
 * log2 fraction of the zio arena to keep free.
 */
int arc_zio_arena_free_shift = 2;
/*
 * These tunables are for performance analysis.
 */
unsigned long zfs_arc_max = 0;
unsigned long zfs_arc_min = 0;
unsigned long zfs_arc_meta_limit = 0;
unsigned long zfs_arc_meta_min = 0;
unsigned long zfs_arc_dnode_limit = 0;
unsigned long zfs_arc_dnode_reduce_percent = 10;
int zfs_arc_grow_retry = 0;
int zfs_arc_shrink_shift = 0;
int zfs_arc_p_min_shift = 0;
int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */

/*
 * ARC dirty data constraints for arc_tempreserve_space() throttle.
 */
unsigned long zfs_arc_dirty_limit_percent = 50;	/* total dirty data limit */
unsigned long zfs_arc_anon_limit_percent = 25;	/* anon block dirty limit */
unsigned long zfs_arc_pool_dirty_percent = 20;	/* each pool's anon allowance */
/*
 * Enable or disable compressed arc buffers.
 */
int zfs_compressed_arc_enabled = B_TRUE;

/*
 * ARC will evict meta buffers that exceed arc_meta_limit. This
 * tunable makes arc_meta_limit adjustable for different workloads.
 */
unsigned long zfs_arc_meta_limit_percent = 75;

/*
 * Percentage that can be consumed by dnodes of ARC meta buffers.
 */
unsigned long zfs_arc_dnode_limit_percent = 10;

/*
 * These tunables are Linux specific
 */
unsigned long zfs_arc_sys_free = 0;
int zfs_arc_min_prefetch_ms = 0;
int zfs_arc_min_prescient_prefetch_ms = 0;
int zfs_arc_p_dampener_disable = 1;
int zfs_arc_meta_prune = 10000;
int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
int zfs_arc_meta_adjust_restarts = 4096;
int zfs_arc_lotsfree_percent = 10;
static arc_state_t ARC_anon;
static arc_state_t ARC_mru;
static arc_state_t ARC_mru_ghost;
static arc_state_t ARC_mfu;
static arc_state_t ARC_mfu_ghost;
static arc_state_t ARC_l2c_only;
438 typedef struct arc_stats
{
439 kstat_named_t arcstat_hits
;
440 kstat_named_t arcstat_misses
;
441 kstat_named_t arcstat_demand_data_hits
;
442 kstat_named_t arcstat_demand_data_misses
;
443 kstat_named_t arcstat_demand_metadata_hits
;
444 kstat_named_t arcstat_demand_metadata_misses
;
445 kstat_named_t arcstat_prefetch_data_hits
;
446 kstat_named_t arcstat_prefetch_data_misses
;
447 kstat_named_t arcstat_prefetch_metadata_hits
;
448 kstat_named_t arcstat_prefetch_metadata_misses
;
449 kstat_named_t arcstat_mru_hits
;
450 kstat_named_t arcstat_mru_ghost_hits
;
451 kstat_named_t arcstat_mfu_hits
;
452 kstat_named_t arcstat_mfu_ghost_hits
;
453 kstat_named_t arcstat_deleted
;
455 * Number of buffers that could not be evicted because the hash lock
456 * was held by another thread. The lock may not necessarily be held
457 * by something using the same buffer, since hash locks are shared
458 * by multiple buffers.
460 kstat_named_t arcstat_mutex_miss
;
462 * Number of buffers skipped when updating the access state due to the
463 * header having already been released after acquiring the hash lock.
465 kstat_named_t arcstat_access_skip
;
467 * Number of buffers skipped because they have I/O in progress, are
468 * indirect prefetch buffers that have not lived long enough, or are
469 * not from the spa we're trying to evict from.
471 kstat_named_t arcstat_evict_skip
;
473 * Number of times arc_evict_state() was unable to evict enough
474 * buffers to reach its target amount.
476 kstat_named_t arcstat_evict_not_enough
;
477 kstat_named_t arcstat_evict_l2_cached
;
478 kstat_named_t arcstat_evict_l2_eligible
;
479 kstat_named_t arcstat_evict_l2_ineligible
;
480 kstat_named_t arcstat_evict_l2_skip
;
481 kstat_named_t arcstat_hash_elements
;
482 kstat_named_t arcstat_hash_elements_max
;
483 kstat_named_t arcstat_hash_collisions
;
484 kstat_named_t arcstat_hash_chains
;
485 kstat_named_t arcstat_hash_chain_max
;
486 kstat_named_t arcstat_p
;
487 kstat_named_t arcstat_c
;
488 kstat_named_t arcstat_c_min
;
489 kstat_named_t arcstat_c_max
;
490 /* Not updated directly; only synced in arc_kstat_update. */
491 kstat_named_t arcstat_size
;
493 * Number of compressed bytes stored in the arc_buf_hdr_t's b_pabd.
494 * Note that the compressed bytes may match the uncompressed bytes
495 * if the block is either not compressed or compressed arc is disabled.
497 kstat_named_t arcstat_compressed_size
;
499 * Uncompressed size of the data stored in b_pabd. If compressed
500 * arc is disabled then this value will be identical to the stat
503 kstat_named_t arcstat_uncompressed_size
;
505 * Number of bytes stored in all the arc_buf_t's. This is classified
506 * as "overhead" since this data is typically short-lived and will
507 * be evicted from the arc when it becomes unreferenced unless the
508 * zfs_keep_uncompressed_metadata or zfs_keep_uncompressed_level
509 * values have been set (see comment in dbuf.c for more information).
511 kstat_named_t arcstat_overhead_size
;
513 * Number of bytes consumed by internal ARC structures necessary
514 * for tracking purposes; these structures are not actually
515 * backed by ARC buffers. This includes arc_buf_hdr_t structures
516 * (allocated via arc_buf_hdr_t_full and arc_buf_hdr_t_l2only
517 * caches), and arc_buf_t structures (allocated via arc_buf_t
519 * Not updated directly; only synced in arc_kstat_update.
521 kstat_named_t arcstat_hdr_size
;
523 * Number of bytes consumed by ARC buffers of type equal to
524 * ARC_BUFC_DATA. This is generally consumed by buffers backing
525 * on disk user data (e.g. plain file contents).
526 * Not updated directly; only synced in arc_kstat_update.
528 kstat_named_t arcstat_data_size
;
530 * Number of bytes consumed by ARC buffers of type equal to
531 * ARC_BUFC_METADATA. This is generally consumed by buffers
532 * backing on disk data that is used for internal ZFS
533 * structures (e.g. ZAP, dnode, indirect blocks, etc).
534 * Not updated directly; only synced in arc_kstat_update.
536 kstat_named_t arcstat_metadata_size
;
538 * Number of bytes consumed by dmu_buf_impl_t objects.
539 * Not updated directly; only synced in arc_kstat_update.
541 kstat_named_t arcstat_dbuf_size
;
543 * Number of bytes consumed by dnode_t objects.
544 * Not updated directly; only synced in arc_kstat_update.
546 kstat_named_t arcstat_dnode_size
;
548 * Number of bytes consumed by bonus buffers.
549 * Not updated directly; only synced in arc_kstat_update.
551 kstat_named_t arcstat_bonus_size
;
553 * Total number of bytes consumed by ARC buffers residing in the
554 * arc_anon state. This includes *all* buffers in the arc_anon
555 * state; e.g. data, metadata, evictable, and unevictable buffers
556 * are all included in this value.
557 * Not updated directly; only synced in arc_kstat_update.
559 kstat_named_t arcstat_anon_size
;
561 * Number of bytes consumed by ARC buffers that meet the
562 * following criteria: backing buffers of type ARC_BUFC_DATA,
563 * residing in the arc_anon state, and are eligible for eviction
564 * (e.g. have no outstanding holds on the buffer).
565 * Not updated directly; only synced in arc_kstat_update.
567 kstat_named_t arcstat_anon_evictable_data
;
569 * Number of bytes consumed by ARC buffers that meet the
570 * following criteria: backing buffers of type ARC_BUFC_METADATA,
571 * residing in the arc_anon state, and are eligible for eviction
572 * (e.g. have no outstanding holds on the buffer).
573 * Not updated directly; only synced in arc_kstat_update.
575 kstat_named_t arcstat_anon_evictable_metadata
;
577 * Total number of bytes consumed by ARC buffers residing in the
578 * arc_mru state. This includes *all* buffers in the arc_mru
579 * state; e.g. data, metadata, evictable, and unevictable buffers
580 * are all included in this value.
581 * Not updated directly; only synced in arc_kstat_update.
583 kstat_named_t arcstat_mru_size
;
585 * Number of bytes consumed by ARC buffers that meet the
586 * following criteria: backing buffers of type ARC_BUFC_DATA,
587 * residing in the arc_mru state, and are eligible for eviction
588 * (e.g. have no outstanding holds on the buffer).
589 * Not updated directly; only synced in arc_kstat_update.
591 kstat_named_t arcstat_mru_evictable_data
;
593 * Number of bytes consumed by ARC buffers that meet the
594 * following criteria: backing buffers of type ARC_BUFC_METADATA,
595 * residing in the arc_mru state, and are eligible for eviction
596 * (e.g. have no outstanding holds on the buffer).
597 * Not updated directly; only synced in arc_kstat_update.
599 kstat_named_t arcstat_mru_evictable_metadata
;
601 * Total number of bytes that *would have been* consumed by ARC
602 * buffers in the arc_mru_ghost state. The key thing to note
603 * here, is the fact that this size doesn't actually indicate
604 * RAM consumption. The ghost lists only consist of headers and
605 * don't actually have ARC buffers linked off of these headers.
606 * Thus, *if* the headers had associated ARC buffers, these
607 * buffers *would have* consumed this number of bytes.
608 * Not updated directly; only synced in arc_kstat_update.
610 kstat_named_t arcstat_mru_ghost_size
;
612 * Number of bytes that *would have been* consumed by ARC
613 * buffers that are eligible for eviction, of type
614 * ARC_BUFC_DATA, and linked off the arc_mru_ghost state.
615 * Not updated directly; only synced in arc_kstat_update.
617 kstat_named_t arcstat_mru_ghost_evictable_data
;
619 * Number of bytes that *would have been* consumed by ARC
620 * buffers that are eligible for eviction, of type
621 * ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
622 * Not updated directly; only synced in arc_kstat_update.
624 kstat_named_t arcstat_mru_ghost_evictable_metadata
;
626 * Total number of bytes consumed by ARC buffers residing in the
627 * arc_mfu state. This includes *all* buffers in the arc_mfu
628 * state; e.g. data, metadata, evictable, and unevictable buffers
629 * are all included in this value.
630 * Not updated directly; only synced in arc_kstat_update.
632 kstat_named_t arcstat_mfu_size
;
634 * Number of bytes consumed by ARC buffers that are eligible for
635 * eviction, of type ARC_BUFC_DATA, and reside in the arc_mfu
637 * Not updated directly; only synced in arc_kstat_update.
639 kstat_named_t arcstat_mfu_evictable_data
;
641 * Number of bytes consumed by ARC buffers that are eligible for
642 * eviction, of type ARC_BUFC_METADATA, and reside in the
644 * Not updated directly; only synced in arc_kstat_update.
646 kstat_named_t arcstat_mfu_evictable_metadata
;
648 * Total number of bytes that *would have been* consumed by ARC
649 * buffers in the arc_mfu_ghost state. See the comment above
650 * arcstat_mru_ghost_size for more details.
651 * Not updated directly; only synced in arc_kstat_update.
653 kstat_named_t arcstat_mfu_ghost_size
;
655 * Number of bytes that *would have been* consumed by ARC
656 * buffers that are eligible for eviction, of type
657 * ARC_BUFC_DATA, and linked off the arc_mfu_ghost state.
658 * Not updated directly; only synced in arc_kstat_update.
660 kstat_named_t arcstat_mfu_ghost_evictable_data
;
662 * Number of bytes that *would have been* consumed by ARC
663 * buffers that are eligible for eviction, of type
664 * ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
665 * Not updated directly; only synced in arc_kstat_update.
667 kstat_named_t arcstat_mfu_ghost_evictable_metadata
;
668 kstat_named_t arcstat_l2_hits
;
669 kstat_named_t arcstat_l2_misses
;
670 kstat_named_t arcstat_l2_feeds
;
671 kstat_named_t arcstat_l2_rw_clash
;
672 kstat_named_t arcstat_l2_read_bytes
;
673 kstat_named_t arcstat_l2_write_bytes
;
674 kstat_named_t arcstat_l2_writes_sent
;
675 kstat_named_t arcstat_l2_writes_done
;
676 kstat_named_t arcstat_l2_writes_error
;
677 kstat_named_t arcstat_l2_writes_lock_retry
;
678 kstat_named_t arcstat_l2_evict_lock_retry
;
679 kstat_named_t arcstat_l2_evict_reading
;
680 kstat_named_t arcstat_l2_evict_l1cached
;
681 kstat_named_t arcstat_l2_free_on_write
;
682 kstat_named_t arcstat_l2_abort_lowmem
;
683 kstat_named_t arcstat_l2_cksum_bad
;
684 kstat_named_t arcstat_l2_io_error
;
685 kstat_named_t arcstat_l2_lsize
;
686 kstat_named_t arcstat_l2_psize
;
687 /* Not updated directly; only synced in arc_kstat_update. */
688 kstat_named_t arcstat_l2_hdr_size
;
689 kstat_named_t arcstat_memory_throttle_count
;
690 kstat_named_t arcstat_memory_direct_count
;
691 kstat_named_t arcstat_memory_indirect_count
;
692 kstat_named_t arcstat_memory_all_bytes
;
693 kstat_named_t arcstat_memory_free_bytes
;
694 kstat_named_t arcstat_memory_available_bytes
;
695 kstat_named_t arcstat_no_grow
;
696 kstat_named_t arcstat_tempreserve
;
697 kstat_named_t arcstat_loaned_bytes
;
698 kstat_named_t arcstat_prune
;
699 /* Not updated directly; only synced in arc_kstat_update. */
700 kstat_named_t arcstat_meta_used
;
701 kstat_named_t arcstat_meta_limit
;
702 kstat_named_t arcstat_dnode_limit
;
703 kstat_named_t arcstat_meta_max
;
704 kstat_named_t arcstat_meta_min
;
705 kstat_named_t arcstat_async_upgrade_sync
;
706 kstat_named_t arcstat_demand_hit_predictive_prefetch
;
707 kstat_named_t arcstat_demand_hit_prescient_prefetch
;
708 kstat_named_t arcstat_need_free
;
709 kstat_named_t arcstat_sys_free
;
710 kstat_named_t arcstat_raw_size
;
713 static arc_stats_t arc_stats
= {
714 { "hits", KSTAT_DATA_UINT64
},
715 { "misses", KSTAT_DATA_UINT64
},
716 { "demand_data_hits", KSTAT_DATA_UINT64
},
717 { "demand_data_misses", KSTAT_DATA_UINT64
},
718 { "demand_metadata_hits", KSTAT_DATA_UINT64
},
719 { "demand_metadata_misses", KSTAT_DATA_UINT64
},
720 { "prefetch_data_hits", KSTAT_DATA_UINT64
},
721 { "prefetch_data_misses", KSTAT_DATA_UINT64
},
722 { "prefetch_metadata_hits", KSTAT_DATA_UINT64
},
723 { "prefetch_metadata_misses", KSTAT_DATA_UINT64
},
724 { "mru_hits", KSTAT_DATA_UINT64
},
725 { "mru_ghost_hits", KSTAT_DATA_UINT64
},
726 { "mfu_hits", KSTAT_DATA_UINT64
},
727 { "mfu_ghost_hits", KSTAT_DATA_UINT64
},
728 { "deleted", KSTAT_DATA_UINT64
},
729 { "mutex_miss", KSTAT_DATA_UINT64
},
730 { "access_skip", KSTAT_DATA_UINT64
},
731 { "evict_skip", KSTAT_DATA_UINT64
},
732 { "evict_not_enough", KSTAT_DATA_UINT64
},
733 { "evict_l2_cached", KSTAT_DATA_UINT64
},
734 { "evict_l2_eligible", KSTAT_DATA_UINT64
},
735 { "evict_l2_ineligible", KSTAT_DATA_UINT64
},
736 { "evict_l2_skip", KSTAT_DATA_UINT64
},
737 { "hash_elements", KSTAT_DATA_UINT64
},
738 { "hash_elements_max", KSTAT_DATA_UINT64
},
739 { "hash_collisions", KSTAT_DATA_UINT64
},
740 { "hash_chains", KSTAT_DATA_UINT64
},
741 { "hash_chain_max", KSTAT_DATA_UINT64
},
742 { "p", KSTAT_DATA_UINT64
},
743 { "c", KSTAT_DATA_UINT64
},
744 { "c_min", KSTAT_DATA_UINT64
},
745 { "c_max", KSTAT_DATA_UINT64
},
746 { "size", KSTAT_DATA_UINT64
},
747 { "compressed_size", KSTAT_DATA_UINT64
},
748 { "uncompressed_size", KSTAT_DATA_UINT64
},
749 { "overhead_size", KSTAT_DATA_UINT64
},
750 { "hdr_size", KSTAT_DATA_UINT64
},
751 { "data_size", KSTAT_DATA_UINT64
},
752 { "metadata_size", KSTAT_DATA_UINT64
},
753 { "dbuf_size", KSTAT_DATA_UINT64
},
754 { "dnode_size", KSTAT_DATA_UINT64
},
755 { "bonus_size", KSTAT_DATA_UINT64
},
756 { "anon_size", KSTAT_DATA_UINT64
},
757 { "anon_evictable_data", KSTAT_DATA_UINT64
},
758 { "anon_evictable_metadata", KSTAT_DATA_UINT64
},
759 { "mru_size", KSTAT_DATA_UINT64
},
760 { "mru_evictable_data", KSTAT_DATA_UINT64
},
761 { "mru_evictable_metadata", KSTAT_DATA_UINT64
},
762 { "mru_ghost_size", KSTAT_DATA_UINT64
},
763 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64
},
764 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64
},
765 { "mfu_size", KSTAT_DATA_UINT64
},
766 { "mfu_evictable_data", KSTAT_DATA_UINT64
},
767 { "mfu_evictable_metadata", KSTAT_DATA_UINT64
},
768 { "mfu_ghost_size", KSTAT_DATA_UINT64
},
769 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64
},
770 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64
},
771 { "l2_hits", KSTAT_DATA_UINT64
},
772 { "l2_misses", KSTAT_DATA_UINT64
},
773 { "l2_feeds", KSTAT_DATA_UINT64
},
774 { "l2_rw_clash", KSTAT_DATA_UINT64
},
775 { "l2_read_bytes", KSTAT_DATA_UINT64
},
776 { "l2_write_bytes", KSTAT_DATA_UINT64
},
777 { "l2_writes_sent", KSTAT_DATA_UINT64
},
778 { "l2_writes_done", KSTAT_DATA_UINT64
},
779 { "l2_writes_error", KSTAT_DATA_UINT64
},
780 { "l2_writes_lock_retry", KSTAT_DATA_UINT64
},
781 { "l2_evict_lock_retry", KSTAT_DATA_UINT64
},
782 { "l2_evict_reading", KSTAT_DATA_UINT64
},
783 { "l2_evict_l1cached", KSTAT_DATA_UINT64
},
784 { "l2_free_on_write", KSTAT_DATA_UINT64
},
785 { "l2_abort_lowmem", KSTAT_DATA_UINT64
},
786 { "l2_cksum_bad", KSTAT_DATA_UINT64
},
787 { "l2_io_error", KSTAT_DATA_UINT64
},
788 { "l2_size", KSTAT_DATA_UINT64
},
789 { "l2_asize", KSTAT_DATA_UINT64
},
790 { "l2_hdr_size", KSTAT_DATA_UINT64
},
791 { "memory_throttle_count", KSTAT_DATA_UINT64
},
792 { "memory_direct_count", KSTAT_DATA_UINT64
},
793 { "memory_indirect_count", KSTAT_DATA_UINT64
},
794 { "memory_all_bytes", KSTAT_DATA_UINT64
},
795 { "memory_free_bytes", KSTAT_DATA_UINT64
},
796 { "memory_available_bytes", KSTAT_DATA_INT64
},
797 { "arc_no_grow", KSTAT_DATA_UINT64
},
798 { "arc_tempreserve", KSTAT_DATA_UINT64
},
799 { "arc_loaned_bytes", KSTAT_DATA_UINT64
},
800 { "arc_prune", KSTAT_DATA_UINT64
},
801 { "arc_meta_used", KSTAT_DATA_UINT64
},
802 { "arc_meta_limit", KSTAT_DATA_UINT64
},
803 { "arc_dnode_limit", KSTAT_DATA_UINT64
},
804 { "arc_meta_max", KSTAT_DATA_UINT64
},
805 { "arc_meta_min", KSTAT_DATA_UINT64
},
806 { "async_upgrade_sync", KSTAT_DATA_UINT64
},
807 { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64
},
808 { "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64
},
809 { "arc_need_free", KSTAT_DATA_UINT64
},
810 { "arc_sys_free", KSTAT_DATA_UINT64
},
811 { "arc_raw_size", KSTAT_DATA_UINT64
},
};
#define	ARCSTAT(stat)	(arc_stats.stat.value.ui64)

#define	ARCSTAT_INCR(stat, val) \
	atomic_add_64(&arc_stats.stat.value.ui64, (val))

#define	ARCSTAT_BUMP(stat)	ARCSTAT_INCR(stat, 1)
#define	ARCSTAT_BUMPDOWN(stat)	ARCSTAT_INCR(stat, -1)

#define	ARCSTAT_MAX(stat, val) {					\
	uint64_t m;							\
	while ((val) > (m = arc_stats.stat.value.ui64) &&		\
	    (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val))))\
		continue;						\
}

#define	ARCSTAT_MAXSTAT(stat) \
	ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
/*
 * We define a macro to allow ARC hits/misses to be easily broken down by
 * two separate conditions, giving a total of four different subtypes for
 * each of hits and misses (so eight statistics total).
 */
#define	ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
	if (cond1) {							\
		if (cond2) {						\
			ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
		} else {						\
			ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
		}							\
	} else {							\
		if (cond2) {						\
			ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
		} else {						\
			ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
		}							\
	}
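
/*
 * A hedged usage illustration (the real call sites appear later in this
 * file): recording a hit broken down by demand vs. prefetch and data vs.
 * metadata looks like
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
 *
 * which bumps exactly one of arcstat_demand_data_hits,
 * arcstat_demand_metadata_hits, arcstat_prefetch_data_hits or
 * arcstat_prefetch_metadata_hits depending on the two conditions.
 */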
static arc_state_t	*arc_anon;
static arc_state_t	*arc_mru;
static arc_state_t	*arc_mru_ghost;
static arc_state_t	*arc_mfu;
static arc_state_t	*arc_mfu_ghost;
static arc_state_t	*arc_l2c_only;
/*
 * There are several ARC variables that are critical to export as kstats --
 * but we don't want to have to grovel around in the kstat whenever we wish to
 * manipulate them. For these variables, we therefore define them to be in
 * terms of the statistic variable. This assures that we are not introducing
 * the possibility of inconsistency by having shadow copies of the variables,
 * while still allowing the code to be readable.
 */
#define	arc_p		ARCSTAT(arcstat_p)	/* target size of MRU */
#define	arc_c		ARCSTAT(arcstat_c)	/* target size of cache */
#define	arc_c_min	ARCSTAT(arcstat_c_min)	/* min target cache size */
#define	arc_c_max	ARCSTAT(arcstat_c_max)	/* max target cache size */
#define	arc_no_grow	ARCSTAT(arcstat_no_grow) /* do not grow cache size */
#define	arc_tempreserve	ARCSTAT(arcstat_tempreserve)
#define	arc_loaned_bytes	ARCSTAT(arcstat_loaned_bytes)
#define	arc_meta_limit	ARCSTAT(arcstat_meta_limit) /* max size for metadata */
#define	arc_dnode_limit	ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
#define	arc_meta_min	ARCSTAT(arcstat_meta_min) /* min size for metadata */
#define	arc_meta_max	ARCSTAT(arcstat_meta_max) /* max size of metadata */
#define	arc_need_free	ARCSTAT(arcstat_need_free) /* bytes to be freed */
#define	arc_sys_free	ARCSTAT(arcstat_sys_free) /* target system free bytes */

/* size of all b_rabd's in entire arc */
#define	arc_raw_size	ARCSTAT(arcstat_raw_size)
/* compressed size of entire arc */
#define	arc_compressed_size	ARCSTAT(arcstat_compressed_size)
/* uncompressed size of entire arc */
#define	arc_uncompressed_size	ARCSTAT(arcstat_uncompressed_size)
/* number of bytes in the arc from arc_buf_t's */
#define	arc_overhead_size	ARCSTAT(arcstat_overhead_size)

/*
 * There are also some ARC variables that we want to export, but that are
 * updated so often that having the canonical representation be the statistic
 * variable causes a performance bottleneck. We want to use aggsum_t's for these
 * instead, but still be able to export the kstat in the same way as before.
 * The solution is to always use the aggsum version, except in the kstat update
 * callback.
 */
aggsum_t arc_meta_used;
aggsum_t astat_data_size;
aggsum_t astat_metadata_size;
aggsum_t astat_dbuf_size;
aggsum_t astat_dnode_size;
aggsum_t astat_bonus_size;
aggsum_t astat_hdr_size;
aggsum_t astat_l2_hdr_size;

static list_t arc_prune_list;
static kmutex_t arc_prune_mtx;
static taskq_t *arc_prune_taskq;
913 #define GHOST_STATE(state) \
914 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
915 (state) == arc_l2c_only)
917 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
918 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
919 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR)
920 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH)
921 #define HDR_PRESCIENT_PREFETCH(hdr) \
922 ((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
923 #define HDR_COMPRESSION_ENABLED(hdr) \
924 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)
926 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE)
927 #define HDR_L2_READING(hdr) \
928 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
929 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
930 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING)
931 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
932 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
933 #define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED)
934 #define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH)
935 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA)
937 #define HDR_ISTYPE_METADATA(hdr) \
938 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
939 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr))
941 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
942 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
943 #define HDR_HAS_RABD(hdr) \
944 (HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \
945 (hdr)->b_crypt_hdr.b_rabd != NULL)
946 #define HDR_ENCRYPTED(hdr) \
947 (HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
948 #define HDR_AUTHENTICATED(hdr) \
949 (HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
951 /* For storing compression mode in b_flags */
952 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1)
954 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \
955 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
956 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \
957 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));
959 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL)
960 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED)
961 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
962 #define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)
968 #define HDR_FULL_CRYPT_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
969 #define HDR_FULL_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr))
970 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
/*
 * Hash table routines
 */

#define	HT_LOCK_ALIGN	64
#define	HT_LOCK_PAD	(P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))

struct ht_lock {
	kmutex_t	ht_lock;
#ifdef _KERNEL
	unsigned char	pad[HT_LOCK_PAD];
#endif
};

#define	BUF_LOCKS 8192
typedef struct buf_hash_table {
	uint64_t ht_mask;
	arc_buf_hdr_t **ht_table;
	struct ht_lock ht_locks[BUF_LOCKS];
} buf_hash_table_t;

static buf_hash_table_t buf_hash_table;
#define	BUF_HASH_INDEX(spa, dva, birth) \
	(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
#define	BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define	BUF_HASH_LOCK(idx)	(&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
#define	HDR_LOCK(hdr) \
	(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
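
/*
 * Hedged usage sketch for the macros above, following the locking rules in
 * the comment at the top of this file: code that needs to examine or update
 * hash-protected fields of a header it already holds a pointer to takes the
 * per-bucket mutex through HDR_LOCK():
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *
 *	mutex_enter(hash_lock);
 *	... read or modify fields of *hdr protected by the hash lock ...
 *	mutex_exit(hash_lock);
 */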
uint64_t zfs_crc64_table[256];
#define	L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define	L2ARC_HEADROOM		2			/* num of writes */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define	L2ARC_HEADROOM_BOOST	200
#define	L2ARC_FEED_SECS		1		/* caching interval secs */
#define	L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of these states has two types: data and metadata.
 */
#define	L2ARC_FEED_TYPES	4

#define	l2arc_writes_sent	ARCSTAT(arcstat_l2_writes_sent)
#define	l2arc_writes_done	ARCSTAT(arcstat_l2_writes_done)

/* L2ARC Performance Tunables */
unsigned long l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
unsigned long l2arc_headroom = L2ARC_HEADROOM;		/* # of dev writes */
unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
unsigned long l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */
typedef struct l2arc_read_callback {
	arc_buf_hdr_t		*l2rcb_hdr;		/* read header */
	blkptr_t		l2rcb_bp;		/* original blkptr */
	zbookmark_phys_t	l2rcb_zb;		/* original bookmark */
	int			l2rcb_flags;		/* original flags */
	abd_t			*l2rcb_abd;		/* temporary buffer */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
	/* protected by l2arc_free_on_write_mtx */
	abd_t		*l2df_abd;
	size_t		l2df_size;
	arc_buf_contents_t l2df_type;
	list_node_t	l2df_list_node;
} l2arc_data_free_t;
typedef enum arc_fill_flags {
	ARC_FILL_LOCKED		= 1 << 0, /* hdr lock is held */
	ARC_FILL_COMPRESSED	= 1 << 1, /* fill with compressed data */
	ARC_FILL_ENCRYPTED	= 1 << 2, /* fill with encrypted data */
	ARC_FILL_NOAUTH		= 1 << 3, /* don't attempt to authenticate */
	ARC_FILL_IN_PLACE	= 1 << 4  /* fill in place (special case) */
} arc_fill_flags_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;
static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *);
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *);
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *);
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *);
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *);
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag);
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, boolean_t);
static void arc_access(arc_buf_hdr_t *, kmutex_t *);
static boolean_t arc_is_overflowing(void);
static void arc_buf_watch(arc_buf_t *);
static void arc_tuning_update(void);
static void arc_prune_async(int64_t);
static uint64_t arc_all_memory(void);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);
/*
 * We use Cityhash for this. It's fast, and has good hash properties without
 * requiring any large static buffers.
 */
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
	return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
}
1113 #define HDR_EMPTY(hdr) \
1114 ((hdr)->b_dva.dva_word[0] == 0 && \
1115 (hdr)->b_dva.dva_word[1] == 0)
1117 #define HDR_EQUAL(spa, dva, birth, hdr) \
1118 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
1119 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
1120 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)
static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
	hdr->b_dva.dva_word[0] = 0;
	hdr->b_dva.dva_word[1] = 0;
	hdr->b_birth = 0;
}
1130 static arc_buf_hdr_t
*
1131 buf_hash_find(uint64_t spa
, const blkptr_t
*bp
, kmutex_t
**lockp
)
1133 const dva_t
*dva
= BP_IDENTITY(bp
);
1134 uint64_t birth
= BP_PHYSICAL_BIRTH(bp
);
1135 uint64_t idx
= BUF_HASH_INDEX(spa
, dva
, birth
);
1136 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
1139 mutex_enter(hash_lock
);
1140 for (hdr
= buf_hash_table
.ht_table
[idx
]; hdr
!= NULL
;
1141 hdr
= hdr
->b_hash_next
) {
1142 if (HDR_EQUAL(spa
, dva
, birth
, hdr
)) {
1147 mutex_exit(hash_lock
);
1153 * Insert an entry into the hash table. If there is already an element
1154 * equal to elem in the hash table, then the already existing element
1155 * will be returned and the new element will not be inserted.
1156 * Otherwise returns NULL.
1157 * If lockp == NULL, the caller is assumed to already hold the hash lock.
1159 static arc_buf_hdr_t
*
1160 buf_hash_insert(arc_buf_hdr_t
*hdr
, kmutex_t
**lockp
)
1162 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
1163 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
1164 arc_buf_hdr_t
*fhdr
;
1167 ASSERT(!DVA_IS_EMPTY(&hdr
->b_dva
));
1168 ASSERT(hdr
->b_birth
!= 0);
1169 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
1171 if (lockp
!= NULL
) {
1173 mutex_enter(hash_lock
);
1175 ASSERT(MUTEX_HELD(hash_lock
));
1178 for (fhdr
= buf_hash_table
.ht_table
[idx
], i
= 0; fhdr
!= NULL
;
1179 fhdr
= fhdr
->b_hash_next
, i
++) {
1180 if (HDR_EQUAL(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
, fhdr
))
1184 hdr
->b_hash_next
= buf_hash_table
.ht_table
[idx
];
1185 buf_hash_table
.ht_table
[idx
] = hdr
;
1186 arc_hdr_set_flags(hdr
, ARC_FLAG_IN_HASH_TABLE
);
1188 /* collect some hash table performance data */
1190 ARCSTAT_BUMP(arcstat_hash_collisions
);
1192 ARCSTAT_BUMP(arcstat_hash_chains
);
1194 ARCSTAT_MAX(arcstat_hash_chain_max
, i
);
1197 ARCSTAT_BUMP(arcstat_hash_elements
);
1198 ARCSTAT_MAXSTAT(arcstat_hash_elements
);
1204 buf_hash_remove(arc_buf_hdr_t
*hdr
)
1206 arc_buf_hdr_t
*fhdr
, **hdrp
;
1207 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
1209 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx
)));
1210 ASSERT(HDR_IN_HASH_TABLE(hdr
));
1212 hdrp
= &buf_hash_table
.ht_table
[idx
];
1213 while ((fhdr
= *hdrp
) != hdr
) {
1214 ASSERT3P(fhdr
, !=, NULL
);
1215 hdrp
= &fhdr
->b_hash_next
;
1217 *hdrp
= hdr
->b_hash_next
;
1218 hdr
->b_hash_next
= NULL
;
1219 arc_hdr_clear_flags(hdr
, ARC_FLAG_IN_HASH_TABLE
);
1221 /* collect some hash table performance data */
1222 ARCSTAT_BUMPDOWN(arcstat_hash_elements
);
1224 if (buf_hash_table
.ht_table
[idx
] &&
1225 buf_hash_table
.ht_table
[idx
]->b_hash_next
== NULL
)
1226 ARCSTAT_BUMPDOWN(arcstat_hash_chains
);
/*
 * Global data structures and functions for the buf kmem cache.
 */
static kmem_cache_t *hdr_full_cache;
static kmem_cache_t *hdr_full_crypt_cache;
static kmem_cache_t *hdr_l2only_cache;
static kmem_cache_t *buf_cache;
1243 #if defined(_KERNEL)
	/*
	 * Large allocations which do not require contiguous pages
	 * should be using vmem_free() in the linux kernel
	 */
1248 vmem_free(buf_hash_table
.ht_table
,
1249 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
1251 kmem_free(buf_hash_table
.ht_table
,
1252 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
1254 for (i
= 0; i
< BUF_LOCKS
; i
++)
1255 mutex_destroy(&buf_hash_table
.ht_locks
[i
].ht_lock
);
1256 kmem_cache_destroy(hdr_full_cache
);
1257 kmem_cache_destroy(hdr_full_crypt_cache
);
1258 kmem_cache_destroy(hdr_l2only_cache
);
1259 kmem_cache_destroy(buf_cache
);
1263 * Constructor callback - called when the cache is empty
1264 * and a new buf is requested.
1268 hdr_full_cons(void *vbuf
, void *unused
, int kmflag
)
1270 arc_buf_hdr_t
*hdr
= vbuf
;
1272 bzero(hdr
, HDR_FULL_SIZE
);
1273 hdr
->b_l1hdr
.b_byteswap
= DMU_BSWAP_NUMFUNCS
;
1274 cv_init(&hdr
->b_l1hdr
.b_cv
, NULL
, CV_DEFAULT
, NULL
);
1275 refcount_create(&hdr
->b_l1hdr
.b_refcnt
);
1276 mutex_init(&hdr
->b_l1hdr
.b_freeze_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1277 list_link_init(&hdr
->b_l1hdr
.b_arc_node
);
1278 list_link_init(&hdr
->b_l2hdr
.b_l2node
);
1279 multilist_link_init(&hdr
->b_l1hdr
.b_arc_node
);
1280 arc_space_consume(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
1287 hdr_full_crypt_cons(void *vbuf
, void *unused
, int kmflag
)
1289 arc_buf_hdr_t
*hdr
= vbuf
;
1291 hdr_full_cons(vbuf
, unused
, kmflag
);
1292 bzero(&hdr
->b_crypt_hdr
, sizeof (hdr
->b_crypt_hdr
));
1293 arc_space_consume(sizeof (hdr
->b_crypt_hdr
), ARC_SPACE_HDRS
);
1300 hdr_l2only_cons(void *vbuf
, void *unused
, int kmflag
)
1302 arc_buf_hdr_t
*hdr
= vbuf
;
1304 bzero(hdr
, HDR_L2ONLY_SIZE
);
1305 arc_space_consume(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
1312 buf_cons(void *vbuf
, void *unused
, int kmflag
)
1314 arc_buf_t
*buf
= vbuf
;
1316 bzero(buf
, sizeof (arc_buf_t
));
1317 mutex_init(&buf
->b_evict_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1318 arc_space_consume(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
1324 * Destructor callback - called when a cached buf is
1325 * no longer required.
1329 hdr_full_dest(void *vbuf
, void *unused
)
1331 arc_buf_hdr_t
*hdr
= vbuf
;
1333 ASSERT(HDR_EMPTY(hdr
));
1334 cv_destroy(&hdr
->b_l1hdr
.b_cv
);
1335 refcount_destroy(&hdr
->b_l1hdr
.b_refcnt
);
1336 mutex_destroy(&hdr
->b_l1hdr
.b_freeze_lock
);
1337 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
1338 arc_space_return(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
1343 hdr_full_crypt_dest(void *vbuf
, void *unused
)
1345 arc_buf_hdr_t
*hdr
= vbuf
;
1347 hdr_full_dest(vbuf
, unused
);
1348 arc_space_return(sizeof (hdr
->b_crypt_hdr
), ARC_SPACE_HDRS
);
1353 hdr_l2only_dest(void *vbuf
, void *unused
)
1355 ASSERTV(arc_buf_hdr_t
*hdr
= vbuf
);
1357 ASSERT(HDR_EMPTY(hdr
));
1358 arc_space_return(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
1363 buf_dest(void *vbuf
, void *unused
)
1365 arc_buf_t
*buf
= vbuf
;
1367 mutex_destroy(&buf
->b_evict_lock
);
1368 arc_space_return(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
1372 * Reclaim callback -- invoked when memory is low.
1376 hdr_recl(void *unused
)
1378 dprintf("hdr_recl called\n");
1380 * umem calls the reclaim func when we destroy the buf cache,
1381 * which is after we do arc_fini().
1384 cv_signal(&arc_reclaim_thread_cv
);
1390 uint64_t *ct
= NULL
;
1391 uint64_t hsize
= 1ULL << 12;
1395 * The hash table is big enough to fill all of physical memory
1396 * with an average block size of zfs_arc_average_blocksize (default 8K).
1397 * By default, the table will take up
1398 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
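 *
 * Illustrative sizing with hypothetical numbers (not from the original
 * comment): on a 16 GiB system the loop below doubles hsize until
 * hsize * 8192 >= 16 GiB, i.e. hsize = 2^21 buckets, so the bucket array
 * costs 2^21 * sizeof (void *) = 16 MiB -- the "1MB per GB" noted above.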
1400 while (hsize
* zfs_arc_average_blocksize
< arc_all_memory())
1403 buf_hash_table
.ht_mask
= hsize
- 1;
1404 #if defined(_KERNEL)
1406 * Large allocations which do not require contiguous pages
1407 * should be using vmem_alloc() in the linux kernel
1409 buf_hash_table
.ht_table
=
1410 vmem_zalloc(hsize
* sizeof (void*), KM_SLEEP
);
1412 buf_hash_table
.ht_table
=
1413 kmem_zalloc(hsize
* sizeof (void*), KM_NOSLEEP
);
1415 if (buf_hash_table
.ht_table
== NULL
) {
1416 ASSERT(hsize
> (1ULL << 8));
1421 hdr_full_cache
= kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE
,
1422 0, hdr_full_cons
, hdr_full_dest
, hdr_recl
, NULL
, NULL
, 0);
1423 hdr_full_crypt_cache
= kmem_cache_create("arc_buf_hdr_t_full_crypt",
1424 HDR_FULL_CRYPT_SIZE
, 0, hdr_full_crypt_cons
, hdr_full_crypt_dest
,
1425 hdr_recl
, NULL
, NULL
, 0);
1426 hdr_l2only_cache
= kmem_cache_create("arc_buf_hdr_t_l2only",
1427 HDR_L2ONLY_SIZE
, 0, hdr_l2only_cons
, hdr_l2only_dest
, hdr_recl
,
1429 buf_cache
= kmem_cache_create("arc_buf_t", sizeof (arc_buf_t
),
1430 0, buf_cons
, buf_dest
, NULL
, NULL
, NULL
, 0);
1432 for (i
= 0; i
< 256; i
++)
1433 for (ct
= zfs_crc64_table
+ i
, *ct
= i
, j
= 8; j
> 0; j
--)
1434 *ct
= (*ct
>> 1) ^ (-(*ct
& 1) & ZFS_CRC64_POLY
);
1436 for (i
= 0; i
< BUF_LOCKS
; i
++) {
1437 mutex_init(&buf_hash_table
.ht_locks
[i
].ht_lock
,
1438 NULL
, MUTEX_DEFAULT
, NULL
);
1442 #define ARC_MINTIME (hz>>4) /* 62 ms */
1445 * This is the size that the buf occupies in memory. If the buf is compressed,
1446 * it will correspond to the compressed size. You should use this method of
1447 * getting the buf size unless you explicitly need the logical size.
1450 arc_buf_size(arc_buf_t
*buf
)
1452 return (ARC_BUF_COMPRESSED(buf
) ?
1453 HDR_GET_PSIZE(buf
->b_hdr
) : HDR_GET_LSIZE(buf
->b_hdr
));
1457 arc_buf_lsize(arc_buf_t
*buf
)
1459 return (HDR_GET_LSIZE(buf
->b_hdr
));
1463 * This function will return B_TRUE if the buffer is encrypted in memory.
1464 * This buffer can be decrypted by calling arc_untransform().
1467 arc_is_encrypted(arc_buf_t
*buf
)
1469 return (ARC_BUF_ENCRYPTED(buf
) != 0);
1473 * Returns B_TRUE if the buffer represents data that has not had its MAC
1477 arc_is_unauthenticated(arc_buf_t
*buf
)
1479 return (HDR_NOAUTH(buf
->b_hdr
) != 0);
1483 arc_get_raw_params(arc_buf_t
*buf
, boolean_t
*byteorder
, uint8_t *salt
,
1484 uint8_t *iv
, uint8_t *mac
)
1486 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1488 ASSERT(HDR_PROTECTED(hdr
));
1490 bcopy(hdr
->b_crypt_hdr
.b_salt
, salt
, ZIO_DATA_SALT_LEN
);
1491 bcopy(hdr
->b_crypt_hdr
.b_iv
, iv
, ZIO_DATA_IV_LEN
);
1492 bcopy(hdr
->b_crypt_hdr
.b_mac
, mac
, ZIO_DATA_MAC_LEN
);
1493 *byteorder
= (hdr
->b_l1hdr
.b_byteswap
== DMU_BSWAP_NUMFUNCS
) ?
1494 ZFS_HOST_BYTEORDER
: !ZFS_HOST_BYTEORDER
;
1498 * Indicates how this buffer is compressed in memory. If it is not compressed
1499 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with
1500 * arc_untransform() as long as it is also unencrypted.
1503 arc_get_compression(arc_buf_t
*buf
)
1505 return (ARC_BUF_COMPRESSED(buf
) ?
1506 HDR_GET_COMPRESS(buf
->b_hdr
) : ZIO_COMPRESS_OFF
);
1510 * Return the compression algorithm used to store this data in the ARC. If ARC
1511 * compression is enabled or this is an encrypted block, this will be the same
1512 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF.
1514 static inline enum zio_compress
1515 arc_hdr_get_compress(arc_buf_hdr_t
*hdr
)
1517 return (HDR_COMPRESSION_ENABLED(hdr
) ?
1518 HDR_GET_COMPRESS(hdr
) : ZIO_COMPRESS_OFF
);
1521 static inline boolean_t
1522 arc_buf_is_shared(arc_buf_t
*buf
)
1524 boolean_t shared
= (buf
->b_data
!= NULL
&&
1525 buf
->b_hdr
->b_l1hdr
.b_pabd
!= NULL
&&
1526 abd_is_linear(buf
->b_hdr
->b_l1hdr
.b_pabd
) &&
1527 buf
->b_data
== abd_to_buf(buf
->b_hdr
->b_l1hdr
.b_pabd
));
1528 IMPLY(shared
, HDR_SHARED_DATA(buf
->b_hdr
));
1529 IMPLY(shared
, ARC_BUF_SHARED(buf
));
1530 IMPLY(shared
, ARC_BUF_COMPRESSED(buf
) || ARC_BUF_LAST(buf
));
1533 * It would be nice to assert arc_can_share() too, but the "hdr isn't
1534 * already being shared" requirement prevents us from doing that.
1541 * Free the checksum associated with this header. If there is no checksum, this
1545 arc_cksum_free(arc_buf_hdr_t
*hdr
)
1547 ASSERT(HDR_HAS_L1HDR(hdr
));
1549 mutex_enter(&hdr
->b_l1hdr
.b_freeze_lock
);
1550 if (hdr
->b_l1hdr
.b_freeze_cksum
!= NULL
) {
1551 kmem_free(hdr
->b_l1hdr
.b_freeze_cksum
, sizeof (zio_cksum_t
));
1552 hdr
->b_l1hdr
.b_freeze_cksum
= NULL
;
1554 mutex_exit(&hdr
->b_l1hdr
.b_freeze_lock
);
1558 * Return true iff at least one of the bufs on hdr is not compressed.
1559 * Encrypted buffers count as compressed.
1562 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t
*hdr
)
1564 ASSERT(hdr
->b_l1hdr
.b_state
== arc_anon
||
1565 MUTEX_HELD(HDR_LOCK(hdr
)) || HDR_EMPTY(hdr
));
1567 for (arc_buf_t
*b
= hdr
->b_l1hdr
.b_buf
; b
!= NULL
; b
= b
->b_next
) {
1568 if (!ARC_BUF_COMPRESSED(b
)) {
/*
 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
 * matches the checksum that is stored in the hdr. If there is no checksum,
 * or if the buf is compressed, this is a no-op.
 */
static void
arc_cksum_verify(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	zio_cksum_t zc;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);

	if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
	if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
		panic("buffer modified while frozen!");
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
}
/*
 * This function makes the assumption that data stored in the L2ARC
 * will be transformed exactly as it is in the main pool. Because of
 * this we can verify the checksum against the reading process's bp.
 */
static boolean_t
arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
{
	ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
	VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));

	/*
	 * Block pointers always store the checksum for the logical data.
	 * If the block pointer has the gang bit set, then the checksum
	 * it represents is for the reconstituted data and not for an
	 * individual gang member. The zio pipeline, however, must be able to
	 * determine the checksum of each of the gang constituents so it
	 * treats the checksum comparison differently than what we need
	 * for l2arc blocks. This prevents us from using the
	 * zio_checksum_error() interface directly. Instead we must call the
	 * zio_checksum_error_impl() so that we can ensure the checksum is
	 * generated using the correct checksum algorithm and accounts for the
	 * logical I/O size and not just a gang fragment.
	 */
	return (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
	    BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
	    zio->io_offset, NULL) == 0);
}
/*
 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
 * checksum and attaches it to the buf's hdr so that we can ensure that the buf
 * isn't modified later on. If buf is compressed or there is already a checksum
 * on the hdr, this is a no-op (we only checksum uncompressed bufs).
 */
static void
arc_cksum_compute(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
	if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	ASSERT(!ARC_BUF_ENCRYPTED(buf));
	ASSERT(!ARC_BUF_COMPRESSED(buf));
	hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t),
	    KM_SLEEP);
	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL,
	    hdr->b_l1hdr.b_freeze_cksum);
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
}
void
arc_buf_sigsegv(int sig, siginfo_t *si, void *unused)
{
	panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr);
}

void
arc_buf_unwatch(arc_buf_t *buf)
{
	ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
	    PROT_READ | PROT_WRITE));
}

void
arc_buf_watch(arc_buf_t *buf)
{
	ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
	    PROT_READ));
}
static arc_buf_contents_t
arc_buf_type(arc_buf_hdr_t *hdr)
{
	arc_buf_contents_t type;
	if (HDR_ISTYPE_METADATA(hdr)) {
		type = ARC_BUFC_METADATA;
	} else {
		type = ARC_BUFC_DATA;
	}
	VERIFY3U(hdr->b_type, ==, type);
	return (type);
}

boolean_t
arc_is_metadata(arc_buf_t *buf)
{
	return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0);
}
static uint32_t
arc_bufc_to_flags(arc_buf_contents_t type)
{
	switch (type) {
	case ARC_BUFC_DATA:
		/* metadata field is 0 if buffer contains normal data */
		return (0);
	case ARC_BUFC_METADATA:
		return (ARC_FLAG_BUFC_METADATA);
	default:
		break;
	}
	panic("undefined ARC buffer type!");
	return ((uint32_t)-1);
}
void
arc_buf_thaw(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));

	arc_cksum_verify(buf);

	/*
	 * Compressed buffers do not manipulate the b_freeze_cksum.
	 */
	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));
	arc_cksum_free(hdr);
	arc_buf_unwatch(buf);
}
void
arc_buf_freeze(arc_buf_t *buf)
{
	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(buf->b_hdr));
	arc_cksum_compute(buf);
}
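/*
 * Illustrative sketch (not part of the original source): with
 * ZFS_DEBUG_MODIFY set, a consumer that needs to modify a buffer it owns
 * would bracket the modification with thaw/freeze so the debug checksum
 * stays consistent:
 *
 *	arc_buf_thaw(buf);
 *	... modify buf->b_data ...
 *	arc_buf_freeze(buf);
 */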
/*
 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead,
 * the following functions should be used to ensure that the flags are
 * updated in a thread-safe way. When manipulating the flags either
 * the hash_lock must be held or the hdr must be undiscoverable. This
 * ensures that we're not racing with any other threads when updating
 * the flags.
 */
static void
arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	hdr->b_flags |= flags;
}

static void
arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	hdr->b_flags &= ~flags;
}
/*
 * Setting the compression bits in the arc_buf_hdr_t's b_flags is
 * done in a special way since we have to clear and set bits
 * at the same time. Consumers that wish to set the compression bits
 * must use this function to ensure that the flags are updated in a
 * thread-safe manner.
 */
static void
arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Holes and embedded blocks will always have a psize = 0 so
	 * we ignore the compression of the blkptr and mark them as
	 * uncompressed.
	 */
	if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) {
		arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(!HDR_COMPRESSION_ENABLED(hdr));
	} else {
		arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(HDR_COMPRESSION_ENABLED(hdr));
	}

	HDR_SET_COMPRESS(hdr, cmp);
	ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp);
}
/*
 * Looks for another buf on the same hdr which has the data decompressed,
 * copies from it, and returns true. If no such buf exists, returns false.
 */
static boolean_t
arc_buf_try_copy_decompressed_data(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	boolean_t copied = B_FALSE;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(!ARC_BUF_COMPRESSED(buf));

	for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL;
	    from = from->b_next) {
		/* can't use our own data buffer */
		if (from == buf)
			continue;

		if (!ARC_BUF_COMPRESSED(from)) {
			bcopy(from->b_data, buf->b_data, arc_buf_size(buf));
			copied = B_TRUE;
			break;
		}
	}

	/*
	 * There were no decompressed bufs, so there should not be a
	 * checksum on the hdr either.
	 */
	EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL);

	return (copied);
}
/*
 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t.
 */
static uint64_t
arc_hdr_size(arc_buf_hdr_t *hdr)
{
	uint64_t size;

	if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
	    HDR_GET_PSIZE(hdr) > 0) {
		size = HDR_GET_PSIZE(hdr);
	} else {
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0);
		size = HDR_GET_LSIZE(hdr);
	}
	return (size);
}
static int
arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj)
{
	int ret;
	uint64_t csize;
	uint64_t lsize = HDR_GET_LSIZE(hdr);
	uint64_t psize = HDR_GET_PSIZE(hdr);
	void *tmpbuf = NULL;
	abd_t *abd = hdr->b_l1hdr.b_pabd;

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT(HDR_AUTHENTICATED(hdr));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	/*
	 * The MAC is calculated on the compressed data that is stored on disk.
	 * However, if compressed arc is disabled we will only have the
	 * decompressed data available to us now. Compress it into a temporary
	 * abd so we can verify the MAC. The performance overhead of this will
	 * be relatively low, since most objects in an encrypted objset will
	 * be encrypted (instead of authenticated) anyway.
	 */
	if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
	    !HDR_COMPRESSION_ENABLED(hdr)) {
		tmpbuf = zio_buf_alloc(lsize);
		abd = abd_get_from_buf(tmpbuf, lsize);
		abd_take_ownership_of_buf(abd, B_TRUE);

		csize = zio_compress_data(HDR_GET_COMPRESS(hdr),
		    hdr->b_l1hdr.b_pabd, tmpbuf, lsize);
		ASSERT3U(csize, <=, psize);
		abd_zero_off(abd, csize, psize - csize);
	}

	/*
	 * Authentication is best effort. We authenticate whenever the key is
	 * available. If we succeed we clear ARC_FLAG_NOAUTH.
	 */
	if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) {
		ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
		ASSERT3U(lsize, ==, psize);
		ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd,
		    psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
	} else {
		ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize,
		    hdr->b_crypt_hdr.b_mac);
	}

	if (ret == 0)
		arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH);
	else if (ret != ENOENT)
		goto error;

	if (tmpbuf != NULL)
		abd_free(abd);

	return (0);

error:
	if (tmpbuf != NULL)
		abd_free(abd);

	return (ret);
}
/*
 * This function will take a header that only has raw encrypted data in
 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in
 * b_l1hdr.b_pabd. If designated in the header flags, this function will
 * also decompress the data.
 */
static int
arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb)
{
	int ret;
	abd_t *cabd = NULL;
	void *tmp = NULL;
	boolean_t no_crypt = B_FALSE;
	boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT(HDR_ENCRYPTED(hdr));

	arc_hdr_alloc_abd(hdr, B_FALSE);

	ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot,
	    B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv,
	    hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd,
	    hdr->b_crypt_hdr.b_rabd, &no_crypt);
	if (ret != 0)
		goto error;

	if (no_crypt) {
		abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd,
		    HDR_GET_PSIZE(hdr));
	}

	/*
	 * If this header has disabled arc compression but the b_pabd is
	 * compressed after decrypting it, we need to decompress the newly
	 * decrypted data.
	 */
	if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
	    !HDR_COMPRESSION_ENABLED(hdr)) {
		/*
		 * We want to make sure that we are correctly honoring the
		 * zfs_abd_scatter_enabled setting, so we allocate an abd here
		 * and then loan a buffer from it, rather than allocating a
		 * linear buffer and wrapping it in an abd later.
		 */
		cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr);
		tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr));

		ret = zio_decompress_data(HDR_GET_COMPRESS(hdr),
		    hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr),
		    HDR_GET_LSIZE(hdr));
		if (ret != 0) {
			abd_return_buf(cabd, tmp, arc_hdr_size(hdr));
			goto error;
		}

		abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr));
		arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
		    arc_hdr_size(hdr), hdr);
		hdr->b_l1hdr.b_pabd = cabd;
	}

	return (0);

error:
	arc_hdr_free_abd(hdr, B_FALSE);
	if (cabd != NULL)
		arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr);

	return (ret);
}
/*
 * This function is called during arc_buf_fill() to prepare the header's
 * abd plaintext pointer for use. This involves authenticating protected
 * data and decrypting encrypted data into the plaintext abd.
 */
static int
arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa,
    const zbookmark_phys_t *zb, boolean_t noauth)
{
	int ret;

	ASSERT(HDR_PROTECTED(hdr));

	if (hash_lock != NULL)
		mutex_enter(hash_lock);

	if (HDR_NOAUTH(hdr) && !noauth) {
		/*
		 * The caller requested authenticated data but our data has
		 * not been authenticated yet. Verify the MAC now if we can.
		 */
		ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset);
		if (ret != 0)
			goto error;
	} else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) {
		/*
		 * If we only have the encrypted version of the data, but the
		 * unencrypted version was requested we take this opportunity
		 * to store the decrypted version in the header for future use.
		 */
		ret = arc_hdr_decrypt(hdr, spa, zb);
		if (ret != 0)
			goto error;
	}

	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	if (hash_lock != NULL)
		mutex_exit(hash_lock);

	return (0);

error:
	if (hash_lock != NULL)
		mutex_exit(hash_lock);

	return (ret);
}
/*
 * This function is used by the dbuf code to decrypt bonus buffers in place.
 * The dbuf code itself doesn't have any locking for decrypting a shared dnode
 * block, so we use the hash lock here to protect against concurrent calls to
 * arc_buf_fill().
 */
static void
arc_buf_untransform_in_place(arc_buf_t *buf, kmutex_t *hash_lock)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(HDR_ENCRYPTED(hdr));
	ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data,
	    arc_buf_size(buf));
	buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
	buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	hdr->b_crypt_hdr.b_ebufcnt -= 1;
}
/*
 * Given a buf that has a data buffer attached to it, this function will
 * efficiently fill the buf with data of the specified compression setting from
 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr
 * are already sharing a data buf, no copy is performed.
 *
 * If the buf is marked as compressed but uncompressed data was requested, this
 * will allocate a new data buffer for the buf, remove that flag, and fill the
 * buf with uncompressed data. You can't request a compressed buf on a hdr with
 * uncompressed data, and (since we haven't added support for it yet) if you
 * want compressed data your buf must already be marked as compressed and have
 * the correct-sized data buffer.
 */
static int
arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
    arc_fill_flags_t flags)
{
	int error = 0;
	arc_buf_hdr_t *hdr = buf->b_hdr;
	boolean_t hdr_compressed =
	    (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
	boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0;
	boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0;
	dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap;
	kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? NULL : HDR_LOCK(hdr);

	ASSERT3P(buf->b_data, !=, NULL);
	IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf));
	IMPLY(compressed, ARC_BUF_COMPRESSED(buf));
	IMPLY(encrypted, HDR_ENCRYPTED(hdr));
	IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf));
	IMPLY(encrypted, ARC_BUF_COMPRESSED(buf));
	IMPLY(encrypted, !ARC_BUF_SHARED(buf));

	/*
	 * If the caller wanted encrypted data we just need to copy it from
	 * b_rabd and potentially byteswap it. We won't be able to do any
	 * further transforms on it.
	 */
	if (encrypted) {
		ASSERT(HDR_HAS_RABD(hdr));
		abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd,
		    HDR_GET_PSIZE(hdr));
		goto byteswap;
	}

	/*
	 * Adjust encrypted and authenticated headers to accommodate
	 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are
	 * allowed to fail decryption due to keys not being loaded
	 * without being marked as an IO error.
	 */
	if (HDR_PROTECTED(hdr)) {
		error = arc_fill_hdr_crypt(hdr, hash_lock, spa,
		    zb, !!(flags & ARC_FILL_NOAUTH));
		if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) {
			return (error);
		} else if (error != 0) {
			if (hash_lock != NULL)
				mutex_enter(hash_lock);
			arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
			if (hash_lock != NULL)
				mutex_exit(hash_lock);
			return (error);
		}
	}

	/*
	 * There is a special case here for dnode blocks which are
	 * decrypting their bonus buffers. These blocks may request to
	 * be decrypted in-place. This is necessary because there may
	 * be many dnodes pointing into this buffer and there is
	 * currently no method to synchronize replacing the backing
	 * b_data buffer and updating all of the pointers. Here we use
	 * the hash lock to ensure there are no races. If the need
	 * arises for other types to be decrypted in-place, they must
	 * add handling here as well.
	 */
	if ((flags & ARC_FILL_IN_PLACE) != 0) {
		ASSERT(!hdr_compressed);
		ASSERT(!compressed);
		ASSERT(!encrypted);

		if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) {
			ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);

			if (hash_lock != NULL)
				mutex_enter(hash_lock);
			arc_buf_untransform_in_place(buf, hash_lock);
			if (hash_lock != NULL)
				mutex_exit(hash_lock);

			/* Compute the hdr's checksum if necessary */
			arc_cksum_compute(buf);
		}

		return (0);
	}

	if (hdr_compressed == compressed) {
		if (!arc_buf_is_shared(buf)) {
			abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd,
			    arc_buf_size(buf));
		}
	} else {
		ASSERT(hdr_compressed);
		ASSERT(!compressed);
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr));

		/*
		 * If the buf is sharing its data with the hdr, unlink it and
		 * allocate a new data buffer for the buf.
		 */
		if (arc_buf_is_shared(buf)) {
			ASSERT(ARC_BUF_COMPRESSED(buf));

			/* We need to give the buf its own b_data */
			buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
			buf->b_data =
			    arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
			arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);

			/* Previously overhead was 0; just add new overhead */
			ARCSTAT_INCR(arcstat_overhead_size,
			    HDR_GET_LSIZE(hdr));
		} else if (ARC_BUF_COMPRESSED(buf)) {
			/* We need to reallocate the buf's b_data */
			arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr),
			    buf);
			buf->b_data =
			    arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);

			/* We increased the size of b_data; update overhead */
			ARCSTAT_INCR(arcstat_overhead_size,
			    HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr));
		}

		/*
		 * Regardless of the buf's previous compression settings, it
		 * should not be compressed at the end of this function.
		 */
		buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;

		/*
		 * Try copying the data from another buf which already has a
		 * decompressed version. If that's not possible, it's time to
		 * bite the bullet and decompress the data from the hdr.
		 */
		if (arc_buf_try_copy_decompressed_data(buf)) {
			/* Skip byteswapping and checksumming (already done) */
			ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, !=, NULL);
			return (0);
		} else {
			error = zio_decompress_data(HDR_GET_COMPRESS(hdr),
			    hdr->b_l1hdr.b_pabd, buf->b_data,
			    HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));

			/*
			 * Absent hardware errors or software bugs, this should
			 * be impossible, but log it anyway so we can debug it.
			 */
			if (error != 0) {
				zfs_dbgmsg(
				    "hdr %p, compress %d, psize %d, lsize %d",
				    hdr, arc_hdr_get_compress(hdr),
				    HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));
				if (hash_lock != NULL)
					mutex_enter(hash_lock);
				arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
				if (hash_lock != NULL)
					mutex_exit(hash_lock);
				return (SET_ERROR(EIO));
			}
		}
	}

byteswap:
	/* Byteswap the buf's data if necessary */
	if (bswap != DMU_BSWAP_NUMFUNCS) {
		ASSERT(!HDR_SHARED_DATA(hdr));
		ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS);
		dmu_ot_byteswap[bswap].ob_func(buf->b_data,
		    HDR_GET_LSIZE(hdr));
	}

	/* Compute the hdr's checksum if necessary */
	arc_cksum_compute(buf);

	return (0);
}
/*
 * If this function is being called to decrypt an encrypted buffer or verify an
 * authenticated one, the key must be loaded and a mapping must be made
 * available in the keystore via spa_keystore_create_mapping() or one of its
 * callers.
 */
int
arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
    boolean_t in_place)
{
	int ret;
	arc_fill_flags_t flags = 0;

	if (in_place)
		flags |= ARC_FILL_IN_PLACE;

	ret = arc_buf_fill(buf, spa, zb, flags);
	if (ret == ECKSUM) {
		/*
		 * Convert authentication and decryption errors to EIO
		 * (and generate an ereport) before leaving the ARC.
		 */
		ret = SET_ERROR(EIO);
		spa_log_error(spa, zb);
		zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION,
		    spa, NULL, zb, NULL, 0, 0);
	}

	return (ret);
}
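/*
 * Illustrative sketch (not part of the original source): a reader that ends
 * up with an encrypted or compressed buf can make it plainly readable,
 * provided the encryption key is loaded. The bookmark variable zb is
 * hypothetical.
 *
 *	if (arc_is_encrypted(buf) ||
 *	    arc_get_compression(buf) != ZIO_COMPRESS_OFF) {
 *		int err = arc_untransform(buf, spa, &zb, B_FALSE);
 *		if (err != 0)
 *			return (err);
 *	}
 */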
/*
 * Increment the amount of evictable space in the arc_state_t's refcount.
 * We account for the space used by the hdr and the arc buf individually
 * so that we can add and remove them from the refcount individually.
 */
static void
arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));

	if (GHOST_STATE(state)) {
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
		(void) refcount_add_many(&state->arcs_esize[type],
		    HDR_GET_LSIZE(hdr), hdr);
		return;
	}

	ASSERT(!GHOST_STATE(state));
	if (hdr->b_l1hdr.b_pabd != NULL) {
		(void) refcount_add_many(&state->arcs_esize[type],
		    arc_hdr_size(hdr), hdr);
	}
	if (HDR_HAS_RABD(hdr)) {
		(void) refcount_add_many(&state->arcs_esize[type],
		    HDR_GET_PSIZE(hdr), hdr);
	}

	for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
	    buf = buf->b_next) {
		if (arc_buf_is_shared(buf))
			continue;
		(void) refcount_add_many(&state->arcs_esize[type],
		    arc_buf_size(buf), buf);
	}
}
/*
 * Decrement the amount of evictable space in the arc_state_t's refcount.
 * We account for the space used by the hdr and the arc buf individually
 * so that we can add and remove them from the refcount individually.
 */
static void
arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));

	if (GHOST_STATE(state)) {
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
		(void) refcount_remove_many(&state->arcs_esize[type],
		    HDR_GET_LSIZE(hdr), hdr);
		return;
	}

	ASSERT(!GHOST_STATE(state));
	if (hdr->b_l1hdr.b_pabd != NULL) {
		(void) refcount_remove_many(&state->arcs_esize[type],
		    arc_hdr_size(hdr), hdr);
	}
	if (HDR_HAS_RABD(hdr)) {
		(void) refcount_remove_many(&state->arcs_esize[type],
		    HDR_GET_PSIZE(hdr), hdr);
	}

	for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
	    buf = buf->b_next) {
		if (arc_buf_is_shared(buf))
			continue;
		(void) refcount_remove_many(&state->arcs_esize[type],
		    arc_buf_size(buf), buf);
	}
}
/*
 * Add a reference to this hdr indicating that someone is actively
 * referencing that memory. When the refcount transitions from 0 to 1,
 * we remove it from the respective arc_state_t list to indicate that
 * it is not evictable.
 */
static void
add_reference(arc_buf_hdr_t *hdr, void *tag)
{
	arc_state_t *state;

	ASSERT(HDR_HAS_L1HDR(hdr));
	if (!MUTEX_HELD(HDR_LOCK(hdr))) {
		ASSERT(hdr->b_l1hdr.b_state == arc_anon);
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
	}

	state = hdr->b_l1hdr.b_state;

	if ((refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
	    (state != arc_anon)) {
		/* We don't use the L2-only state list. */
		if (state != arc_l2c_only) {
			multilist_remove(state->arcs_list[arc_buf_type(hdr)],
			    hdr);
			arc_evictable_space_decrement(hdr, state);
		}
		/* remove the prefetch flag if we get a reference */
		arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
	}
}
/*
 * Remove a reference from this hdr. When the reference transitions from
 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's
 * list making it eligible for eviction.
 */
static int
remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
{
	int cnt;
	arc_state_t *state = hdr->b_l1hdr.b_state;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(state == arc_anon || MUTEX_HELD(hash_lock));
	ASSERT(!GHOST_STATE(state));

	/*
	 * arc_l2c_only counts as a ghost state so we don't need to explicitly
	 * check to prevent usage of the arc_l2c_only list.
	 */
	if (((cnt = refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
	    (state != arc_anon)) {
		multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr);
		ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
		arc_evictable_space_increment(hdr, state);
	}
	return (cnt);
}
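/*
 * Illustrative sketch (not part of the original source): the hold/release
 * pattern around these helpers, with the hdr's hash lock held:
 *
 *	add_reference(hdr, tag);	// hdr leaves the eviction lists
 *	... use the hdr's buffers ...
 *	(void) remove_reference(hdr, hash_lock, tag);	// may become evictable
 */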
/*
 * Returns detailed information about a specific arc buffer. When the
 * state_index argument is set the function will calculate the arc header
 * list position for its arc state. Since this requires a linear traversal
 * callers are strongly encouraged not to do this. However, it can be helpful
 * for targeted analysis so the functionality is provided.
 */
void
arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
{
	arc_buf_hdr_t *hdr = ab->b_hdr;
	l1arc_buf_hdr_t *l1hdr = NULL;
	l2arc_buf_hdr_t *l2hdr = NULL;
	arc_state_t *state = NULL;

	memset(abi, 0, sizeof (arc_buf_info_t));

	if (hdr == NULL)
		return;

	abi->abi_flags = hdr->b_flags;

	if (HDR_HAS_L1HDR(hdr)) {
		l1hdr = &hdr->b_l1hdr;
		state = l1hdr->b_state;
	}
	if (HDR_HAS_L2HDR(hdr))
		l2hdr = &hdr->b_l2hdr;

	if (l1hdr) {
		abi->abi_bufcnt = l1hdr->b_bufcnt;
		abi->abi_access = l1hdr->b_arc_access;
		abi->abi_mru_hits = l1hdr->b_mru_hits;
		abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
		abi->abi_mfu_hits = l1hdr->b_mfu_hits;
		abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
		abi->abi_holds = refcount_count(&l1hdr->b_refcnt);
	}

	if (l2hdr) {
		abi->abi_l2arc_dattr = l2hdr->b_daddr;
		abi->abi_l2arc_hits = l2hdr->b_hits;
	}

	abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON;
	abi->abi_state_contents = arc_buf_type(hdr);
	abi->abi_size = arc_hdr_size(hdr);
}
/*
 * Move the supplied buffer to the indicated state. The hash lock
 * for the buffer must be held by the caller.
 */
static void
arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
    kmutex_t *hash_lock)
{
	arc_state_t *old_state;
	int64_t refcnt;
	uint32_t bufcnt;
	boolean_t update_old, update_new;
	arc_buf_contents_t buftype = arc_buf_type(hdr);

	/*
	 * We almost always have an L1 hdr here, since we call arc_hdr_realloc()
	 * in arc_read() when bringing a buffer out of the L2ARC. However, the
	 * L1 hdr doesn't always exist when we change state to arc_anon before
	 * destroying a header, in which case reallocating to add the L1 hdr is
	 * pointless.
	 */
	if (HDR_HAS_L1HDR(hdr)) {
		old_state = hdr->b_l1hdr.b_state;
		refcnt = refcount_count(&hdr->b_l1hdr.b_refcnt);
		bufcnt = hdr->b_l1hdr.b_bufcnt;
		update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL ||
		    HDR_HAS_RABD(hdr));
	} else {
		old_state = arc_l2c_only;
		refcnt = 0;
		bufcnt = 0;
		update_old = B_FALSE;
	}
	update_new = update_old;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT3P(new_state, !=, old_state);
	ASSERT(!GHOST_STATE(new_state) || bufcnt == 0);
	ASSERT(old_state != arc_anon || bufcnt <= 1);

	/*
	 * If this buffer is evictable, transfer it from the
	 * old state list to the new state list.
	 */
	if (refcnt == 0) {
		if (old_state != arc_anon && old_state != arc_l2c_only) {
			ASSERT(HDR_HAS_L1HDR(hdr));
			multilist_remove(old_state->arcs_list[buftype], hdr);

			if (GHOST_STATE(old_state)) {
				ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
				update_old = B_TRUE;
			}
			arc_evictable_space_decrement(hdr, old_state);
		}
		if (new_state != arc_anon && new_state != arc_l2c_only) {
			/*
			 * An L1 header always exists here, since if we're
			 * moving to some L1-cached state (i.e. not l2c_only or
			 * anonymous), we realloc the header to add an L1hdr
			 * beforehand.
			 */
			ASSERT(HDR_HAS_L1HDR(hdr));
			multilist_insert(new_state->arcs_list[buftype], hdr);

			if (GHOST_STATE(new_state)) {
				ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
				update_new = B_TRUE;
			}
			arc_evictable_space_increment(hdr, new_state);
		}
	}

	ASSERT(!HDR_EMPTY(hdr));
	if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr))
		buf_hash_remove(hdr);

	/* adjust state sizes (ignore arc_l2c_only) */

	if (update_new && new_state != arc_l2c_only) {
		ASSERT(HDR_HAS_L1HDR(hdr));
		if (GHOST_STATE(new_state)) {
			/*
			 * When moving a header to a ghost state, we first
			 * remove all arc buffers. Thus, we'll have a
			 * bufcnt of zero, and no arc buffer to use for
			 * the reference. As a result, we use the arc
			 * header pointer for the reference.
			 */
			(void) refcount_add_many(&new_state->arcs_size,
			    HDR_GET_LSIZE(hdr), hdr);
			ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
			ASSERT(!HDR_HAS_RABD(hdr));
		} else {
			uint32_t buffers = 0;

			/*
			 * Each individual buffer holds a unique reference,
			 * thus we must remove each of these references one
			 * at a time.
			 */
			for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
			    buf = buf->b_next) {
				ASSERT3U(bufcnt, !=, 0);
				buffers++;

				/*
				 * When the arc_buf_t is sharing the data
				 * block with the hdr, the owner of the
				 * reference belongs to the hdr. Only
				 * add to the refcount if the arc_buf_t is
				 * not shared.
				 */
				if (arc_buf_is_shared(buf))
					continue;

				(void) refcount_add_many(
				    &new_state->arcs_size,
				    arc_buf_size(buf), buf);
			}
			ASSERT3U(bufcnt, ==, buffers);

			if (hdr->b_l1hdr.b_pabd != NULL) {
				(void) refcount_add_many(
				    &new_state->arcs_size,
				    arc_hdr_size(hdr), hdr);
			}

			if (HDR_HAS_RABD(hdr)) {
				(void) refcount_add_many(
				    &new_state->arcs_size,
				    HDR_GET_PSIZE(hdr), hdr);
			}
		}
	}

	if (update_old && old_state != arc_l2c_only) {
		ASSERT(HDR_HAS_L1HDR(hdr));
		if (GHOST_STATE(old_state)) {
			ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
			ASSERT(!HDR_HAS_RABD(hdr));

			/*
			 * When moving a header off of a ghost state,
			 * the header will not contain any arc buffers.
			 * We use the arc header pointer for the reference
			 * which is exactly what we did when we put the
			 * header on the ghost state.
			 */
			(void) refcount_remove_many(&old_state->arcs_size,
			    HDR_GET_LSIZE(hdr), hdr);
		} else {
			uint32_t buffers = 0;

			/*
			 * Each individual buffer holds a unique reference,
			 * thus we must remove each of these references one
			 * at a time.
			 */
			for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
			    buf = buf->b_next) {
				ASSERT3U(bufcnt, !=, 0);
				buffers++;

				/*
				 * When the arc_buf_t is sharing the data
				 * block with the hdr, the owner of the
				 * reference belongs to the hdr. Only
				 * add to the refcount if the arc_buf_t is
				 * not shared.
				 */
				if (arc_buf_is_shared(buf))
					continue;

				(void) refcount_remove_many(
				    &old_state->arcs_size, arc_buf_size(buf),
				    buf);
			}
			ASSERT3U(bufcnt, ==, buffers);
			ASSERT(hdr->b_l1hdr.b_pabd != NULL ||
			    HDR_HAS_RABD(hdr));

			if (hdr->b_l1hdr.b_pabd != NULL) {
				(void) refcount_remove_many(
				    &old_state->arcs_size, arc_hdr_size(hdr),
				    hdr);
			}

			if (HDR_HAS_RABD(hdr)) {
				(void) refcount_remove_many(
				    &old_state->arcs_size, HDR_GET_PSIZE(hdr),
				    hdr);
			}
		}
	}

	if (HDR_HAS_L1HDR(hdr))
		hdr->b_l1hdr.b_state = new_state;

	/*
	 * L2 headers should never be on the L2 state list since they don't
	 * have L1 headers allocated.
	 */
	ASSERT(multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_DATA]) &&
	    multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]));
}
void
arc_space_consume(uint64_t space, arc_space_type_t type)
{
	ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

	switch (type) {
	default:
		break;
	case ARC_SPACE_DATA:
		aggsum_add(&astat_data_size, space);
		break;
	case ARC_SPACE_META:
		aggsum_add(&astat_metadata_size, space);
		break;
	case ARC_SPACE_BONUS:
		aggsum_add(&astat_bonus_size, space);
		break;
	case ARC_SPACE_DNODE:
		aggsum_add(&astat_dnode_size, space);
		break;
	case ARC_SPACE_DBUF:
		aggsum_add(&astat_dbuf_size, space);
		break;
	case ARC_SPACE_HDRS:
		aggsum_add(&astat_hdr_size, space);
		break;
	case ARC_SPACE_L2HDRS:
		aggsum_add(&astat_l2_hdr_size, space);
		break;
	}

	if (type != ARC_SPACE_DATA)
		aggsum_add(&arc_meta_used, space);

	aggsum_add(&arc_size, space);
}
void
arc_space_return(uint64_t space, arc_space_type_t type)
{
	ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

	switch (type) {
	default:
		break;
	case ARC_SPACE_DATA:
		aggsum_add(&astat_data_size, -space);
		break;
	case ARC_SPACE_META:
		aggsum_add(&astat_metadata_size, -space);
		break;
	case ARC_SPACE_BONUS:
		aggsum_add(&astat_bonus_size, -space);
		break;
	case ARC_SPACE_DNODE:
		aggsum_add(&astat_dnode_size, -space);
		break;
	case ARC_SPACE_DBUF:
		aggsum_add(&astat_dbuf_size, -space);
		break;
	case ARC_SPACE_HDRS:
		aggsum_add(&astat_hdr_size, -space);
		break;
	case ARC_SPACE_L2HDRS:
		aggsum_add(&astat_l2_hdr_size, -space);
		break;
	}

	if (type != ARC_SPACE_DATA) {
		ASSERT(aggsum_compare(&arc_meta_used, space) >= 0);
		/*
		 * We use the upper bound here rather than the precise value
		 * because the arc_meta_max value doesn't need to be
		 * precise. It's only consumed by humans via arcstats.
		 */
		if (arc_meta_max < aggsum_upper_bound(&arc_meta_used))
			arc_meta_max = aggsum_upper_bound(&arc_meta_used);
		aggsum_add(&arc_meta_used, -space);
	}

	ASSERT(aggsum_compare(&arc_size, space) >= 0);
	aggsum_add(&arc_size, -space);
}
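/*
 * Illustrative sketch (not part of the original source): a consumer that
 * allocates ARC-accounted metadata pairs each arc_space_consume() with a
 * matching arc_space_return() of the same size and type when it frees the
 * memory. The structure name my_struct_t is hypothetical.
 *
 *	arc_space_consume(sizeof (my_struct_t), ARC_SPACE_META);
 *	... use the allocation ...
 *	arc_space_return(sizeof (my_struct_t), ARC_SPACE_META);
 */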
/*
 * Given a hdr and a buf, returns whether that buf can share its b_data buffer
 * with the hdr's b_pabd.
 */
static boolean_t
arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	/*
	 * The criteria for sharing a hdr's data are:
	 * 1. the buffer is not encrypted
	 * 2. the hdr's compression matches the buf's compression
	 * 3. the hdr doesn't need to be byteswapped
	 * 4. the hdr isn't already being shared
	 * 5. the buf is either compressed or it is the last buf in the hdr list
	 *
	 * Criterion #5 maintains the invariant that shared uncompressed
	 * bufs must be the final buf in the hdr's b_buf list. Reading this, you
	 * might ask, "if a compressed buf is allocated first, won't that be the
	 * last thing in the list?", but in that case it's impossible to create
	 * a shared uncompressed buf anyway (because the hdr must be compressed
	 * to have the compressed buf). You might also think that #3 is
	 * sufficient to make this guarantee, however it's possible
	 * (specifically in the rare L2ARC write race mentioned in
	 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that
	 * is sharable, but wasn't at the time of its allocation. Rather than
	 * allow a new shared uncompressed buf to be created and then shuffle
	 * the list around to make it the last element, this simply disallows
	 * sharing if the new buf isn't the first to be added.
	 */
	ASSERT3P(buf->b_hdr, ==, hdr);
	boolean_t hdr_compressed =
	    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF;
	boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0;
	return (!ARC_BUF_ENCRYPTED(buf) &&
	    buf_compressed == hdr_compressed &&
	    hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS &&
	    !HDR_SHARED_DATA(hdr) &&
	    (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf)));
}
/*
 * Allocate a buf for this hdr. If you care about the data that's in the hdr,
 * or if you want a compressed buffer, pass those flags in. Returns 0 if the
 * copy was made successfully, or an error code otherwise.
 */
static int
arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb,
    void *tag, boolean_t encrypted, boolean_t compressed, boolean_t noauth,
    boolean_t fill, arc_buf_t **ret)
{
	arc_buf_t *buf;
	arc_fill_flags_t flags = ARC_FILL_LOCKED;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
	VERIFY(hdr->b_type == ARC_BUFC_DATA ||
	    hdr->b_type == ARC_BUFC_METADATA);
	ASSERT3P(ret, !=, NULL);
	ASSERT3P(*ret, ==, NULL);
	IMPLY(encrypted, compressed);

	hdr->b_l1hdr.b_mru_hits = 0;
	hdr->b_l1hdr.b_mru_ghost_hits = 0;
	hdr->b_l1hdr.b_mfu_hits = 0;
	hdr->b_l1hdr.b_mfu_ghost_hits = 0;
	hdr->b_l1hdr.b_l2_hits = 0;

	buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
	buf->b_hdr = hdr;
	buf->b_data = NULL;
	buf->b_next = hdr->b_l1hdr.b_buf;
	buf->b_flags = 0;

	add_reference(hdr, tag);

	/*
	 * We're about to change the hdr's b_flags. We must either
	 * hold the hash_lock or be undiscoverable.
	 */
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Only honor requests for compressed bufs if the hdr is actually
	 * compressed. This must be overridden if the buffer is encrypted since
	 * encrypted buffers cannot be decompressed.
	 */
	if (encrypted) {
		buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
		buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED;
		flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED;
	} else if (compressed &&
	    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) {
		buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
		flags |= ARC_FILL_COMPRESSED;
	}

	if (noauth)
		flags |= ARC_FILL_NOAUTH;

	/*
	 * If the hdr's data can be shared then we share the data buffer and
	 * set the appropriate bit in the hdr's b_flags to indicate the hdr is
	 * sharing its b_pabd with the arc_buf_t. Otherwise, we
	 * allocate a new buffer to store the buf's data.
	 *
	 * There are two additional restrictions here because we're sharing
	 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be
	 * actively involved in an L2ARC write, because if this buf is used by
	 * an arc_write() then the hdr's data buffer will be released when the
	 * write completes, even though the L2ARC write might still be using it.
	 * Second, the hdr's ABD must be linear so that the buf's user doesn't
	 * need to be ABD-aware.
	 */
	boolean_t can_share = arc_can_share(hdr, buf) && !HDR_L2_WRITING(hdr) &&
	    hdr->b_l1hdr.b_pabd != NULL && abd_is_linear(hdr->b_l1hdr.b_pabd);

	/* Set up b_data and sharing */
	if (can_share) {
		buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd);
		buf->b_flags |= ARC_BUF_FLAG_SHARED;
		arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
	} else {
		buf->b_data =
		    arc_get_data_buf(hdr, arc_buf_size(buf), buf);
		ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
	}
	VERIFY3P(buf->b_data, !=, NULL);

	hdr->b_l1hdr.b_buf = buf;
	hdr->b_l1hdr.b_bufcnt += 1;
	if (encrypted)
		hdr->b_crypt_hdr.b_ebufcnt += 1;

	/*
	 * If the user wants the data from the hdr, we need to either copy or
	 * decompress the data.
	 */
	if (fill) {
		ASSERT3P(zb, !=, NULL);
		return (arc_buf_fill(buf, spa, zb, flags));
	}

	return (0);
}
static char *arc_onloan_tag = "onloan";

static void
arc_loaned_bytes_update(int64_t delta)
{
	atomic_add_64(&arc_loaned_bytes, delta);

	/* assert that it did not wrap around */
	ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
}
/*
 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
 * flight data by arc_tempreserve_space() until they are "returned". Loaned
 * buffers must be returned to the arc before they can be used by the DMU or
 * freed.
 */
arc_buf_t *
arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size)
{
	arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag,
	    is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size);

	arc_loaned_bytes_update(arc_buf_size(buf));

	return (buf);
}

arc_buf_t *
arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag,
	    psize, lsize, compression_type);

	arc_loaned_bytes_update(arc_buf_size(buf));

	return (buf);
}
arc_buf_t *
arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder,
    const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
    dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj,
	    byteorder, salt, iv, mac, ot, psize, lsize, compression_type);

	atomic_add_64(&arc_loaned_bytes, psize);

	return (buf);
}
/*
 * Return a loaned arc buffer to the arc.
 */
void
arc_return_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
	(void) refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);

	arc_loaned_bytes_update(-arc_buf_size(buf));
}

/* Detach an arc_buf from a dbuf (tag) */
void
arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
	(void) refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);

	arc_loaned_bytes_update(arc_buf_size(buf));
}
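/*
 * Illustrative sketch (not part of the original source): a typical loan
 * cycle for an anonymous data buffer. The tag value db is hypothetical.
 *
 *	arc_buf_t *abuf = arc_loan_buf(spa, B_FALSE, size);
 *	... fill abuf->b_data ...
 *	arc_return_buf(abuf, db);
 */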
static void
l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type)
{
	l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP);

	df->l2df_abd = abd;
	df->l2df_size = size;
	df->l2df_type = type;
	mutex_enter(&l2arc_free_on_write_mtx);
	list_insert_head(l2arc_free_on_write, df);
	mutex_exit(&l2arc_free_on_write_mtx);
}
static void
arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);
	uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);

	/* protected by hash lock, if in the hash table */
	if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT(state != arc_anon && state != arc_l2c_only);

		(void) refcount_remove_many(&state->arcs_esize[type],
		    size, hdr);
	}
	(void) refcount_remove_many(&state->arcs_size, size, hdr);
	if (type == ARC_BUFC_METADATA) {
		arc_space_return(size, ARC_SPACE_META);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		arc_space_return(size, ARC_SPACE_DATA);
	}

	if (free_rdata) {
		l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type);
	} else {
		l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type);
	}
}
/*
 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the
 * data buffer, we transfer the refcount ownership to the hdr and update
 * the appropriate kstats.
 */
static void
arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(arc_can_share(hdr, buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
	ASSERT(!ARC_BUF_ENCRYPTED(buf));
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Start sharing the data buffer. We transfer the
	 * refcount ownership to the hdr since it always owns
	 * the refcount whenever an arc_buf_t is shared.
	 */
	refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, buf, hdr);
	hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf));
	abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd,
	    HDR_ISTYPE_METADATA(hdr));
	arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
	buf->b_flags |= ARC_BUF_FLAG_SHARED;

	/*
	 * Since we've transferred ownership to the hdr we need
	 * to increment its compressed and uncompressed kstats and
	 * decrement the overhead size.
	 */
	ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr));
	ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
	ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf));
}
static void
arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(arc_buf_is_shared(buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * We are no longer sharing this buffer so we need
	 * to transfer its ownership to the rightful owner.
	 */
	refcount_transfer_ownership(&hdr->b_l1hdr.b_state->arcs_size, hdr, buf);
	arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
	abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd);
	abd_put(hdr->b_l1hdr.b_pabd);
	hdr->b_l1hdr.b_pabd = NULL;
	buf->b_flags &= ~ARC_BUF_FLAG_SHARED;

	/*
	 * Since the buffer is no longer shared between
	 * the arc buf and the hdr, count it as overhead.
	 */
	ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr));
	ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
	ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
}
/*
 * Remove an arc_buf_t from the hdr's buf list and return the last
 * arc_buf_t on the list. If no buffers remain on the list then return
 * NULL.
 */
static arc_buf_t *
arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	arc_buf_t **bufp = &hdr->b_l1hdr.b_buf;
	arc_buf_t *lastbuf = NULL;

	/*
	 * Remove the buf from the hdr list and locate the last
	 * remaining buffer on the list.
	 */
	while (*bufp != NULL) {
		if (*bufp == buf)
			*bufp = buf->b_next;

		/*
		 * If we've removed a buffer in the middle of
		 * the list then update the lastbuf and update
		 * bufp appropriately.
		 */
		if (*bufp != NULL) {
			lastbuf = *bufp;
			bufp = &(*bufp)->b_next;
		}
	}
	buf->b_next = NULL;
	ASSERT3P(lastbuf, !=, buf);
	IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL);
	IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL);
	IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf));

	return (lastbuf);
}
/*
 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's
 * list and free it.
 */
static void
arc_buf_destroy_impl(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * Free up the data associated with the buf but only if we're not
	 * sharing this with the hdr. If we are sharing it with the hdr, the
	 * hdr is responsible for doing the free.
	 */
	if (buf->b_data != NULL) {
		/*
		 * We're about to change the hdr's b_flags. We must either
		 * hold the hash_lock or be undiscoverable.
		 */
		ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		if (arc_buf_is_shared(buf)) {
			arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
		} else {
			uint64_t size = arc_buf_size(buf);
			arc_free_data_buf(hdr, buf->b_data, size, buf);
			ARCSTAT_INCR(arcstat_overhead_size, -size);
		}
		buf->b_data = NULL;

		ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
		hdr->b_l1hdr.b_bufcnt -= 1;

		if (ARC_BUF_ENCRYPTED(buf)) {
			hdr->b_crypt_hdr.b_ebufcnt -= 1;

			/*
			 * If we have no more encrypted buffers and we've
			 * already gotten a copy of the decrypted data we can
			 * free b_rabd to save some space.
			 */
			if (hdr->b_crypt_hdr.b_ebufcnt == 0 &&
			    HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL &&
			    !HDR_IO_IN_PROGRESS(hdr)) {
				arc_hdr_free_abd(hdr, B_TRUE);
			}
		}
	}

	arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);

	if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) {
		/*
		 * If the current arc_buf_t is sharing its data buffer with the
		 * hdr, then reassign the hdr's b_pabd to share it with the new
		 * buffer at the end of the list. The shared buffer is always
		 * the last one on the hdr's buffer list.
		 *
		 * There is an equivalent case for compressed bufs, but since
		 * they aren't guaranteed to be the last buf in the list and
		 * that is an exceedingly rare case, we just allow that space be
		 * wasted temporarily. We must also be careful not to share
		 * encrypted buffers, since they cannot be shared.
		 */
		if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) {
			/* Only one buf can be shared at once */
			VERIFY(!arc_buf_is_shared(lastbuf));
			/* hdr is uncompressed so can't have compressed buf */
			VERIFY(!ARC_BUF_COMPRESSED(lastbuf));

			ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
			arc_hdr_free_abd(hdr, B_FALSE);

			/*
			 * We must setup a new shared block between the
			 * last buffer and the hdr. The data would have
			 * been allocated by the arc buf so we need to transfer
			 * ownership to the hdr since it's now being shared.
			 */
			arc_share_buf(hdr, lastbuf);
		}
	} else if (HDR_SHARED_DATA(hdr)) {
		/*
		 * Uncompressed shared buffers are always at the end
		 * of the list. Compressed buffers don't have the
		 * same requirements. This makes it hard to
		 * simply assert that the lastbuf is shared so
		 * we rely on the hdr's compression flags to determine
		 * if we have a compressed, shared buffer.
		 */
		ASSERT3P(lastbuf, !=, NULL);
		ASSERT(arc_buf_is_shared(lastbuf) ||
		    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
	}

	/*
	 * Free the checksum if we're removing the last uncompressed buf from
	 * this hdr.
	 */
	if (!arc_hdr_has_uncompressed_buf(hdr)) {
		arc_cksum_free(hdr);
	}

	/* clean up the buf */
	buf->b_hdr = NULL;
	kmem_cache_free(buf_cache, buf);
}
static void
arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, boolean_t alloc_rdata)
{
	uint64_t size;

	ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata);
	IMPLY(alloc_rdata, HDR_PROTECTED(hdr));

	if (alloc_rdata) {
		size = HDR_GET_PSIZE(hdr);
		ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL);
		hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr);
		ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL);
		ARCSTAT_INCR(arcstat_raw_size, size);
	} else {
		size = arc_hdr_size(hdr);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr);
		ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
	}

	ARCSTAT_INCR(arcstat_compressed_size, size);
	ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
}
static void
arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata)
{
	uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
	IMPLY(free_rdata, HDR_HAS_RABD(hdr));

	/*
	 * If the hdr is currently being written to the l2arc then
	 * we defer freeing the data by adding it to the l2arc_free_on_write
	 * list. The l2arc will free the data once it's finished
	 * writing it to the l2arc device.
	 */
	if (HDR_L2_WRITING(hdr)) {
		arc_hdr_free_on_write(hdr, free_rdata);
		ARCSTAT_BUMP(arcstat_l2_free_on_write);
	} else if (free_rdata) {
		arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr);
	} else {
		arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr);
	}

	if (free_rdata) {
		hdr->b_crypt_hdr.b_rabd = NULL;
		ARCSTAT_INCR(arcstat_raw_size, -size);
	} else {
		hdr->b_l1hdr.b_pabd = NULL;
	}

	if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr))
		hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;

	ARCSTAT_INCR(arcstat_compressed_size, -size);
	ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
}
static arc_buf_hdr_t *
arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize,
    boolean_t protected, enum zio_compress compression_type,
    arc_buf_contents_t type, boolean_t alloc_rdata)
{
	arc_buf_hdr_t *hdr;

	VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA);
	if (protected) {
		hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE);
	} else {
		hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
	}

	ASSERT(HDR_EMPTY(hdr));
	ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
	HDR_SET_PSIZE(hdr, psize);
	HDR_SET_LSIZE(hdr, lsize);
	hdr->b_spa = spa;
	hdr->b_type = type;
	arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR);
	arc_hdr_set_compress(hdr, compression_type);
	if (protected)
		arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);

	hdr->b_l1hdr.b_state = arc_anon;
	hdr->b_l1hdr.b_arc_access = 0;
	hdr->b_l1hdr.b_bufcnt = 0;
	hdr->b_l1hdr.b_buf = NULL;

	/*
	 * Allocate the hdr's buffer. This will contain either
	 * the compressed or uncompressed data depending on the block
	 * it references and compressed arc enablement.
	 */
	arc_hdr_alloc_abd(hdr, alloc_rdata);
	ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));

	return (hdr);
}
/*
 * Transition between the two allocation states for the arc_buf_hdr struct.
 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
 * version is used when a cache buffer is only in the L2ARC in order to reduce
 * memory usage.
 */
static arc_buf_hdr_t *
arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
{
	ASSERT(HDR_HAS_L2HDR(hdr));

	arc_buf_hdr_t *nhdr;
	l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;

	ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) ||
	    (old == hdr_l2only_cache && new == hdr_full_cache));

	/*
	 * If the caller wanted a new full header and the header is to be
	 * encrypted we will actually allocate the header from the full crypt
	 * cache instead. The same applies to freeing from the old cache.
	 */
	if (HDR_PROTECTED(hdr) && new == hdr_full_cache)
		new = hdr_full_crypt_cache;
	if (HDR_PROTECTED(hdr) && old == hdr_full_cache)
		old = hdr_full_crypt_cache;

	nhdr = kmem_cache_alloc(new, KM_PUSHPAGE);

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
	buf_hash_remove(hdr);

	bcopy(hdr, nhdr, HDR_L2ONLY_SIZE);

	if (new == hdr_full_cache || new == hdr_full_crypt_cache) {
		arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR);
		/*
		 * arc_access and arc_change_state need to be aware that a
		 * header has just come out of L2ARC, so we set its state to
		 * l2c_only even though it's about to change.
		 */
		nhdr->b_l1hdr.b_state = arc_l2c_only;

		/* Verify previous threads set to NULL before freeing */
		ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
	} else {
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

		/*
		 * If we've reached here, we must have been called from
		 * arc_evict_hdr(), as such we should have already been
		 * removed from any ghost list we were previously on
		 * (which protects us from racing with arc_evict_state),
		 * thus no locking is needed during this check.
		 */
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));

		/*
		 * A buffer must not be moved into the arc_l2c_only
		 * state if it's not finished being written out to the
		 * l2arc device. Otherwise, the b_l1hdr.b_pabd field
		 * might try to be accessed, even though it was removed.
		 */
		VERIFY(!HDR_L2_WRITING(hdr));
		VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));

		arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR);
	}
	/*
	 * The header has been reallocated so we need to re-insert it into any
	 * lists it was on.
	 */
	(void) buf_hash_insert(nhdr, NULL);

	ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node));

	mutex_enter(&dev->l2ad_mtx);

	/*
	 * We must place the realloc'ed header back into the list at
	 * the same spot. Otherwise, if it's placed earlier in the list,
	 * l2arc_write_buffers() could find it during the function's
	 * write phase, and try to write it out to the l2arc.
	 */
	list_insert_after(&dev->l2ad_buflist, hdr, nhdr);
	list_remove(&dev->l2ad_buflist, hdr);

	mutex_exit(&dev->l2ad_mtx);

	/*
	 * Since we're using the pointer address as the tag when
	 * incrementing and decrementing the l2ad_alloc refcount, we
	 * must remove the old pointer (that we're about to destroy) and
	 * add the new pointer to the refcount. Otherwise we'd remove
	 * the wrong pointer address when calling arc_hdr_destroy() later.
	 */
	(void) refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
	(void) refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(nhdr), nhdr);

	buf_discard_identity(hdr);
	kmem_cache_free(old, hdr);

	return (nhdr);
}
/*
 * This function allows an L1 header to be reallocated as a crypt
 * header and vice versa. If we are going to a crypt header, the
 * new fields will be zeroed out.
 */
static arc_buf_hdr_t *
arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt)
{
	arc_buf_hdr_t *nhdr;
	arc_buf_t *buf;
	kmem_cache_t *ncache, *ocache;
	unsigned nsize, osize;

	/*
	 * This function requires that hdr is in the arc_anon state.
	 * Therefore it won't have any L2ARC data for us to worry
	 * about copying.
	 */
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!HDR_HAS_L2HDR(hdr));
	ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt);
	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
	ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node));
	ASSERT3P(hdr->b_hash_next, ==, NULL);

	if (need_crypt) {
		ncache = hdr_full_crypt_cache;
		nsize = sizeof (hdr->b_crypt_hdr);
		ocache = hdr_full_cache;
		osize = HDR_FULL_SIZE;
	} else {
		ncache = hdr_full_cache;
		nsize = HDR_FULL_SIZE;
		ocache = hdr_full_crypt_cache;
		osize = sizeof (hdr->b_crypt_hdr);
	}

	nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE);

	/*
	 * Copy all members that aren't locks or condvars to the new header.
	 * No lists are pointing to us (as we asserted above), so we don't
	 * need to worry about the list nodes.
	 */
	nhdr->b_dva = hdr->b_dva;
	nhdr->b_birth = hdr->b_birth;
	nhdr->b_type = hdr->b_type;
	nhdr->b_flags = hdr->b_flags;
	nhdr->b_psize = hdr->b_psize;
	nhdr->b_lsize = hdr->b_lsize;
	nhdr->b_spa = hdr->b_spa;
	nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum;
	nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt;
	nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap;
	nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state;
	nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access;
	nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits;
	nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits;
	nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits;
	nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits;
	nhdr->b_l1hdr.b_l2_hits = hdr->b_l1hdr.b_l2_hits;
	nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb;
	nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd;

	/*
	 * This refcount_add() exists only to ensure that the individual
	 * arc buffers always point to a header that is referenced, avoiding
	 * a small race condition that could trigger ASSERTs.
	 */
	(void) refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG);
	nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf;
	for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
		mutex_enter(&buf->b_evict_lock);
		buf->b_hdr = nhdr;
		mutex_exit(&buf->b_evict_lock);
	}

	refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt);
	(void) refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG);
	ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));

	if (need_crypt) {
		arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED);
	} else {
		arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED);
	}

	/* unset all members of the original hdr */
	bzero(&hdr->b_dva, sizeof (dva_t));
	hdr->b_birth = 0;
	hdr->b_type = ARC_BUFC_INVALID;
	hdr->b_flags = 0;
	hdr->b_psize = 0;
	hdr->b_lsize = 0;
	hdr->b_spa = 0;
	hdr->b_l1hdr.b_freeze_cksum = NULL;
	hdr->b_l1hdr.b_buf = NULL;
	hdr->b_l1hdr.b_bufcnt = 0;
	hdr->b_l1hdr.b_byteswap = 0;
	hdr->b_l1hdr.b_state = NULL;
	hdr->b_l1hdr.b_arc_access = 0;
	hdr->b_l1hdr.b_mru_hits = 0;
	hdr->b_l1hdr.b_mru_ghost_hits = 0;
	hdr->b_l1hdr.b_mfu_hits = 0;
	hdr->b_l1hdr.b_mfu_ghost_hits = 0;
	hdr->b_l1hdr.b_l2_hits = 0;
	hdr->b_l1hdr.b_acb = NULL;
	hdr->b_l1hdr.b_pabd = NULL;

	if (ocache == hdr_full_crypt_cache) {
		ASSERT(!HDR_HAS_RABD(hdr));
		hdr->b_crypt_hdr.b_ot = DMU_OT_NONE;
		hdr->b_crypt_hdr.b_ebufcnt = 0;
		hdr->b_crypt_hdr.b_dsobj = 0;
		bzero(hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
		bzero(hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
		bzero(hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
	}

	buf_discard_identity(hdr);
	kmem_cache_free(ocache, hdr);

	return (nhdr);
}
/*
 * This function is used by the send / receive code to convert a newly
 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It
 * is also used to allow the root objset block to be updated without altering
 * its embedded MACs. Both block types will always be uncompressed so we do not
 * have to worry about compression type or psize.
 */
void
arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder,
    dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv,
    const uint8_t *mac)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET);
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);

	buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED);
	if (!HDR_PROTECTED(hdr))
		hdr = arc_hdr_realloc_crypt(hdr, B_TRUE);
	hdr->b_crypt_hdr.b_dsobj = dsobj;
	hdr->b_crypt_hdr.b_ot = ot;
	hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
	    DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
	if (!arc_hdr_has_uncompressed_buf(hdr))
		arc_cksum_free(hdr);

	if (salt != NULL)
		bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
	if (iv != NULL)
		bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
	if (mac != NULL)
		bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
}
/*
 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller.
 * The buf is returned thawed since we expect the consumer to modify it.
 */
arc_buf_t *
arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size)
{
	arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size,
	    B_FALSE, ZIO_COMPRESS_OFF, type, B_FALSE);
	ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

	arc_buf_t *buf = NULL;
	VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE,
	    B_FALSE, B_FALSE, &buf));
	arc_buf_thaw(buf);

	return (buf);
}
/*
 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this
 * for bufs containing metadata.
 */
arc_buf_t *
arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	ASSERT3U(lsize, >, 0);
	ASSERT3U(lsize, >=, psize);
	ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF);
	ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);

	arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
	    B_FALSE, compression_type, ARC_BUFC_DATA, B_FALSE);
	ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

	arc_buf_t *buf = NULL;
	VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE,
	    B_TRUE, B_FALSE, B_FALSE, &buf));
	arc_buf_thaw(buf);
	ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

	if (!arc_buf_is_shared(buf)) {
		/*
		 * To ensure that the hdr has the correct data in it if we call
		 * arc_untransform() on this buf before it's been written to
		 * disk, it's easiest if we just set up sharing between the
		 * buf and the hdr.
		 */
		ASSERT(!abd_is_linear(hdr->b_l1hdr.b_pabd));
		arc_hdr_free_abd(hdr, B_FALSE);
		arc_share_buf(hdr, buf);
	}

	return (buf);
}
arc_buf_t *
arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj, boolean_t byteorder,
    const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
    dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	arc_buf_hdr_t *hdr;
	arc_buf_t *buf;
	arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ?
	    ARC_BUFC_METADATA : ARC_BUFC_DATA;

	ASSERT3U(lsize, >, 0);
	ASSERT3U(lsize, >=, psize);
	ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF);
	ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);

	hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE,
	    compression_type, type, B_TRUE);
	ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

	hdr->b_crypt_hdr.b_dsobj = dsobj;
	hdr->b_crypt_hdr.b_ot = ot;
	hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
	    DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
	bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
	bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
	bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);

	/*
	 * This buffer will be considered encrypted even if the ot is not an
	 * encrypted type. It will become authenticated instead in
	 * arc_write_ready().
	 */
	VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE,
	    B_FALSE, B_FALSE, &buf));
	arc_buf_thaw(buf);
	ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

	return (buf);
}
static void
arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
{
	l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr;
	l2arc_dev_t *dev = l2hdr->b_dev;
	uint64_t psize = arc_hdr_size(hdr);

	ASSERT(MUTEX_HELD(&dev->l2ad_mtx));
	ASSERT(HDR_HAS_L2HDR(hdr));

	list_remove(&dev->l2ad_buflist, hdr);

	ARCSTAT_INCR(arcstat_l2_psize, -psize);
	ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr));

	vdev_space_update(dev->l2ad_vdev, -psize, 0, 0);

	(void) refcount_remove_many(&dev->l2ad_alloc, psize, hdr);
	arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
}
static void
arc_hdr_destroy(arc_buf_hdr_t *hdr)
{
	if (HDR_HAS_L1HDR(hdr)) {
		ASSERT(hdr->b_l1hdr.b_buf == NULL ||
		    hdr->b_l1hdr.b_bufcnt > 0);
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	}
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));
	ASSERT(!HDR_IN_HASH_TABLE(hdr));

	if (!HDR_EMPTY(hdr))
		buf_discard_identity(hdr);

	if (HDR_HAS_L2HDR(hdr)) {
		l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
		boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx);

		if (!buflist_held)
			mutex_enter(&dev->l2ad_mtx);

		/*
		 * Even though we checked this conditional above, we
		 * need to check this again now that we have the
		 * l2ad_mtx. This is because we could be racing with
		 * another thread calling l2arc_evict() which might have
		 * destroyed this header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx. If that happens, we don't
		 * want to re-destroy the header's L2 portion.
		 */
		if (HDR_HAS_L2HDR(hdr))
			arc_hdr_l2hdr_destroy(hdr);

		if (!buflist_held)
			mutex_exit(&dev->l2ad_mtx);
	}

	if (HDR_HAS_L1HDR(hdr)) {
		arc_cksum_free(hdr);

		while (hdr->b_l1hdr.b_buf != NULL)
			arc_buf_destroy_impl(hdr->b_l1hdr.b_buf);

		if (hdr->b_l1hdr.b_pabd != NULL)
			arc_hdr_free_abd(hdr, B_FALSE);

		if (HDR_HAS_RABD(hdr))
			arc_hdr_free_abd(hdr, B_TRUE);
	}

	ASSERT3P(hdr->b_hash_next, ==, NULL);
	if (HDR_HAS_L1HDR(hdr)) {
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);

		if (!HDR_PROTECTED(hdr)) {
			kmem_cache_free(hdr_full_cache, hdr);
		} else {
			kmem_cache_free(hdr_full_crypt_cache, hdr);
		}
	} else {
		kmem_cache_free(hdr_l2only_cache, hdr);
	}
}
void
arc_buf_destroy(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	kmutex_t *hash_lock = HDR_LOCK(hdr);

	if (hdr->b_l1hdr.b_state == arc_anon) {
		ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		VERIFY0(remove_reference(hdr, NULL, tag));
		arc_hdr_destroy(hdr);
		return;
	}

	mutex_enter(hash_lock);
	ASSERT3P(hdr, ==, buf->b_hdr);
	ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon);
	ASSERT3P(buf->b_data, !=, NULL);

	(void) remove_reference(hdr, hash_lock, tag);
	arc_buf_destroy_impl(buf);
	mutex_exit(hash_lock);
}
/*
 * Evict the arc_buf_hdr that is provided as a parameter. The resultant
 * state of the header is dependent on its state prior to entering this
 * function. The following transitions are possible:
 *
 *    - arc_mru -> arc_mru_ghost
 *    - arc_mfu -> arc_mfu_ghost
 *    - arc_mru_ghost -> arc_l2c_only
 *    - arc_mru_ghost -> deleted
 *    - arc_mfu_ghost -> arc_l2c_only
 *    - arc_mfu_ghost -> deleted
 */
static int64_t
arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
	arc_state_t *evicted_state, *state;
	int64_t bytes_evicted = 0;
	int min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ?
	    arc_min_prescient_prefetch_ms : arc_min_prefetch_ms;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT(HDR_HAS_L1HDR(hdr));

	state = hdr->b_l1hdr.b_state;
	if (GHOST_STATE(state)) {
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);

		/*
		 * l2arc_write_buffers() relies on a header's L1 portion
		 * (i.e. its b_pabd field) during its write phase.
		 * Thus, we cannot push a header onto the arc_l2c_only
		 * state (removing its L1 piece) until the header is
		 * done being written to the l2arc.
		 */
		if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) {
			ARCSTAT_BUMP(arcstat_evict_l2_skip);
			return (bytes_evicted);
		}

		ARCSTAT_BUMP(arcstat_deleted);
		bytes_evicted += HDR_GET_LSIZE(hdr);

		DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr);

		if (HDR_HAS_L2HDR(hdr)) {
			ASSERT(hdr->b_l1hdr.b_pabd == NULL);
			ASSERT(!HDR_HAS_RABD(hdr));
			/*
			 * This buffer is cached on the 2nd Level ARC;
			 * don't destroy the header.
			 */
			arc_change_state(arc_l2c_only, hdr, hash_lock);
			/*
			 * dropping from L1+L2 cached to L2-only,
			 * realloc to remove the L1 header.
			 */
			hdr = arc_hdr_realloc(hdr, hdr_full_cache,
			    hdr_l2only_cache);
		} else {
			arc_change_state(arc_anon, hdr, hash_lock);
			arc_hdr_destroy(hdr);
		}
		return (bytes_evicted);
	}

	ASSERT(state == arc_mru || state == arc_mfu);
	evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost;

	/* prefetch buffers have a minimum lifespan */
	if (HDR_IO_IN_PROGRESS(hdr) ||
	    ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) &&
	    ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access <
	    MSEC_TO_TICK(min_lifetime))) {
		ARCSTAT_BUMP(arcstat_evict_skip);
		return (bytes_evicted);
	}

	ASSERT0(refcount_count(&hdr->b_l1hdr.b_refcnt));
	while (hdr->b_l1hdr.b_buf) {
		arc_buf_t *buf = hdr->b_l1hdr.b_buf;
		if (!mutex_tryenter(&buf->b_evict_lock)) {
			ARCSTAT_BUMP(arcstat_mutex_miss);
			break;
		}
		if (buf->b_data != NULL)
			bytes_evicted += HDR_GET_LSIZE(hdr);
		mutex_exit(&buf->b_evict_lock);
		arc_buf_destroy_impl(buf);
	}

	if (HDR_HAS_L2HDR(hdr)) {
		ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr));
	} else {
		if (l2arc_write_eligible(hdr->b_spa, hdr)) {
			ARCSTAT_INCR(arcstat_evict_l2_eligible,
			    HDR_GET_LSIZE(hdr));
		} else {
			ARCSTAT_INCR(arcstat_evict_l2_ineligible,
			    HDR_GET_LSIZE(hdr));
		}
	}

	if (hdr->b_l1hdr.b_bufcnt == 0) {
		arc_cksum_free(hdr);

		bytes_evicted += arc_hdr_size(hdr);

		/*
		 * If this hdr is being evicted and has a compressed
		 * buffer then we discard it here before we change states.
		 * This ensures that the accounting is updated correctly
		 * in arc_free_data_impl().
		 */
		if (hdr->b_l1hdr.b_pabd != NULL)
			arc_hdr_free_abd(hdr, B_FALSE);

		if (HDR_HAS_RABD(hdr))
			arc_hdr_free_abd(hdr, B_TRUE);

		arc_change_state(evicted_state, hdr, hash_lock);
		ASSERT(HDR_IN_HASH_TABLE(hdr));
		arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
		DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr);
	}

	return (bytes_evicted);
}
static uint64_t
arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
    uint64_t spa, int64_t bytes)
{
	multilist_sublist_t *mls;
	uint64_t bytes_evicted = 0;
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	int evict_count = 0;

	ASSERT3P(marker, !=, NULL);
	IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

	mls = multilist_sublist_lock(ml, idx);

	for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL;
	    hdr = multilist_sublist_prev(mls, marker)) {
		if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) ||
		    (evict_count >= zfs_arc_evict_batch_limit))
			break;

		/*
		 * To keep our iteration location, move the marker
		 * forward. Since we're not holding hdr's hash lock, we
		 * must be very careful and not remove 'hdr' from the
		 * sublist. Otherwise, other consumers might mistake the
		 * 'hdr' as not being on a sublist when they call the
		 * multilist_link_active() function (they all rely on
		 * the hash lock protecting concurrent insertions and
		 * removals). multilist_sublist_move_forward() was
		 * specifically implemented to ensure this is the case
		 * (only 'marker' will be removed and re-inserted).
		 */
		multilist_sublist_move_forward(mls, marker);

		/*
		 * The only case where the b_spa field should ever be
		 * zero, is the marker headers inserted by
		 * arc_evict_state(). It's possible for multiple threads
		 * to be calling arc_evict_state() concurrently (e.g.
		 * dsl_pool_close() and zio_inject_fault()), so we must
		 * skip any markers we see from these other threads.
		 */
		if (hdr->b_spa == 0)
			continue;

		/* we're only interested in evicting buffers of a certain spa */
		if (spa != 0 && hdr->b_spa != spa) {
			ARCSTAT_BUMP(arcstat_evict_skip);
			continue;
		}

		hash_lock = HDR_LOCK(hdr);

		/*
		 * We aren't calling this function from any code path
		 * that would already be holding a hash lock, so we're
		 * asserting on this assumption to be defensive in case
		 * this ever changes. Without this check, it would be
		 * possible to incorrectly increment arcstat_mutex_miss
		 * below (e.g. if the code changed such that we called
		 * this function with a hash lock held).
		 */
		ASSERT(!MUTEX_HELD(hash_lock));

		if (mutex_tryenter(hash_lock)) {
			uint64_t evicted = arc_evict_hdr(hdr, hash_lock);
			mutex_exit(hash_lock);

			bytes_evicted += evicted;

			/*
			 * If evicted is zero, arc_evict_hdr() must have
			 * decided to skip this header, don't increment
			 * evict_count in this case.
			 */
			if (evicted != 0)
				evict_count++;

			/*
			 * If arc_size isn't overflowing, signal any
			 * threads that might happen to be waiting.
			 *
			 * For each header evicted, we wake up a single
			 * thread. If we used cv_broadcast, we could
			 * wake up "too many" threads causing arc_size
			 * to significantly overflow arc_c; since
			 * arc_get_data_impl() doesn't check for overflow
			 * when it's woken up (it doesn't because it's
			 * possible for the ARC to be overflowing while
			 * full of un-evictable buffers, and the
			 * function should proceed in this case).
			 *
			 * If threads are left sleeping, due to not
			 * using cv_broadcast, they will be woken up
			 * just before arc_reclaim_thread() sleeps.
			 */
			mutex_enter(&arc_reclaim_lock);
			if (!arc_is_overflowing())
				cv_signal(&arc_reclaim_waiters_cv);
			mutex_exit(&arc_reclaim_lock);
		} else {
			ARCSTAT_BUMP(arcstat_mutex_miss);
		}
	}

	multilist_sublist_unlock(mls);

	return (bytes_evicted);
}
/*
 * Evict buffers from the given arc state, until we've removed the
 * specified number of bytes. Move the removed buffers to the
 * appropriate evict state.
 *
 * This function makes a "best effort". It skips over any buffers
 * it can't get a hash_lock on, and so, may not catch all candidates.
 * It may also return without evicting as much space as requested.
 *
 * If bytes is specified using the special value ARC_EVICT_ALL, this
 * will evict all available (i.e. unlocked and evictable) buffers from
 * the given arc state; which is used by arc_flush().
 */
static uint64_t
arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
	uint64_t total_evicted = 0;
	multilist_t *ml = state->arcs_list[type];
	int num_sublists;
	arc_buf_hdr_t **markers;

	IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

	num_sublists = multilist_get_num_sublists(ml);

	/*
	 * If we've tried to evict from each sublist, made some
	 * progress, but still have not hit the target number of bytes
	 * to evict, we want to keep trying. The markers allow us to
	 * pick up where we left off for each individual sublist, rather
	 * than starting from the tail each time.
	 */
	markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP);
	for (int i = 0; i < num_sublists; i++) {
		multilist_sublist_t *mls;

		markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP);

		/*
		 * A b_spa of 0 is used to indicate that this header is
		 * a marker. This fact is used in arc_adjust_type() and
		 * arc_evict_state_impl().
		 */
		markers[i]->b_spa = 0;

		mls = multilist_sublist_lock(ml, i);
		multilist_sublist_insert_tail(mls, markers[i]);
		multilist_sublist_unlock(mls);
	}

	/*
	 * While we haven't hit our target number of bytes to evict, or
	 * we're evicting all available buffers.
	 */
	while (total_evicted < bytes || bytes == ARC_EVICT_ALL) {
		int sublist_idx = multilist_get_random_index(ml);
		uint64_t scan_evicted = 0;

		/*
		 * Try to reduce pinned dnodes with a floor of arc_dnode_limit.
		 * Request that 10% of the LRUs be scanned by the superblock
		 * shrinker.
		 */
		if (type == ARC_BUFC_DATA && aggsum_compare(&astat_dnode_size,
		    arc_dnode_limit) > 0) {
			arc_prune_async((aggsum_upper_bound(&astat_dnode_size) -
			    arc_dnode_limit) / sizeof (dnode_t) /
			    zfs_arc_dnode_reduce_percent);
		}

		/*
		 * Start eviction using a randomly selected sublist,
		 * this is to try and evenly balance eviction across all
		 * sublists. Always starting at the same sublist
		 * (e.g. index 0) would cause evictions to favor certain
		 * sublists over others.
		 */
		for (int i = 0; i < num_sublists; i++) {
			uint64_t bytes_remaining;
			uint64_t bytes_evicted;

			if (bytes == ARC_EVICT_ALL)
				bytes_remaining = ARC_EVICT_ALL;
			else if (total_evicted < bytes)
				bytes_remaining = bytes - total_evicted;
			else
				break;

			bytes_evicted = arc_evict_state_impl(ml, sublist_idx,
			    markers[sublist_idx], spa, bytes_remaining);

			scan_evicted += bytes_evicted;
			total_evicted += bytes_evicted;

			/* we've reached the end, wrap to the beginning */
			if (++sublist_idx >= num_sublists)
				sublist_idx = 0;
		}

		/*
		 * If we didn't evict anything during this scan, we have
		 * no reason to believe we'll evict more during another
		 * scan, so break the loop.
		 */
		if (scan_evicted == 0) {
			/* This isn't possible, let's make that obvious */
			ASSERT3S(bytes, !=, 0);

			/*
			 * When bytes is ARC_EVICT_ALL, the only way to
			 * break the loop is when scan_evicted is zero.
			 * In that case, we actually have evicted enough,
			 * so we don't want to increment the kstat.
			 */
			if (bytes != ARC_EVICT_ALL) {
				ASSERT3S(total_evicted, <, bytes);
				ARCSTAT_BUMP(arcstat_evict_not_enough);
			}

			break;
		}
	}

	for (int i = 0; i < num_sublists; i++) {
		multilist_sublist_t *mls = multilist_sublist_lock(ml, i);
		multilist_sublist_remove(mls, markers[i]);
		multilist_sublist_unlock(mls);

		kmem_cache_free(hdr_full_cache, markers[i]);
	}
	kmem_free(markers, sizeof (*markers) * num_sublists);

	return (total_evicted);
}
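
/*
 * Illustrative note, not part of the original code: with 4 sublists and a
 * request of bytes = 1 MB, arc_evict_state() picks a random starting sublist,
 * say 2, and calls arc_evict_state_impl() on sublists 2, 3, 0, 1 in that
 * order, each walk resuming at that sublist's marker. If the pass evicts only
 * 512 KB, the outer while loop runs another pass; if a pass evicts nothing,
 * the loop gives up and (since bytes != ARC_EVICT_ALL) bumps
 * arcstat_evict_not_enough.
 */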
/*
 * Flush all "evictable" data of the given type from the arc state
 * specified. This will not evict any "active" buffers (i.e. referenced).
 *
 * When 'retry' is set to B_FALSE, the function will make a single pass
 * over the state and evict any buffers that it can. Since it doesn't
 * continually retry the eviction, it might end up leaving some buffers
 * in the ARC due to lock misses.
 *
 * When 'retry' is set to B_TRUE, the function will continually retry the
 * eviction until *all* evictable buffers have been removed from the
 * state. As a result, if concurrent insertions into the state are
 * allowed (e.g. if the ARC isn't shutting down), this function might
 * wind up in an infinite loop, continually trying to evict buffers.
 */
static uint64_t
arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
    boolean_t retry)
{
	uint64_t evicted = 0;

	while (refcount_count(&state->arcs_esize[type]) != 0) {
		evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type);

		if (!retry)
			break;
	}

	return (evicted);
}
/*
 * Helper function for arc_prune_async(); it is responsible for safely
 * handling the execution of a registered arc_prune_func_t.
 */
static void
arc_prune_task(void *ptr)
{
	arc_prune_t *ap = (arc_prune_t *)ptr;
	arc_prune_func_t *func = ap->p_pfunc;

	if (func != NULL)
		func(ap->p_adjust, ap->p_private);

	refcount_remove(&ap->p_refcnt, func);
}
/*
 * Notify registered consumers they must drop holds on a portion of the ARC
 * buffers they reference. This provides a mechanism to ensure the ARC can
 * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
 * is analogous to dnlc_reduce_cache() but more generic.
 *
 * This operation is performed asynchronously so it may be safely called
 * in the context of the arc_reclaim_thread(). A reference is taken here
 * for each registered arc_prune_t and the arc_prune_task() is responsible
 * for releasing it once the registered arc_prune_func_t has completed.
 */
static void
arc_prune_async(int64_t adjust)
{
	arc_prune_t *ap;

	mutex_enter(&arc_prune_mtx);
	for (ap = list_head(&arc_prune_list); ap != NULL;
	    ap = list_next(&arc_prune_list, ap)) {

		if (refcount_count(&ap->p_refcnt) >= 2)
			continue;

		refcount_add(&ap->p_refcnt, ap->p_pfunc);
		ap->p_adjust = adjust;
		if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
		    ap, TQ_SLEEP) == TASKQID_INVALID) {
			refcount_remove(&ap->p_refcnt, ap->p_pfunc);
			continue;
		}
		ARCSTAT_BUMP(arcstat_prune);
	}
	mutex_exit(&arc_prune_mtx);
}
/*
 * Evict the specified number of bytes from the state specified,
 * restricting eviction to the spa and type given. This function
 * prevents us from trying to evict more from a state's list than
 * is "evictable", and to skip evicting altogether when passed a
 * negative value for "bytes". In contrast, arc_evict_state() will
 * evict everything it can, when passed a negative value for "bytes".
 */
static uint64_t
arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
	int64_t delta;

	if (bytes > 0 && refcount_count(&state->arcs_esize[type]) > 0) {
		delta = MIN(refcount_count(&state->arcs_esize[type]), bytes);
		return (arc_evict_state(state, spa, delta, type));
	}

	return (0);
}
/*
 * The goal of this function is to evict enough meta data buffers from the
 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly
 * more complicated than it appears because it is common for data buffers
 * to have holds on meta data buffers. In addition, dnode meta data buffers
 * will be held by the dnodes in the block preventing them from being freed.
 * This means we can't simply traverse the ARC and expect to always find
 * enough unheld meta data buffers to release.
 *
 * Therefore, this function has been updated to make alternating passes
 * over the ARC releasing data buffers and then newly unheld meta data
 * buffers. This ensures forward progress is maintained and meta_used
 * will decrease. Normally this is sufficient, but if required the ARC
 * will call the registered prune callbacks causing dentries and inodes to
 * be dropped from the VFS cache. This will make dnode meta data buffers
 * available for reclaim.
 */
static uint64_t
arc_adjust_meta_balanced(uint64_t meta_used)
{
	int64_t delta, prune = 0, adjustmnt;
	uint64_t total_evicted = 0;
	arc_buf_contents_t type = ARC_BUFC_DATA;
	int restarts = MAX(zfs_arc_meta_adjust_restarts, 0);

restart:
	/*
	 * This slightly differs from the way we evict from the mru in
	 * arc_adjust because we don't have a "target" value (i.e. no
	 * "meta" arc_p). As a result, I think we can completely
	 * cannibalize the metadata in the MRU before we evict the
	 * metadata from the MFU. I think we probably need to implement a
	 * "metadata arc_p" value to do this properly.
	 */
	adjustmnt = meta_used - arc_meta_limit;

	if (adjustmnt > 0 && refcount_count(&arc_mru->arcs_esize[type]) > 0) {
		delta = MIN(refcount_count(&arc_mru->arcs_esize[type]),
		    adjustmnt);
		total_evicted += arc_adjust_impl(arc_mru, 0, delta, type);
		adjustmnt -= delta;
	}

	/*
	 * We can't afford to recalculate adjustmnt here. If we do,
	 * new metadata buffers can sneak into the MRU or ANON lists,
	 * thus penalizing the MFU metadata. Although the fudge factor is
	 * small, it has been empirically shown to be significant for
	 * certain workloads (e.g. creating many empty directories). As
	 * such, we use the original calculation for adjustmnt, and
	 * simply decrement the amount of data evicted from the MRU.
	 */

	if (adjustmnt > 0 && refcount_count(&arc_mfu->arcs_esize[type]) > 0) {
		delta = MIN(refcount_count(&arc_mfu->arcs_esize[type]),
		    adjustmnt);
		total_evicted += arc_adjust_impl(arc_mfu, 0, delta, type);
	}

	adjustmnt = meta_used - arc_meta_limit;

	if (adjustmnt > 0 &&
	    refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) {
		delta = MIN(adjustmnt,
		    refcount_count(&arc_mru_ghost->arcs_esize[type]));
		total_evicted += arc_adjust_impl(arc_mru_ghost, 0, delta, type);
		adjustmnt -= delta;
	}

	if (adjustmnt > 0 &&
	    refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) {
		delta = MIN(adjustmnt,
		    refcount_count(&arc_mfu_ghost->arcs_esize[type]));
		total_evicted += arc_adjust_impl(arc_mfu_ghost, 0, delta, type);
	}

	/*
	 * If after attempting to make the requested adjustment to the ARC
	 * the meta limit is still being exceeded then request that the
	 * higher layers drop some cached objects which have holds on ARC
	 * meta buffers. Requests to the upper layers will be made with
	 * increasingly large scan sizes until the ARC is below the limit.
	 */
	if (meta_used > arc_meta_limit) {
		if (type == ARC_BUFC_DATA) {
			type = ARC_BUFC_METADATA;
		} else {
			type = ARC_BUFC_DATA;

			if (zfs_arc_meta_prune) {
				prune += zfs_arc_meta_prune;
				arc_prune_async(prune);
			}
		}

		if (restarts > 0) {
			restarts--;
			goto restart;
		}
	}
	return (total_evicted);
}
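
/*
 * Illustrative note, not from the original source: if meta_used exceeds
 * arc_meta_limit by 100 MB, the first pass tries to take that 100 MB from
 * MRU/MFU data buffers and their ghosts. Each time the pass type flips back
 * from metadata to data, the registered prune callbacks are invoked with a
 * growing scan count, and the whole cycle repeats at most
 * zfs_arc_meta_adjust_restarts times.
 */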
/*
 * Evict metadata buffers from the cache, such that arc_meta_used is
 * capped by the arc_meta_limit tunable.
 */
static uint64_t
arc_adjust_meta_only(uint64_t meta_used)
{
	uint64_t total_evicted = 0;
	int64_t target;

	/*
	 * If we're over the meta limit, we want to evict enough
	 * metadata to get back under the meta limit. We don't want to
	 * evict so much that we drop the MRU below arc_p, though. If
	 * we're over the meta limit more than we're over arc_p, we
	 * evict some from the MRU here, and some from the MFU below.
	 */
	target = MIN((int64_t)(meta_used - arc_meta_limit),
	    (int64_t)(refcount_count(&arc_anon->arcs_size) +
	    refcount_count(&arc_mru->arcs_size) - arc_p));

	total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);

	/*
	 * Similar to the above, we want to evict enough bytes to get us
	 * below the meta limit, but not so much as to drop us below the
	 * space allotted to the MFU (which is defined as arc_c - arc_p).
	 */
	target = MIN((int64_t)(meta_used - arc_meta_limit),
	    (int64_t)(refcount_count(&arc_mfu->arcs_size) -
	    (arc_c - arc_p)));

	total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);

	return (total_evicted);
}
static uint64_t
arc_adjust_meta(uint64_t meta_used)
{
	if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY)
		return (arc_adjust_meta_only(meta_used));
	else
		return (arc_adjust_meta_balanced(meta_used));
}
/*
 * Return the type of the oldest buffer in the given arc state
 *
 * This function will select a random sublist of type ARC_BUFC_DATA and
 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist
 * is compared, and the type which contains the "older" buffer will be
 * returned.
 */
static arc_buf_contents_t
arc_adjust_type(arc_state_t *state)
{
	multilist_t *data_ml = state->arcs_list[ARC_BUFC_DATA];
	multilist_t *meta_ml = state->arcs_list[ARC_BUFC_METADATA];
	int data_idx = multilist_get_random_index(data_ml);
	int meta_idx = multilist_get_random_index(meta_ml);
	multilist_sublist_t *data_mls;
	multilist_sublist_t *meta_mls;
	arc_buf_contents_t type;
	arc_buf_hdr_t *data_hdr;
	arc_buf_hdr_t *meta_hdr;

	/*
	 * We keep the sublist lock until we're finished, to prevent
	 * the headers from being destroyed via arc_evict_state().
	 */
	data_mls = multilist_sublist_lock(data_ml, data_idx);
	meta_mls = multilist_sublist_lock(meta_ml, meta_idx);

	/*
	 * These two loops are to ensure we skip any markers that
	 * might be at the tail of the lists due to arc_evict_state().
	 */

	for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL;
	    data_hdr = multilist_sublist_prev(data_mls, data_hdr)) {
		if (data_hdr->b_spa != 0)
			break;
	}

	for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL;
	    meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) {
		if (meta_hdr->b_spa != 0)
			break;
	}

	if (data_hdr == NULL && meta_hdr == NULL) {
		type = ARC_BUFC_DATA;
	} else if (data_hdr == NULL) {
		ASSERT3P(meta_hdr, !=, NULL);
		type = ARC_BUFC_METADATA;
	} else if (meta_hdr == NULL) {
		ASSERT3P(data_hdr, !=, NULL);
		type = ARC_BUFC_DATA;
	} else {
		ASSERT3P(data_hdr, !=, NULL);
		ASSERT3P(meta_hdr, !=, NULL);

		/* The headers can't be on the sublist without an L1 header */
		ASSERT(HDR_HAS_L1HDR(data_hdr));
		ASSERT(HDR_HAS_L1HDR(meta_hdr));

		if (data_hdr->b_l1hdr.b_arc_access <
		    meta_hdr->b_l1hdr.b_arc_access) {
			type = ARC_BUFC_DATA;
		} else {
			type = ARC_BUFC_METADATA;
		}
	}

	multilist_sublist_unlock(meta_mls);
	multilist_sublist_unlock(data_mls);

	return (type);
}
/*
 * Evict buffers from the cache, such that arc_size is capped by arc_c.
 */
static uint64_t
arc_adjust(void)
{
	uint64_t total_evicted = 0;
	uint64_t bytes;
	int64_t target;
	uint64_t asize = aggsum_value(&arc_size);
	uint64_t ameta = aggsum_value(&arc_meta_used);

	/*
	 * If we're over arc_meta_limit, we want to correct that before
	 * potentially evicting data buffers below.
	 */
	total_evicted += arc_adjust_meta(ameta);

	/*
	 * Adjust MRU size
	 *
	 * If we're over the target cache size, we want to evict enough
	 * from the list to get back to our target size. We don't want
	 * to evict too much from the MRU, such that it drops below
	 * arc_p. So, if we're over our target cache size more than
	 * the MRU is over arc_p, we'll evict enough to get back to
	 * arc_p here, and then evict more from the MFU below.
	 */
	target = MIN((int64_t)(asize - arc_c),
	    (int64_t)(refcount_count(&arc_anon->arcs_size) +
	    refcount_count(&arc_mru->arcs_size) + ameta - arc_p));

	/*
	 * If we're below arc_meta_min, always prefer to evict data.
	 * Otherwise, try to satisfy the requested number of bytes to
	 * evict from the type which contains older buffers; in an
	 * effort to keep newer buffers in the cache regardless of their
	 * type. If we cannot satisfy the number of bytes from this
	 * type, spill over into the next type.
	 */
	if (arc_adjust_type(arc_mru) == ARC_BUFC_METADATA &&
	    ameta > arc_meta_min) {
		bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * metadata, we try to get the rest from data.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
	} else {
		bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * data, we try to get the rest from metadata.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
	}

	/*
	 * Adjust MFU size
	 *
	 * Now that we've tried to evict enough from the MRU to get its
	 * size back to arc_p, if we're still above the target cache
	 * size, we evict the rest from the MFU.
	 */
	target = asize - arc_c;

	if (arc_adjust_type(arc_mfu) == ARC_BUFC_METADATA &&
	    ameta > arc_meta_min) {
		bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * metadata, we try to get the rest from data.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
	} else {
		bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
		total_evicted += bytes;

		/*
		 * If we couldn't evict our target number of bytes from
		 * data, we try to get the rest from metadata.
		 */
		target -= bytes;

		total_evicted +=
		    arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
	}

	/*
	 * Adjust ghost lists
	 *
	 * In addition to the above, the ARC also defines target values
	 * for the ghost lists. The sum of the mru list and mru ghost
	 * list should never exceed the target size of the cache, and
	 * the sum of the mru list, mfu list, mru ghost list, and mfu
	 * ghost list should never exceed twice the target size of the
	 * cache. The following logic enforces these limits on the ghost
	 * caches, and evicts from them as needed.
	 */
	target = refcount_count(&arc_mru->arcs_size) +
	    refcount_count(&arc_mru_ghost->arcs_size) - arc_c;

	bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA);
	total_evicted += bytes;

	target -= bytes;

	total_evicted +=
	    arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA);

	/*
	 * We assume the sum of the mru list and mfu list is less than
	 * or equal to arc_c (we enforced this above), which means we
	 * can use the simpler of the two equations below:
	 *
	 *	mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
	 *		    mru ghost + mfu ghost <= arc_c
	 */
	target = refcount_count(&arc_mru_ghost->arcs_size) +
	    refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;

	bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA);
	total_evicted += bytes;

	target -= bytes;

	total_evicted +=
	    arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA);

	return (total_evicted);
}
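
/*
 * Worked example, illustrative only: suppose arc_c = 8 GB, arc_p = 4 GB,
 * asize = 9 GB, ameta = 1 GB, and anon + mru hold 5 GB. The MRU target above
 * is MIN(9G - 8G, 5G + 1G - 4G) = MIN(1G, 2G) = 1 GB, so at most 1 GB is
 * evicted from the MRU before the MFU pass, which keeps the MRU from being
 * driven below arc_p by a single adjustment.
 */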
void
arc_flush(spa_t *spa, boolean_t retry)
{
	uint64_t guid = 0;

	/*
	 * If retry is B_TRUE, a spa must not be specified since we have
	 * no good way to determine if all of a spa's buffers have been
	 * evicted from an arc state.
	 */
	ASSERT(!retry || spa == 0);

	if (spa != NULL)
		guid = spa_load_guid(spa);

	(void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);

	(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
	(void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);
}
void
arc_shrink(int64_t to_free)
{
	uint64_t asize = aggsum_value(&arc_size);
	uint64_t c = arc_c;

	if (c > to_free && c - to_free > arc_c_min) {
		arc_c = c - to_free;
		atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift));
		if (asize < arc_c)
			arc_c = MAX(asize, arc_c_min);
		if (arc_p > arc_c)
			arc_p = (arc_c >> 1);
		ASSERT(arc_c >= arc_c_min);
		ASSERT((int64_t)arc_p >= 0);
	} else {
		arc_c = arc_c_min;
	}

	if (aggsum_compare(&arc_size, arc_c) > 0)
		(void) arc_adjust();
}
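
/*
 * Illustrative example, not from the original source: with arc_c = 8 GB,
 * arc_c_min = 1 GB, arc_shrink_shift = 7 and to_free = 1 GB, arc_c drops to
 * 7 GB and arc_p is reduced by arc_p >> 7 (roughly 0.8% of arc_p). If
 * arc_size still exceeds the new arc_c, arc_adjust() runs to evict the
 * difference.
 */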
/*
 * Return maximum amount of memory that we could possibly use. Reduced
 * to half of all memory in user space which is primarily used for testing.
 */
uint64_t
arc_all_memory(void)
{
#ifdef _KERNEL
#ifdef CONFIG_HIGHMEM
	return (ptob(totalram_pages - totalhigh_pages));
#else
	return (ptob(totalram_pages));
#endif /* CONFIG_HIGHMEM */
#else
	return (ptob(physmem) / 2);
#endif /* _KERNEL */
}
/*
 * Return the amount of memory that is considered free. In user space
 * which is primarily used for testing we pretend that free memory ranges
 * from 0-20% of all memory.
 */
uint64_t
arc_free_memory(void)
{
#ifdef _KERNEL
#ifdef CONFIG_HIGHMEM
	struct sysinfo si;
	si_meminfo(&si);
	return (ptob(si.freeram - si.freehigh));
#else
	return (ptob(nr_free_pages() +
	    nr_inactive_file_pages() +
	    nr_inactive_anon_pages() +
	    nr_slab_reclaimable_pages()));
#endif /* CONFIG_HIGHMEM */
#else
	return (spa_get_random(arc_all_memory() * 20 / 100));
#endif /* _KERNEL */
}
typedef enum free_memory_reason_t {
	FMR_UNKNOWN,
	FMR_NEEDFREE,
	FMR_LOTSFREE,
	FMR_SWAPFS_MINFREE,
	FMR_PAGES_PP_MAXIMUM,
	FMR_HEAP_ARENA,
	FMR_ZIO_ARENA,
} free_memory_reason_t;

int64_t last_free_memory;
free_memory_reason_t last_free_reason;

#ifdef _KERNEL
/*
 * Additional reserve of pages for pp_reserve.
 */
int64_t arc_pages_pp_reserve = 64;

/*
 * Additional reserve of pages for swapfs.
 */
int64_t arc_swapfs_reserve = 64;
#endif /* _KERNEL */
/*
 * Return the amount of memory that can be consumed before reclaim will be
 * needed. Positive if there is sufficient free memory, negative indicates
 * the amount of memory that needs to be freed up.
 */
int64_t
arc_available_memory(void)
{
	int64_t lowest = INT64_MAX;
	free_memory_reason_t r = FMR_UNKNOWN;
#ifdef _KERNEL
	int64_t n;
	pgcnt_t needfree = btop(arc_need_free);
	pgcnt_t lotsfree = btop(arc_sys_free);
	pgcnt_t desfree = 0;
	pgcnt_t freemem = btop(arc_free_memory());

	if (needfree > 0) {
		n = PAGESIZE * (-needfree);
		if (n < lowest) {
			lowest = n;
			r = FMR_NEEDFREE;
		}
	}

	/*
	 * check that we're out of range of the pageout scanner. It starts to
	 * schedule paging if freemem is less than lotsfree and needfree.
	 * lotsfree is the high-water mark for pageout, and needfree is the
	 * number of needed free pages. We add extra pages here to make sure
	 * the scanner doesn't start up while we're freeing memory.
	 */
	n = PAGESIZE * (freemem - lotsfree - needfree - desfree);
	if (n < lowest) {
		lowest = n;
		r = FMR_LOTSFREE;
	}

	/*
	 * check to make sure that swapfs has enough space so that anon
	 * reservations can still succeed. anon_resvmem() checks that the
	 * availrmem is greater than swapfs_minfree, and the number of reserved
	 * swap pages. We also add a bit of extra here just to prevent
	 * circumstances from getting really dire.
	 */
	n = PAGESIZE * (availrmem - swapfs_minfree - swapfs_reserve -
	    desfree - arc_swapfs_reserve);
	if (n < lowest) {
		lowest = n;
		r = FMR_SWAPFS_MINFREE;
	}

	/*
	 * Check that we have enough availrmem that memory locking (e.g., via
	 * mlock(3C) or memcntl(2)) can still succeed. (pages_pp_maximum
	 * stores the number of pages that cannot be locked; when availrmem
	 * drops below pages_pp_maximum, page locking mechanisms such as
	 * page_pp_lock() will fail.)
	 */
	n = PAGESIZE * (availrmem - pages_pp_maximum -
	    arc_pages_pp_reserve);
	if (n < lowest) {
		lowest = n;
		r = FMR_PAGES_PP_MAXIMUM;
	}

	/*
	 * If we're on a 32-bit platform, it's possible that we'll exhaust the
	 * kernel heap space before we ever run out of available physical
	 * memory. Most checks of the size of the heap_area compare against
	 * tune.t_minarmem, which is the minimum available real memory that we
	 * can have in the system. However, this is generally fixed at 25 pages
	 * which is so low that it's useless. In this comparison, we seek to
	 * calculate the total heap-size, and reclaim if more than 3/4ths of the
	 * heap is allocated. (Or, in the calculation, if less than 1/4th is
	 * free)
	 */
	n = vmem_size(heap_arena, VMEM_FREE) -
	    (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2);
	if (n < lowest) {
		lowest = n;
		r = FMR_HEAP_ARENA;
	}

	/*
	 * If zio data pages are being allocated out of a separate heap segment,
	 * then enforce that the size of available vmem for this arena remains
	 * above about 1/4th (1/(2^arc_zio_arena_free_shift)) free.
	 *
	 * Note that reducing the arc_zio_arena_free_shift keeps more virtual
	 * memory (in the zio_arena) free, which can avoid memory
	 * fragmentation issues.
	 */
	if (zio_arena != NULL) {
		n = (int64_t)vmem_size(zio_arena, VMEM_FREE) -
		    (vmem_size(zio_arena, VMEM_ALLOC) >>
		    arc_zio_arena_free_shift);
		if (n < lowest) {
			lowest = n;
			r = FMR_ZIO_ARENA;
		}
	}
#else /* _KERNEL */
	/* Every 100 calls, free a small amount */
	if (spa_get_random(100) == 0)
		lowest = -1024;
#endif /* _KERNEL */

	last_free_memory = lowest;
	last_free_reason = r;

	return (lowest);
}
/*
 * Determine if the system is under memory pressure and is asking
 * to reclaim memory. A return value of B_TRUE indicates that the system
 * is under memory pressure and that the arc should adjust accordingly.
 */
static boolean_t
arc_reclaim_needed(void)
{
	return (arc_available_memory() < 0);
}
static void
arc_kmem_reap_now(void)
{
	size_t i;
	kmem_cache_t *prev_cache = NULL;
	kmem_cache_t *prev_data_cache = NULL;
	extern kmem_cache_t *zio_buf_cache[];
	extern kmem_cache_t *zio_data_buf_cache[];
	extern kmem_cache_t *range_seg_cache;

	if ((aggsum_compare(&arc_meta_used, arc_meta_limit) >= 0) &&
	    zfs_arc_meta_prune) {
		/*
		 * We are exceeding our meta-data cache limit.
		 * Prune some entries to release holds on meta-data.
		 */
		arc_prune_async(zfs_arc_meta_prune);
	}

	/*
	 * Reclaim unused memory from all kmem caches.
	 */
	for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
		/* reach upper limit of cache size on 32-bit */
		if (zio_buf_cache[i] == NULL)
			break;
		if (zio_buf_cache[i] != prev_cache) {
			prev_cache = zio_buf_cache[i];
			kmem_cache_reap_now(zio_buf_cache[i]);
		}
		if (zio_data_buf_cache[i] != prev_data_cache) {
			prev_data_cache = zio_data_buf_cache[i];
			kmem_cache_reap_now(zio_data_buf_cache[i]);
		}
	}
	kmem_cache_reap_now(buf_cache);
	kmem_cache_reap_now(hdr_full_cache);
	kmem_cache_reap_now(hdr_l2only_cache);
	kmem_cache_reap_now(range_seg_cache);

	if (zio_arena != NULL) {
		/*
		 * Ask the vmem arena to reclaim unused memory from its
		 * quantum caches.
		 */
		vmem_qcache_reap(zio_arena);
	}
}
/*
 * Threads can block in arc_get_data_impl() waiting for this thread to evict
 * enough data and signal them to proceed. When this happens, the threads in
 * arc_get_data_impl() are sleeping while holding the hash lock for their
 * particular arc header. Thus, we must be careful to never sleep on a
 * hash lock in this thread. This is to prevent the following deadlock:
 *
 *  - Thread A sleeps on CV in arc_get_data_impl() holding hash lock "L",
 *    waiting for the reclaim thread to signal it.
 *
 *  - arc_reclaim_thread() tries to acquire hash lock "L" using mutex_enter,
 *    fails, and goes to sleep forever.
 *
 * This possible deadlock is avoided by always acquiring a hash lock
 * using mutex_tryenter() from arc_reclaim_thread().
 */
static void
arc_reclaim_thread(void *unused)
{
	fstrans_cookie_t cookie = spl_fstrans_mark();
	hrtime_t growtime = 0;
	callb_cpr_t cpr;

	CALLB_CPR_INIT(&cpr, &arc_reclaim_lock, callb_generic_cpr, FTAG);

	mutex_enter(&arc_reclaim_lock);
	while (!arc_reclaim_thread_exit) {
		uint64_t evicted = 0;
		uint64_t need_free = arc_need_free;
		arc_tuning_update();

		/*
		 * This is necessary in order for the mdb ::arc dcmd to
		 * show up to date information. Since the ::arc command
		 * does not call the kstat's update function, without
		 * this call, the command may show stale stats for the
		 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
		 * with this change, the data might be up to 1 second
		 * out of date; but that should suffice. The arc_state_t
		 * structures can be queried directly if more accurate
		 * information is needed.
		 */
		if (arc_ksp != NULL)
			arc_ksp->ks_update(arc_ksp, KSTAT_READ);

		mutex_exit(&arc_reclaim_lock);

		/*
		 * We call arc_adjust() before (possibly) calling
		 * arc_kmem_reap_now(), so that we can wake up
		 * arc_get_data_buf() sooner.
		 */
		evicted = arc_adjust();

		int64_t free_memory = arc_available_memory();
		if (free_memory < 0) {
			arc_no_grow = B_TRUE;
			arc_warm = B_TRUE;

			/*
			 * Wait at least zfs_grow_retry (default 5) seconds
			 * before considering growing.
			 */
			growtime = gethrtime() + SEC2NSEC(arc_grow_retry);

			arc_kmem_reap_now();

			/*
			 * If we are still low on memory, shrink the ARC
			 * so that we have arc_shrink_min free space.
			 */
			free_memory = arc_available_memory();

			int64_t to_free =
			    (arc_c >> arc_shrink_shift) - free_memory;
			if (to_free > 0) {
				to_free = MAX(to_free, need_free);
				arc_shrink(to_free);
			}
		} else if (free_memory < arc_c >> arc_no_grow_shift) {
			arc_no_grow = B_TRUE;
		} else if (gethrtime() >= growtime) {
			arc_no_grow = B_FALSE;
		}

		mutex_enter(&arc_reclaim_lock);

		/*
		 * If evicted is zero, we couldn't evict anything via
		 * arc_adjust(). This could be due to hash lock
		 * collisions, but more likely due to the majority of
		 * arc buffers being unevictable. Therefore, even if
		 * arc_size is above arc_c, another pass is unlikely to
		 * be helpful and could potentially cause us to enter an
		 * infinite loop.
		 */
		if (aggsum_compare(&arc_size, arc_c) <= 0 || evicted == 0) {
			/*
			 * We're either no longer overflowing, or we
			 * can't evict anything more, so we should wake
			 * up any threads before we go to sleep and remove
			 * the bytes we were working on from arc_need_free
			 * since nothing more will be done here.
			 */
			cv_broadcast(&arc_reclaim_waiters_cv);
			ARCSTAT_INCR(arcstat_need_free, -need_free);

			/*
			 * Block until signaled, or after one second (we
			 * might need to perform arc_kmem_reap_now()
			 * even if we aren't being signalled)
			 */
			CALLB_CPR_SAFE_BEGIN(&cpr);
			(void) cv_timedwait_sig_hires(&arc_reclaim_thread_cv,
			    &arc_reclaim_lock, SEC2NSEC(1), MSEC2NSEC(1), 0);
			CALLB_CPR_SAFE_END(&cpr, &arc_reclaim_lock);
		}
	}

	arc_reclaim_thread_exit = B_FALSE;
	cv_broadcast(&arc_reclaim_thread_cv);
	CALLB_CPR_EXIT(&cpr);		/* drops arc_reclaim_lock */
	spl_fstrans_unmark(cookie);
	thread_exit();
}
/*
 * Determine the amount of memory eligible for eviction contained in the
 * ARC. All clean data reported by the ghost lists can always be safely
 * evicted. Due to arc_c_min, the same does not hold for all clean data
 * contained by the regular mru and mfu lists.
 *
 * In the case of the regular mru and mfu lists, we need to report as
 * much clean data as possible, such that evicting that same reported
 * data will not bring arc_size below arc_c_min. Thus, in certain
 * circumstances, the total amount of clean data in the mru and mfu
 * lists might not actually be evictable.
 *
 * The following two distinct cases are accounted for:
 *
 * 1. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is greater than or equal to arc_c_min.
 *    (i.e. amount of dirty data >= arc_c_min)
 *
 *    This is the easy case; all clean data contained by the mru and mfu
 *    lists is evictable. Evicting all clean data can only drop arc_size
 *    to the amount of dirty data, which is greater than arc_c_min.
 *
 * 2. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is less than arc_c_min.
 *    (i.e. arc_c_min > amount of dirty data)
 *
 *    2.1. arc_size is greater than or equal to arc_c_min.
 *         (i.e. arc_size >= arc_c_min > amount of dirty data)
 *
 *         In this case, not all clean data from the regular mru and mfu
 *         lists is actually evictable; we must leave enough clean data
 *         to keep arc_size above arc_c_min. Thus, the maximum amount of
 *         evictable data from the two lists combined, is exactly the
 *         difference between arc_size and arc_c_min.
 *
 *    2.2. arc_size is less than arc_c_min
 *         (i.e. arc_c_min > arc_size > amount of dirty data)
 *
 *         In this case, none of the data contained in the mru and mfu
 *         lists is evictable, even if it's clean. Since arc_size is
 *         already below arc_c_min, evicting any more would only
 *         increase this negative difference.
 */
static uint64_t
arc_evictable_memory(void)
{
	int64_t asize = aggsum_value(&arc_size);
	uint64_t arc_clean =
	    refcount_count(&arc_mru->arcs_esize[ARC_BUFC_DATA]) +
	    refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) +
	    refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_DATA]) +
	    refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
	uint64_t arc_dirty = MAX((int64_t)asize - (int64_t)arc_clean, 0);

	/*
	 * Scale reported evictable memory in proportion to page cache, cap
	 * at specified min/max.
	 */
	uint64_t min = (ptob(nr_file_pages()) / 100) * zfs_arc_pc_percent;
	min = MAX(arc_c_min, MIN(arc_c_max, min));

	if (arc_dirty >= min)
		return (arc_clean);

	return (MAX((int64_t)asize - (int64_t)min, 0));
}
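
/*
 * Illustrative example, not from the original source: with 4 GB of page
 * cache and a hypothetical zfs_arc_pc_percent = 50, min is 2 GB, clamped to
 * [arc_c_min, arc_c_max]. If arc_dirty is below that floor, only asize - min
 * is reported evictable, which keeps the shrinker from pushing the ARC below
 * the scaled floor.
 */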
/*
 * If sc->nr_to_scan is zero, the caller is requesting a query of the
 * number of objects which can potentially be freed. If it is nonzero,
 * the request is to free that many objects.
 *
 * Linux kernels >= 3.12 have the count_objects and scan_objects callbacks
 * in struct shrinker and also require the shrinker to return the number
 * of objects freed.
 *
 * Older kernels require the shrinker to return the number of freeable
 * objects following the freeing of nr_to_free.
 */
static spl_shrinker_t
__arc_shrinker_func(struct shrinker *shrink, struct shrink_control *sc)
{
	int64_t pages;

	/* The arc is considered warm once reclaim has occurred */
	if (unlikely(arc_warm == B_FALSE))
		arc_warm = B_TRUE;

	/* Return the potential number of reclaimable pages */
	pages = btop((int64_t)arc_evictable_memory());
	if (sc->nr_to_scan == 0)
		return (pages);

	/* Not allowed to perform filesystem reclaim */
	if (!(sc->gfp_mask & __GFP_FS))
		return (SHRINK_STOP);

	/* Reclaim in progress */
	if (mutex_tryenter(&arc_reclaim_lock) == 0) {
		ARCSTAT_INCR(arcstat_need_free, ptob(sc->nr_to_scan));
		return (0);
	}

	mutex_exit(&arc_reclaim_lock);

	/*
	 * Evict the requested number of pages by shrinking arc_c the
	 * requested amount.
	 */
	if (pages > 0) {
		arc_shrink(ptob(sc->nr_to_scan));
		if (current_is_kswapd())
			arc_kmem_reap_now();
#ifdef HAVE_SPLIT_SHRINKER_CALLBACK
		pages = MAX((int64_t)pages -
		    (int64_t)btop(arc_evictable_memory()), 0);
#else
		pages = btop(arc_evictable_memory());
#endif
		/*
		 * We've shrunk what we can, wake up threads.
		 */
		cv_broadcast(&arc_reclaim_waiters_cv);
	} else {
		pages = SHRINK_STOP;
	}

	/*
	 * When direct reclaim is observed it usually indicates a rapid
	 * increase in memory pressure. This occurs because the kswapd
	 * threads were unable to asynchronously keep enough free memory
	 * available. In this case set arc_no_grow to briefly pause arc
	 * growth to avoid compounding the memory pressure.
	 */
	if (current_is_kswapd()) {
		ARCSTAT_BUMP(arcstat_memory_indirect_count);
	} else {
		arc_no_grow = B_TRUE;
		arc_kmem_reap_now();
		ARCSTAT_BUMP(arcstat_memory_direct_count);
	}

	return (pages);
}
SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func);

SPL_SHRINKER_DECLARE(arc_shrinker, arc_shrinker_func, DEFAULT_SEEKS);
#endif /* _KERNEL */
/*
 * Adapt arc info given the number of bytes we are trying to add and
 * the state that we are coming from. This function is only called
 * when we are adding new content to the cache.
 */
static void
arc_adapt(int bytes, arc_state_t *state)
{
	int mult;
	uint64_t arc_p_min = (arc_c >> arc_p_min_shift);
	int64_t mrug_size = refcount_count(&arc_mru_ghost->arcs_size);
	int64_t mfug_size = refcount_count(&arc_mfu_ghost->arcs_size);

	if (state == arc_l2c_only)
		return;

	/*
	 * Adapt the target size of the MRU list:
	 *	- if we just hit in the MRU ghost list, then increase
	 *	  the target size of the MRU list.
	 *	- if we just hit in the MFU ghost list, then increase
	 *	  the target size of the MFU list by decreasing the
	 *	  target size of the MRU list.
	 */
	if (state == arc_mru_ghost) {
		mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size);
		if (!zfs_arc_p_dampener_disable)
			mult = MIN(mult, 10); /* avoid wild arc_p adjustment */

		arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult);
	} else if (state == arc_mfu_ghost) {
		uint64_t delta;

		mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size);
		if (!zfs_arc_p_dampener_disable)
			mult = MIN(mult, 10);

		delta = MIN(bytes * mult, arc_p);
		arc_p = MAX(arc_p_min, arc_p - delta);
	}
	ASSERT((int64_t)arc_p >= 0);

	if (arc_reclaim_needed()) {
		cv_signal(&arc_reclaim_thread_cv);
		return;
	}

	if (arc_no_grow)
		return;

	if (arc_c >= arc_c_max)
		return;

	/*
	 * If we're within (2 * maxblocksize) bytes of the target
	 * cache size, increment the target cache size
	 */
	ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT);
	if (aggsum_compare(&arc_size, arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) >=
	    0) {
		atomic_add_64(&arc_c, (int64_t)bytes);
		if (arc_c > arc_c_max)
			arc_c = arc_c_max;
		else if (state == arc_anon)
			atomic_add_64(&arc_p, (int64_t)bytes);
		if (arc_p > arc_c)
			arc_p = arc_c;
	}
	ASSERT((int64_t)arc_p >= 0);
}
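
/*
 * Illustrative example, not in the original source: on an MRU ghost hit with
 * mrug_size = 1 GB, mfug_size = 4 GB and bytes = 128 KB, mult becomes
 * MIN(4, 10) = 4, so arc_p grows by 512 KB (capped at arc_c - arc_p_min).
 * An MFU ghost hit instead shrinks arc_p by a similarly scaled delta, bounded
 * below by arc_p_min.
 */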
/*
 * Check if arc_size has grown past our upper threshold, determined by
 * zfs_arc_overflow_shift.
 */
static boolean_t
arc_is_overflowing(void)
{
	/* Always allow at least one block of overflow */
	uint64_t overflow = MAX(SPA_MAXBLOCKSIZE,
	    arc_c >> zfs_arc_overflow_shift);

	/*
	 * We just compare the lower bound here for performance reasons. Our
	 * primary goals are to make sure that the arc never grows without
	 * bound, and that it can reach its maximum size. This check
	 * accomplishes both goals. The maximum amount we could run over by is
	 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block
	 * in the ARC. In practice, that's in the tens of MB, which is low
	 * enough to be safe.
	 */
	return (aggsum_lower_bound(&arc_size) >= arc_c + overflow);
}
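
/*
 * Illustrative example, not part of the original code: with arc_c = 4 GB and
 * zfs_arc_overflow_shift = 8, the allowed overflow is
 * MAX(SPA_MAXBLOCKSIZE, 4 GB >> 8) = MAX(16 MB, 16 MB) = 16 MB, so the ARC is
 * considered overflowing once the lower bound of arc_size reaches
 * 4 GB + 16 MB.
 */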
static abd_t *
arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_get_data_impl(hdr, size, tag);
	if (type == ARC_BUFC_METADATA) {
		return (abd_alloc(size, B_TRUE));
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		return (abd_alloc(size, B_FALSE));
	}
}
static void *
arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_get_data_impl(hdr, size, tag);
	if (type == ARC_BUFC_METADATA) {
		return (zio_buf_alloc(size));
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		return (zio_data_buf_alloc(size));
	}
}
/*
 * Allocate a block and return it to the caller. If we are hitting the
 * hard limit for the cache size, we must sleep, waiting for the eviction
 * thread to catch up. If we're past the target size but below the hard
 * limit, we'll only signal the reclaim thread and continue on.
 */
static void
arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_adapt(size, state);

	/*
	 * If arc_size is currently overflowing, and has grown past our
	 * upper limit, we must be adding data faster than the evict
	 * thread can evict. Thus, to ensure we don't compound the
	 * problem by adding more data and forcing arc_size to grow even
	 * further past its target size, we halt and wait for the
	 * eviction thread to catch up.
	 *
	 * It's also possible that the reclaim thread is unable to evict
	 * enough buffers to get arc_size below the overflow limit (e.g.
	 * due to buffers being un-evictable, or hash lock collisions).
	 * In this case, we want to proceed regardless if we're
	 * overflowing; thus we don't use a while loop here.
	 */
	if (arc_is_overflowing()) {
		mutex_enter(&arc_reclaim_lock);

		/*
		 * Now that we've acquired the lock, we may no longer be
		 * over the overflow limit, lets check.
		 *
		 * We're ignoring the case of spurious wake ups. If that
		 * were to happen, it'd let this thread consume an ARC
		 * buffer before it should have (i.e. before we're under
		 * the overflow limit and were signalled by the reclaim
		 * thread). As long as that is a rare occurrence, it
		 * shouldn't cause any harm.
		 */
		if (arc_is_overflowing()) {
			cv_signal(&arc_reclaim_thread_cv);
			cv_wait(&arc_reclaim_waiters_cv, &arc_reclaim_lock);
		}

		mutex_exit(&arc_reclaim_lock);
	}

	VERIFY3U(hdr->b_type, ==, type);
	if (type == ARC_BUFC_METADATA) {
		arc_space_consume(size, ARC_SPACE_META);
	} else {
		arc_space_consume(size, ARC_SPACE_DATA);
	}

	/*
	 * Update the state size. Note that ghost states have a
	 * "ghost size" and so don't need to be updated.
	 */
	if (!GHOST_STATE(state)) {

		(void) refcount_add_many(&state->arcs_size, size, tag);

		/*
		 * If this is reached via arc_read, the link is
		 * protected by the hash lock. If reached via
		 * arc_buf_alloc, the header should not be accessed by
		 * any other thread. And, if reached via arc_read_done,
		 * the hash lock will protect it if it's found in the
		 * hash table; otherwise no other thread should be
		 * trying to [add|remove]_reference it.
		 */
		if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
			ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
			(void) refcount_add_many(&state->arcs_esize[type],
			    size, tag);
		}

		/*
		 * If we are growing the cache, and we are adding anonymous
		 * data, and we have outgrown arc_p, update arc_p
		 */
		if (aggsum_compare(&arc_size, arc_c) < 0 &&
		    hdr->b_l1hdr.b_state == arc_anon &&
		    (refcount_count(&arc_anon->arcs_size) +
		    refcount_count(&arc_mru->arcs_size) > arc_p))
			arc_p = MIN(arc_c, arc_p + size);
	}
}
static void
arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag)
{
	arc_free_data_impl(hdr, size, tag);
	abd_free(abd);
}

static void
arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_free_data_impl(hdr, size, tag);
	if (type == ARC_BUFC_METADATA) {
		zio_buf_free(buf, size);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		zio_data_buf_free(buf, size);
	}
}

/*
 * Free the arc data buffer.
 */
static void
arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);

	/* protected by hash lock, if in the hash table */
	if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
		ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT(state != arc_anon && state != arc_l2c_only);

		(void) refcount_remove_many(&state->arcs_esize[type],
		    size, tag);
	}
	(void) refcount_remove_many(&state->arcs_size, size, tag);

	VERIFY3U(hdr->b_type, ==, type);
	if (type == ARC_BUFC_METADATA) {
		arc_space_return(size, ARC_SPACE_META);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		arc_space_return(size, ARC_SPACE_DATA);
	}
}
/*
 * This routine is called whenever a buffer is accessed.
 * NOTE: the hash lock is dropped in this function.
 */
static void
arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
	clock_t now;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT(HDR_HAS_L1HDR(hdr));

	if (hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * This buffer is not in the cache, and does not
		 * appear in our "ghost" list. Add the new buffer
		 * to the MRU state.
		 */
		ASSERT0(hdr->b_l1hdr.b_arc_access);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mru, hdr, hash_lock);

	} else if (hdr->b_l1hdr.b_state == arc_mru) {
		now = ddi_get_lbolt();

		/*
		 * If this buffer is here because of a prefetch, then either:
		 * - clear the flag if this is a "referencing" read
		 *   (any subsequent access will bump this into the MFU state).
		 * - move the buffer to the head of the list if this is
		 *   another prefetch (to make it less likely to be evicted).
		 */
		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			if (refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
				/* link protected by hash lock */
				ASSERT(multilist_link_active(
				    &hdr->b_l1hdr.b_arc_node));
			} else {
				arc_hdr_clear_flags(hdr,
				    ARC_FLAG_PREFETCH |
				    ARC_FLAG_PRESCIENT_PREFETCH);
				atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
				ARCSTAT_BUMP(arcstat_mru_hits);
			}
			hdr->b_l1hdr.b_arc_access = now;
			return;
		}

		/*
		 * This buffer has been "accessed" only once so far,
		 * but it is still in the cache. Move it to the MFU
		 * state.
		 */
		if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access +
		    ARC_MINTIME)) {
			/*
			 * More than 125ms have passed since we
			 * instantiated this buffer. Move it to the
			 * most frequently used state.
			 */
			hdr->b_l1hdr.b_arc_access = now;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
			arc_change_state(arc_mfu, hdr, hash_lock);
		}
		atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
		ARCSTAT_BUMP(arcstat_mru_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mru_ghost) {
		arc_state_t	*new_state;
		/*
		 * This buffer has been "accessed" recently, but
		 * was evicted from the cache. Move it to the
		 * MFU state.
		 */
		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			new_state = arc_mru;
			if (refcount_count(&hdr->b_l1hdr.b_refcnt) > 0) {
				arc_hdr_clear_flags(hdr,
				    ARC_FLAG_PREFETCH |
				    ARC_FLAG_PRESCIENT_PREFETCH);
			}
			DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		} else {
			new_state = arc_mfu;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits);
		ARCSTAT_BUMP(arcstat_mru_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mfu) {
		/*
		 * This buffer has been accessed more than once and is
		 * still in the cache. Keep it in the MFU state.
		 *
		 * NOTE: an add_reference() that occurred when we did
		 * the arc_read() will have kicked this off the list.
		 * If it was a prefetch, we will explicitly move it to
		 * the head of the list now.
		 */
		atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits);
		ARCSTAT_BUMP(arcstat_mfu_hits);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
	} else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) {
		arc_state_t	*new_state = arc_mfu;
		/*
		 * This buffer has been accessed more than once but has
		 * been evicted from the cache. Move it back to the
		 * MFU state.
		 */
		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			/*
			 * This is a prefetch access...
			 * move this block back to the MRU state.
			 */
			new_state = arc_mru;
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits);
		ARCSTAT_BUMP(arcstat_mfu_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_l2c_only) {
		/*
		 * This buffer is on the 2nd Level ARC.
		 */
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mfu, hdr, hash_lock);
	} else {
		cmn_err(CE_PANIC, "invalid arc state 0x%p",
		    hdr->b_l1hdr.b_state);
	}
}
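
/*
 * Informal summary of the transitions implemented above, derived from the
 * code itself rather than any external table: anon -> mru on first insert;
 * mru -> mfu once a second access arrives more than ARC_MINTIME (125ms)
 * after the first; mru_ghost -> mfu (or back to mru for prefetches) on a
 * hit after eviction; mfu stays mfu; mfu_ghost -> mfu (or mru for
 * prefetches); and l2c_only -> mfu when an L2-only header is read back in
 * and regains an L1 portion.
 */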
/*
 * This routine is called by dbuf_hold() to update the arc_access() state
 * which otherwise would be skipped for entries in the dbuf cache.
 */
void
arc_buf_access(arc_buf_t *buf)
{
	mutex_enter(&buf->b_evict_lock);
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * Avoid taking the hash_lock when possible as an optimization.
	 * The header must be checked again under the hash_lock in order
	 * to handle the case where it is concurrently being released.
	 */
	if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
		mutex_exit(&buf->b_evict_lock);
		return;
	}

	kmutex_t *hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);

	if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
		mutex_exit(hash_lock);
		mutex_exit(&buf->b_evict_lock);
		ARCSTAT_BUMP(arcstat_access_skip);
		return;
	}

	mutex_exit(&buf->b_evict_lock);

	ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
	    hdr->b_l1hdr.b_state == arc_mfu);

	DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
	arc_access(hdr, hash_lock);
	mutex_exit(hash_lock);

	ARCSTAT_BUMP(arcstat_hits);
	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr) && !HDR_PRESCIENT_PREFETCH(hdr),
	    demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
}
/* a generic arc_read_done_func_t which you can use */
void
arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
    arc_buf_t *buf, void *arg)
{
	if (buf == NULL)
		return;

	bcopy(buf->b_data, arg, arc_buf_size(buf));
	arc_buf_destroy(buf, arg);
}

/* a generic arc_read_done_func_t */
void
arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
    arc_buf_t *buf, void *arg)
{
	arc_buf_t **bufp = arg;

	if (buf == NULL) {
		ASSERT(zio == NULL || zio->io_error != 0);
		*bufp = NULL;
	} else {
		ASSERT(zio == NULL || zio->io_error == 0);
		*bufp = buf;
		ASSERT(buf->b_data != NULL);
	}
}
static void
arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp)
{
	if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
		ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0);
		ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF);
	} else {
		if (HDR_COMPRESSION_ENABLED(hdr)) {
			ASSERT3U(arc_hdr_get_compress(hdr), ==,
			    BP_GET_COMPRESS(bp));
		}
		ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
		ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp));
		ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp));
	}
}
static void
arc_read_done(zio_t *zio)
{
	blkptr_t	*bp = zio->io_bp;
	arc_buf_hdr_t	*hdr = zio->io_private;
	kmutex_t	*hash_lock = NULL;
	arc_callback_t	*callback_list;
	arc_callback_t	*acb;
	boolean_t	freeable = B_FALSE;

	/*
	 * The hdr was inserted into hash-table and removed from lists
	 * prior to starting I/O. We should find this header, since
	 * it's in the hash table, and it should be legit since it's
	 * not possible to evict it during the I/O. The only possible
	 * reason for it not to be found is if we were freed during the
	 * read.
	 */
	if (HDR_IN_HASH_TABLE(hdr)) {
		arc_buf_hdr_t *found;

		ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp));
		ASSERT3U(hdr->b_dva.dva_word[0], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[0]);
		ASSERT3U(hdr->b_dva.dva_word[1], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[1]);

		found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock);

		ASSERT((found == hdr &&
		    DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) ||
		    (found == hdr && HDR_L2_READING(hdr)));
		ASSERT3P(hash_lock, !=, NULL);
	}

	if (BP_IS_PROTECTED(bp)) {
		hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
		hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
		zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
		    hdr->b_crypt_hdr.b_iv);

		if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) {
			void *tmpbuf;

			tmpbuf = abd_borrow_buf_copy(zio->io_abd,
			    sizeof (zil_chain_t));
			zio_crypt_decode_mac_zil(tmpbuf,
			    hdr->b_crypt_hdr.b_mac);
			abd_return_buf(zio->io_abd, tmpbuf,
			    sizeof (zil_chain_t));
		} else {
			zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac);
		}
	}

	if (zio->io_error == 0) {
		/* byteswap if necessary */
		if (BP_SHOULD_BYTESWAP(zio->io_bp)) {
			if (BP_GET_LEVEL(zio->io_bp) > 0) {
				hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
			} else {
				hdr->b_l1hdr.b_byteswap =
				    DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp));
			}
		} else {
			hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
		}
	}

	arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED);
	if (l2arc_noprefetch && HDR_PREFETCH(hdr))
		arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE);

	callback_list = hdr->b_l1hdr.b_acb;
	ASSERT3P(callback_list, !=, NULL);

	if (hash_lock && zio->io_error == 0 &&
	    hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * Only call arc_access on anonymous buffers. This is because
		 * if we've issued an I/O for an evicted buffer, we've already
		 * called arc_access (to prevent any simultaneous readers from
		 * getting confused).
		 */
		arc_access(hdr, hash_lock);
	}

	/*
	 * If a read request has a callback (i.e. acb_done is not NULL), then we
	 * make a buf containing the data according to the parameters which were
	 * passed in. The implementation of arc_buf_alloc_impl() ensures that we
	 * aren't needlessly decompressing the data multiple times.
	 */
	int callback_cnt = 0;
	for (acb = callback_list; acb != NULL; acb = acb->acb_next) {
		if (!acb->acb_done)
			continue;

		callback_cnt++;

		if (zio->io_error != 0)
			continue;

		int error = arc_buf_alloc_impl(hdr, zio->io_spa,
		    &acb->acb_zb, acb->acb_private, acb->acb_encrypted,
		    acb->acb_compressed, acb->acb_noauth, B_TRUE,
		    &acb->acb_buf);

		/*
		 * Assert non-speculative zios didn't fail because an
		 * encryption key wasn't loaded
		 */
		ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) ||
		    error != EACCES);

		/*
		 * If we failed to decrypt, report an error now (as the zio
		 * layer would have done if it had done the transforms).
		 */
		if (error == ECKSUM) {
			ASSERT(BP_IS_PROTECTED(bp));
			error = SET_ERROR(EIO);
			if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) {
				spa_log_error(zio->io_spa, &acb->acb_zb);
				zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION,
				    zio->io_spa, NULL, &acb->acb_zb, zio, 0, 0);
			}
		}

		if (error != 0) {
			/*
			 * Decompression or decryption failed. Set
			 * io_error so that when we call acb_done
			 * (below), we will indicate that the read
			 * failed. Note that in the unusual case
			 * where one callback is compressed and another
			 * uncompressed, we will mark all of them
			 * as failed, even though the uncompressed
			 * one can't actually fail. In this case,
			 * the hdr will not be anonymous, because
			 * if there are multiple callbacks, it's
			 * because multiple threads found the same
			 * arc buf in the hash table.
			 */
			zio->io_error = error;
		}
	}

	/*
	 * If there are multiple callbacks, we must have the hash lock,
	 * because the only way for multiple threads to find this hdr is
	 * in the hash table. This ensures that if there are multiple
	 * callbacks, the hdr is not anonymous. If it were anonymous,
	 * we couldn't use arc_buf_destroy() in the error case below.
	 */
	ASSERT(callback_cnt < 2 || hash_lock != NULL);

	hdr->b_l1hdr.b_acb = NULL;
	arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
	if (callback_cnt == 0)
		ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));

	ASSERT(refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
	    callback_list != NULL);

	if (zio->io_error == 0) {
		arc_hdr_verify(hdr, zio->io_bp);
	} else {
		arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
		if (hdr->b_l1hdr.b_state != arc_anon)
			arc_change_state(arc_anon, hdr, hash_lock);
		if (HDR_IN_HASH_TABLE(hdr))
			buf_hash_remove(hdr);
		freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/*
	 * Broadcast before we drop the hash_lock to avoid the possibility
	 * that the hdr (and hence the cv) might be freed before we get to
	 * the cv_broadcast().
	 */
	cv_broadcast(&hdr->b_l1hdr.b_cv);

	if (hash_lock != NULL) {
		mutex_exit(hash_lock);
	} else {
		/*
		 * This block was freed while we waited for the read to
		 * complete. It has been removed from the hash table and
		 * moved to the anonymous state (so that it won't show up
		 * in the cache).
		 */
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
		freeable = refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/* execute each callback and free its structure */
	while ((acb = callback_list) != NULL) {
		if (acb->acb_done != NULL) {
			if (zio->io_error != 0 && acb->acb_buf != NULL) {
				/*
				 * If arc_buf_alloc_impl() fails during
				 * decompression, the buf will still be
				 * allocated, and needs to be freed here.
				 */
				arc_buf_destroy(acb->acb_buf,
				    acb->acb_private);
				acb->acb_buf = NULL;
			}
			acb->acb_done(zio, &zio->io_bookmark, zio->io_bp,
			    acb->acb_buf, acb->acb_private);
		}

		if (acb->acb_zio_dummy != NULL) {
			acb->acb_zio_dummy->io_error = zio->io_error;
			zio_nowait(acb->acb_zio_dummy);
		}

		callback_list = acb->acb_next;
		kmem_free(acb, sizeof (arc_callback_t));
	}

	if (freeable)
		arc_hdr_destroy(hdr);
}
6040 * "Read" the block at the specified DVA (in bp) via the
6041 * cache. If the block is found in the cache, invoke the provided
6042 * callback immediately and return. Note that the `zio' parameter
6043 * in the callback will be NULL in this case, since no IO was
6044 * required. If the block is not in the cache pass the read request
6045 * on to the spa with a substitute callback function, so that the
6046 * requested block will be added to the cache.
6048 * If a read request arrives for a block that has a read in-progress,
6049 * either wait for the in-progress read to complete (and return the
6050 * results); or, if this is a read with a "done" func, add a record
6051 * to the read to invoke the "done" func when the read completes,
6052 * and return; or just return.
6054 * arc_read_done() will invoke all the requested "done" functions
6055 * for readers of this block.
6058 arc_read(zio_t
*pio
, spa_t
*spa
, const blkptr_t
*bp
,
6059 arc_read_done_func_t
*done
, void *private, zio_priority_t priority
,
6060 int zio_flags
, arc_flags_t
*arc_flags
, const zbookmark_phys_t
*zb
)
6062 arc_buf_hdr_t
*hdr
= NULL
;
6063 kmutex_t
*hash_lock
= NULL
;
6065 uint64_t guid
= spa_load_guid(spa
);
6066 boolean_t compressed_read
= (zio_flags
& ZIO_FLAG_RAW_COMPRESS
) != 0;
6067 boolean_t encrypted_read
= BP_IS_ENCRYPTED(bp
) &&
6068 (zio_flags
& ZIO_FLAG_RAW_ENCRYPT
) != 0;
6069 boolean_t noauth_read
= BP_IS_AUTHENTICATED(bp
) &&
6070 (zio_flags
& ZIO_FLAG_RAW_ENCRYPT
) != 0;
6073 ASSERT(!BP_IS_EMBEDDED(bp
) ||
6074 BPE_GET_ETYPE(bp
) == BP_EMBEDDED_TYPE_DATA
);
6077 if (!BP_IS_EMBEDDED(bp
)) {
6079 * Embedded BP's have no DVA and require no I/O to "read".
6080 * Create an anonymous arc buf to back it.
6082 hdr
= buf_hash_find(guid
, bp
, &hash_lock
);
6086 * Determine if we have an L1 cache hit or a cache miss. For simplicity
6087 * we maintain encrypted data seperately from compressed / uncompressed
6088 * data. If the user is requesting raw encrypted data and we don't have
6089 * that in the header we will read from disk to guarantee that we can
6090 * get it even if the encryption keys aren't loaded.
6092 if (hdr
!= NULL
&& HDR_HAS_L1HDR(hdr
) && (HDR_HAS_RABD(hdr
) ||
6093 (hdr
->b_l1hdr
.b_pabd
!= NULL
&& !encrypted_read
))) {
6094 arc_buf_t
*buf
= NULL
;
6095 *arc_flags
|= ARC_FLAG_CACHED
;
6097 if (HDR_IO_IN_PROGRESS(hdr
)) {
6098 zio_t
*head_zio
= hdr
->b_l1hdr
.b_acb
->acb_zio_head
;
6100 ASSERT3P(head_zio
, !=, NULL
);
6101 if ((hdr
->b_flags
& ARC_FLAG_PRIO_ASYNC_READ
) &&
6102 priority
== ZIO_PRIORITY_SYNC_READ
) {
6104 * This is a sync read that needs to wait for
6105 * an in-flight async read. Request that the
6106 * zio have its priority upgraded.
6108 zio_change_priority(head_zio
, priority
);
6109 DTRACE_PROBE1(arc__async__upgrade__sync
,
6110 arc_buf_hdr_t
*, hdr
);
6111 ARCSTAT_BUMP(arcstat_async_upgrade_sync
);
6113 if (hdr
->b_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
) {
6114 arc_hdr_clear_flags(hdr
,
6115 ARC_FLAG_PREDICTIVE_PREFETCH
);
6118 if (*arc_flags
& ARC_FLAG_WAIT
) {
6119 cv_wait(&hdr
->b_l1hdr
.b_cv
, hash_lock
);
6120 mutex_exit(hash_lock
);
6123 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
6126 arc_callback_t
*acb
= NULL
;
6128 acb
= kmem_zalloc(sizeof (arc_callback_t
),
6130 acb
->acb_done
= done
;
6131 acb
->acb_private
= private;
6132 acb
->acb_compressed
= compressed_read
;
6133 acb
->acb_encrypted
= encrypted_read
;
6134 acb
->acb_noauth
= noauth_read
;
6137 acb
->acb_zio_dummy
= zio_null(pio
,
6138 spa
, NULL
, NULL
, NULL
, zio_flags
);
6140 ASSERT3P(acb
->acb_done
, !=, NULL
);
6141 acb
->acb_zio_head
= head_zio
;
6142 acb
->acb_next
= hdr
->b_l1hdr
.b_acb
;
6143 hdr
->b_l1hdr
.b_acb
= acb
;
6144 mutex_exit(hash_lock
);
6147 mutex_exit(hash_lock
);
6151 ASSERT(hdr
->b_l1hdr
.b_state
== arc_mru
||
6152 hdr
->b_l1hdr
.b_state
== arc_mfu
);
6155 if (hdr
->b_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
) {
6157 * This is a demand read which does not have to
6158 * wait for i/o because we did a predictive
6159 * prefetch i/o for it, which has completed.
6162 arc__demand__hit__predictive__prefetch
,
6163 arc_buf_hdr_t
*, hdr
);
6165 arcstat_demand_hit_predictive_prefetch
);
6166 arc_hdr_clear_flags(hdr
,
6167 ARC_FLAG_PREDICTIVE_PREFETCH
);
6170 if (hdr
->b_flags
& ARC_FLAG_PRESCIENT_PREFETCH
) {
6172 arcstat_demand_hit_prescient_prefetch
);
6173 arc_hdr_clear_flags(hdr
,
6174 ARC_FLAG_PRESCIENT_PREFETCH
);
6177 ASSERT(!BP_IS_EMBEDDED(bp
) || !BP_IS_HOLE(bp
));
6179 /* Get a buf with the desired data in it. */
6180 rc
= arc_buf_alloc_impl(hdr
, spa
, zb
, private,
6181 encrypted_read
, compressed_read
, noauth_read
,
6185 * Convert authentication and decryption errors
6186 * to EIO (and generate an ereport if needed)
6187 * before leaving the ARC.
6189 rc
= SET_ERROR(EIO
);
6190 if ((zio_flags
& ZIO_FLAG_SPECULATIVE
) == 0) {
6191 spa_log_error(spa
, zb
);
6193 FM_EREPORT_ZFS_AUTHENTICATION
,
6194 spa
, NULL
, zb
, NULL
, 0, 0);
6198 (void) remove_reference(hdr
, hash_lock
,
6200 arc_buf_destroy_impl(buf
);
6204 /* assert any errors weren't due to unloaded keys */
6205 ASSERT((zio_flags
& ZIO_FLAG_SPECULATIVE
) ||
6207 } else if (*arc_flags
& ARC_FLAG_PREFETCH
&&
6208 refcount_count(&hdr
->b_l1hdr
.b_refcnt
) == 0) {
6209 arc_hdr_set_flags(hdr
, ARC_FLAG_PREFETCH
);
6211 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
6212 arc_access(hdr
, hash_lock
);
6213 if (*arc_flags
& ARC_FLAG_PRESCIENT_PREFETCH
)
6214 arc_hdr_set_flags(hdr
, ARC_FLAG_PRESCIENT_PREFETCH
);
6215 if (*arc_flags
& ARC_FLAG_L2CACHE
)
6216 arc_hdr_set_flags(hdr
, ARC_FLAG_L2CACHE
);
6217 mutex_exit(hash_lock
);
6218 ARCSTAT_BUMP(arcstat_hits
);
6219 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
6220 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
6221 data
, metadata
, hits
);
6224 done(NULL
, zb
, bp
, buf
, private);
6226 uint64_t lsize
= BP_GET_LSIZE(bp
);
6227 uint64_t psize
= BP_GET_PSIZE(bp
);
6228 arc_callback_t
*acb
;
6231 boolean_t devw
= B_FALSE
;
6236 * Gracefully handle a damaged logical block size as a
6239 if (lsize
> spa_maxblocksize(spa
)) {
6240 rc
= SET_ERROR(ECKSUM
);
6245 /* this block is not in the cache */
6246 arc_buf_hdr_t
*exists
= NULL
;
6247 arc_buf_contents_t type
= BP_GET_BUFC_TYPE(bp
);
6248 hdr
= arc_hdr_alloc(spa_load_guid(spa
), psize
, lsize
,
6249 BP_IS_PROTECTED(bp
), BP_GET_COMPRESS(bp
), type
,
6252 if (!BP_IS_EMBEDDED(bp
)) {
6253 hdr
->b_dva
= *BP_IDENTITY(bp
);
6254 hdr
->b_birth
= BP_PHYSICAL_BIRTH(bp
);
6255 exists
= buf_hash_insert(hdr
, &hash_lock
);
6257 if (exists
!= NULL
) {
6258 /* somebody beat us to the hash insert */
6259 mutex_exit(hash_lock
);
6260 buf_discard_identity(hdr
);
6261 arc_hdr_destroy(hdr
);
6262 goto top
; /* restart the IO request */
6266 * This block is in the ghost cache or encrypted data
6267 * was requested and we didn't have it. If it was
6268 * L2-only (and thus didn't have an L1 hdr),
6269 * we realloc the header to add an L1 hdr.
6271 if (!HDR_HAS_L1HDR(hdr
)) {
6272 hdr
= arc_hdr_realloc(hdr
, hdr_l2only_cache
,
6276 if (GHOST_STATE(hdr
->b_l1hdr
.b_state
)) {
6277 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, ==, NULL
);
6278 ASSERT(!HDR_HAS_RABD(hdr
));
6279 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
6280 ASSERT0(refcount_count(&hdr
->b_l1hdr
.b_refcnt
));
6281 ASSERT3P(hdr
->b_l1hdr
.b_buf
, ==, NULL
);
6282 ASSERT3P(hdr
->b_l1hdr
.b_freeze_cksum
, ==, NULL
);
6283 } else if (HDR_IO_IN_PROGRESS(hdr
)) {
6285 * If this header already had an IO in progress
6286 * and we are performing another IO to fetch
6287 * encrypted data we must wait until the first
6288 * IO completes so as not to confuse
6289 * arc_read_done(). This should be very rare
6290 * and so the performance impact shouldn't
6293 cv_wait(&hdr
->b_l1hdr
.b_cv
, hash_lock
);
6294 mutex_exit(hash_lock
);
6299 * This is a delicate dance that we play here.
6300 * This hdr might be in the ghost list so we access
6301 * it to move it out of the ghost list before we
6302 * initiate the read. If it's a prefetch then
6303 * it won't have a callback so we'll remove the
6304 * reference that arc_buf_alloc_impl() created. We
6305 * do this after we've called arc_access() to
6306 * avoid hitting an assert in remove_reference().
6308 arc_access(hdr
, hash_lock
);
6309 arc_hdr_alloc_abd(hdr
, encrypted_read
);
6312 if (encrypted_read
) {
6313 ASSERT(HDR_HAS_RABD(hdr
));
6314 size
= HDR_GET_PSIZE(hdr
);
6315 hdr_abd
= hdr
->b_crypt_hdr
.b_rabd
;
6316 zio_flags
|= ZIO_FLAG_RAW
;
6318 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
6319 size
= arc_hdr_size(hdr
);
6320 hdr_abd
= hdr
->b_l1hdr
.b_pabd
;
6322 if (arc_hdr_get_compress(hdr
) != ZIO_COMPRESS_OFF
) {
6323 zio_flags
|= ZIO_FLAG_RAW_COMPRESS
;
6327 * For authenticated bp's, we do not ask the ZIO layer
6328 * to authenticate them since this will cause the entire
6329 * IO to fail if the key isn't loaded. Instead, we
6330 * defer authentication until arc_buf_fill(), which will
6331 * verify the data when the key is available.
6333 if (BP_IS_AUTHENTICATED(bp
))
6334 zio_flags
|= ZIO_FLAG_RAW_ENCRYPT
;
6337 if (*arc_flags
& ARC_FLAG_PREFETCH
&&
6338 refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
))
6339 arc_hdr_set_flags(hdr
, ARC_FLAG_PREFETCH
);
6340 if (*arc_flags
& ARC_FLAG_PRESCIENT_PREFETCH
)
6341 arc_hdr_set_flags(hdr
, ARC_FLAG_PRESCIENT_PREFETCH
);
6342 if (*arc_flags
& ARC_FLAG_L2CACHE
)
6343 arc_hdr_set_flags(hdr
, ARC_FLAG_L2CACHE
);
6344 if (BP_IS_AUTHENTICATED(bp
))
6345 arc_hdr_set_flags(hdr
, ARC_FLAG_NOAUTH
);
6346 if (BP_GET_LEVEL(bp
) > 0)
6347 arc_hdr_set_flags(hdr
, ARC_FLAG_INDIRECT
);
6348 if (*arc_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
)
6349 arc_hdr_set_flags(hdr
, ARC_FLAG_PREDICTIVE_PREFETCH
);
6350 ASSERT(!GHOST_STATE(hdr
->b_l1hdr
.b_state
));
6352 acb
= kmem_zalloc(sizeof (arc_callback_t
), KM_SLEEP
);
6353 acb
->acb_done
= done
;
6354 acb
->acb_private
= private;
6355 acb
->acb_compressed
= compressed_read
;
6356 acb
->acb_encrypted
= encrypted_read
;
6357 acb
->acb_noauth
= noauth_read
;
6360 ASSERT3P(hdr
->b_l1hdr
.b_acb
, ==, NULL
);
6361 hdr
->b_l1hdr
.b_acb
= acb
;
6362 arc_hdr_set_flags(hdr
, ARC_FLAG_IO_IN_PROGRESS
);
6364 if (HDR_HAS_L2HDR(hdr
) &&
6365 (vd
= hdr
->b_l2hdr
.b_dev
->l2ad_vdev
) != NULL
) {
6366 devw
= hdr
->b_l2hdr
.b_dev
->l2ad_writing
;
6367 addr
= hdr
->b_l2hdr
.b_daddr
;
6369 * Lock out L2ARC device removal.
6371 if (vdev_is_dead(vd
) ||
6372 !spa_config_tryenter(spa
, SCL_L2ARC
, vd
, RW_READER
))
6377 * We count both async reads and scrub IOs as asynchronous so
6378 * that both can be upgraded in the event of a cache hit while
6379 * the read IO is still in-flight.
6381 if (priority
== ZIO_PRIORITY_ASYNC_READ
||
6382 priority
== ZIO_PRIORITY_SCRUB
)
6383 arc_hdr_set_flags(hdr
, ARC_FLAG_PRIO_ASYNC_READ
);
6385 arc_hdr_clear_flags(hdr
, ARC_FLAG_PRIO_ASYNC_READ
);
6388 * At this point, we have a level 1 cache miss. Try again in
6389 * L2ARC if possible.
6391 ASSERT3U(HDR_GET_LSIZE(hdr
), ==, lsize
);
6393 DTRACE_PROBE4(arc__miss
, arc_buf_hdr_t
*, hdr
, blkptr_t
*, bp
,
6394 uint64_t, lsize
, zbookmark_phys_t
*, zb
);
6395 ARCSTAT_BUMP(arcstat_misses
);
6396 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
6397 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
6398 data
, metadata
, misses
);
6400 if (vd
!= NULL
&& l2arc_ndev
!= 0 && !(l2arc_norw
&& devw
)) {
6402 * Read from the L2ARC if the following are true:
6403 * 1. The L2ARC vdev was previously cached.
6404 * 2. This buffer still has L2ARC metadata.
6405 * 3. This buffer isn't currently writing to the L2ARC.
6406 * 4. The L2ARC entry wasn't evicted, which may
6407 * also have invalidated the vdev.
6408 * 5. This isn't prefetch and l2arc_noprefetch is set.
6410 if (HDR_HAS_L2HDR(hdr
) &&
6411 !HDR_L2_WRITING(hdr
) && !HDR_L2_EVICTED(hdr
) &&
6412 !(l2arc_noprefetch
&& HDR_PREFETCH(hdr
))) {
6413 l2arc_read_callback_t
*cb
;
6417 DTRACE_PROBE1(l2arc__hit
, arc_buf_hdr_t
*, hdr
);
6418 ARCSTAT_BUMP(arcstat_l2_hits
);
6419 atomic_inc_32(&hdr
->b_l2hdr
.b_hits
);
6421 cb
= kmem_zalloc(sizeof (l2arc_read_callback_t
),
6423 cb
->l2rcb_hdr
= hdr
;
6426 cb
->l2rcb_flags
= zio_flags
;
6428 asize
= vdev_psize_to_asize(vd
, size
);
6429 if (asize
!= size
) {
6430 abd
= abd_alloc_for_io(asize
,
6431 HDR_ISTYPE_METADATA(hdr
));
6432 cb
->l2rcb_abd
= abd
;
6437 ASSERT(addr
>= VDEV_LABEL_START_SIZE
&&
6438 addr
+ asize
<= vd
->vdev_psize
-
6439 VDEV_LABEL_END_SIZE
);
6442 * l2arc read. The SCL_L2ARC lock will be
6443 * released by l2arc_read_done().
6444 * Issue a null zio if the underlying buffer
6445 * was squashed to zero size by compression.
6447 ASSERT3U(arc_hdr_get_compress(hdr
), !=,
6448 ZIO_COMPRESS_EMPTY
);
6449 rzio
= zio_read_phys(pio
, vd
, addr
,
6452 l2arc_read_done
, cb
, priority
,
6453 zio_flags
| ZIO_FLAG_DONT_CACHE
|
6455 ZIO_FLAG_DONT_PROPAGATE
|
6456 ZIO_FLAG_DONT_RETRY
, B_FALSE
);
6457 acb
->acb_zio_head
= rzio
;
6459 if (hash_lock
!= NULL
)
6460 mutex_exit(hash_lock
);
6462 DTRACE_PROBE2(l2arc__read
, vdev_t
*, vd
,
6464 ARCSTAT_INCR(arcstat_l2_read_bytes
,
6465 HDR_GET_PSIZE(hdr
));
6467 if (*arc_flags
& ARC_FLAG_NOWAIT
) {
6472 ASSERT(*arc_flags
& ARC_FLAG_WAIT
);
6473 if (zio_wait(rzio
) == 0)
6476 /* l2arc read error; goto zio_read() */
6477 if (hash_lock
!= NULL
)
6478 mutex_enter(hash_lock
);
6480 DTRACE_PROBE1(l2arc__miss
,
6481 arc_buf_hdr_t
*, hdr
);
6482 ARCSTAT_BUMP(arcstat_l2_misses
);
6483 if (HDR_L2_WRITING(hdr
))
6484 ARCSTAT_BUMP(arcstat_l2_rw_clash
);
6485 spa_config_exit(spa
, SCL_L2ARC
, vd
);
6489 spa_config_exit(spa
, SCL_L2ARC
, vd
);
6490 if (l2arc_ndev
!= 0) {
6491 DTRACE_PROBE1(l2arc__miss
,
6492 arc_buf_hdr_t
*, hdr
);
6493 ARCSTAT_BUMP(arcstat_l2_misses
);
6497 rzio
= zio_read(pio
, spa
, bp
, hdr_abd
, size
,
6498 arc_read_done
, hdr
, priority
, zio_flags
, zb
);
6499 acb
->acb_zio_head
= rzio
;
6501 if (hash_lock
!= NULL
)
6502 mutex_exit(hash_lock
);
6504 if (*arc_flags
& ARC_FLAG_WAIT
) {
6505 rc
= zio_wait(rzio
);
6509 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
6514 /* embedded bps don't actually go to disk */
6515 if (!BP_IS_EMBEDDED(bp
))
6516 spa_read_history_add(spa
, zb
, *arc_flags
);
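
/*
 * Illustrative caller-side sketch (hypothetical, not part of the ARC
 * itself): a simple synchronous cached read using the generic
 * arc_getbuf_func above might look roughly like the following, assuming
 * the caller already has a valid blkptr_t and bookmark in scope:
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
 *	if (err == 0 && abuf != NULL) {
 *		... consume abuf->b_data, then release the reference ...
 *		arc_buf_destroy(abuf, &abuf);
 *	}
 */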
arc_prune_t *
arc_add_prune_callback(arc_prune_func_t *func, void *private)
{
	arc_prune_t *p;

	p = kmem_alloc(sizeof (*p), KM_SLEEP);
	p->p_pfunc = func;
	p->p_private = private;
	list_link_init(&p->p_node);
	refcount_create(&p->p_refcnt);

	mutex_enter(&arc_prune_mtx);
	refcount_add(&p->p_refcnt, &arc_prune_list);
	list_insert_head(&arc_prune_list, p);
	mutex_exit(&arc_prune_mtx);

	return (p);
}

void
arc_remove_prune_callback(arc_prune_t *p)
{
	boolean_t wait = B_FALSE;
	mutex_enter(&arc_prune_mtx);
	list_remove(&arc_prune_list, p);
	if (refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
		wait = B_TRUE;
	mutex_exit(&arc_prune_mtx);

	/* wait for arc_prune_task to finish */
	if (wait)
		taskq_wait_outstanding(arc_prune_taskq, 0);
	ASSERT0(refcount_count(&p->p_refcnt));
	refcount_destroy(&p->p_refcnt);
	kmem_free(p, sizeof (*p));
}
/*
 * Notify the arc that a block was freed, and thus will never be used again.
 */
void
arc_freed(spa_t *spa, const blkptr_t *bp)
{
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	uint64_t guid = spa_load_guid(spa);

	ASSERT(!BP_IS_EMBEDDED(bp));

	hdr = buf_hash_find(guid, bp, &hash_lock);
	if (hdr == NULL)
		return;

	/*
	 * We might be trying to free a block that is still doing I/O
	 * (i.e. prefetch) or has a reference (i.e. a dedup-ed,
	 * dmu_sync-ed block). If this block is being prefetched, then it
	 * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr
	 * until the I/O completes. A block may also have a reference if it is
	 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would
	 * have written the new block to its final resting place on disk but
	 * without the dedup flag set. This would have left the hdr in the MRU
	 * state and discoverable. When the txg finally syncs it detects that
	 * the block was overridden in open context and issues an override I/O.
	 * Since this is a dedup block, the override I/O will determine if the
	 * block is already in the DDT. If so, then it will replace the io_bp
	 * with the bp from the DDT and allow the I/O to finish. When the I/O
	 * reaches the done callback, dbuf_write_override_done, it will
	 * check to see if the io_bp and io_bp_override are identical.
	 * If they are not, then it indicates that the bp was replaced with
	 * the bp in the DDT and the override bp is freed. This allows
	 * us to arrive here with a reference on a block that is being
	 * freed. So if we have an I/O in progress, or a reference to
	 * this hdr, then we don't destroy the hdr.
	 */
	if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) &&
	    refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) {
		arc_change_state(arc_anon, hdr, hash_lock);
		arc_hdr_destroy(hdr);
		mutex_exit(hash_lock);
	} else {
		mutex_exit(hash_lock);
	}
}
/*
 * Release this buffer from the cache, making it an anonymous buffer. This
 * must be done after a read and prior to modifying the buffer contents.
 * If the buffer has more than one reference, we must make
 * a new hdr for the buffer.
 */
void
arc_release(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * It would be nice to assert that if its DMU metadata (level >
	 * 0 || it's the dnode file), then it must be syncing context.
	 * But we don't know that information at this level.
	 */

	mutex_enter(&buf->b_evict_lock);

	ASSERT(HDR_HAS_L1HDR(hdr));

	/*
	 * We don't grab the hash lock prior to this check, because if
	 * the buffer's header is in the arc_anon state, it won't be
	 * linked into the hash table.
	 */
	if (hdr->b_l1hdr.b_state == arc_anon) {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		ASSERT(!HDR_IN_HASH_TABLE(hdr));
		ASSERT(!HDR_HAS_L2HDR(hdr));
		ASSERT(HDR_EMPTY(hdr));

		ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
		ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
		ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node));

		hdr->b_l1hdr.b_arc_access = 0;

		/*
		 * If the buf is being overridden then it may already
		 * have a hdr that is not empty.
		 */
		buf_discard_identity(hdr);
		arc_buf_thaw(buf);

		return;
	}

	kmutex_t *hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);

	/*
	 * This assignment is only valid as long as the hash_lock is
	 * held, we must be careful not to reference state or the
	 * b_state field after dropping the lock.
	 */
	arc_state_t *state = hdr->b_l1hdr.b_state;
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	ASSERT3P(state, !=, arc_anon);

	/* this buffer is not on any list */
	ASSERT3S(refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);

	if (HDR_HAS_L2HDR(hdr)) {
		mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);

		/*
		 * We have to recheck this conditional again now that
		 * we're holding the l2ad_mtx to prevent a race with
		 * another thread which might be concurrently calling
		 * l2arc_evict(). In that case, l2arc_evict() might have
		 * destroyed the header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx.
		 */
		if (HDR_HAS_L2HDR(hdr))
			arc_hdr_l2hdr_destroy(hdr);

		mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx);
	}

	/*
	 * Do we have more than one buf?
	 */
	if (hdr->b_l1hdr.b_bufcnt > 1) {
		arc_buf_hdr_t *nhdr;
		uint64_t spa = hdr->b_spa;
		uint64_t psize = HDR_GET_PSIZE(hdr);
		uint64_t lsize = HDR_GET_LSIZE(hdr);
		boolean_t protected = HDR_PROTECTED(hdr);
		enum zio_compress compress = arc_hdr_get_compress(hdr);
		arc_buf_contents_t type = arc_buf_type(hdr);
		VERIFY3U(hdr->b_type, ==, type);

		ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL);
		(void) remove_reference(hdr, hash_lock, tag);

		if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) {
			ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
			ASSERT(ARC_BUF_LAST(buf));
		}

		/*
		 * Pull the data off of this hdr and attach it to
		 * a new anonymous hdr. Also find the last buffer
		 * in the hdr's buffer list.
		 */
		arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
		ASSERT3P(lastbuf, !=, NULL);

		/*
		 * If the current arc_buf_t and the hdr are sharing their data
		 * buffer, then we must stop sharing that block.
		 */
		if (arc_buf_is_shared(buf)) {
			ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
			VERIFY(!arc_buf_is_shared(lastbuf));

			/*
			 * First, sever the block sharing relationship between
			 * buf and the arc_buf_hdr_t.
			 */
			arc_unshare_buf(hdr, buf);

			/*
			 * Now we need to recreate the hdr's b_pabd. Since we
			 * have lastbuf handy, we try to share with it, but if
			 * we can't then we allocate a new b_pabd and copy the
			 * data from buf into it.
			 */
			if (arc_can_share(hdr, lastbuf)) {
				arc_share_buf(hdr, lastbuf);
			} else {
				arc_hdr_alloc_abd(hdr, B_FALSE);
				abd_copy_from_buf(hdr->b_l1hdr.b_pabd,
				    buf->b_data, psize);
			}
			VERIFY3P(lastbuf->b_data, !=, NULL);
		} else if (HDR_SHARED_DATA(hdr)) {
			/*
			 * Uncompressed shared buffers are always at the end
			 * of the list. Compressed buffers don't have the
			 * same requirements. This makes it hard to
			 * simply assert that the lastbuf is shared so
			 * we rely on the hdr's compression flags to determine
			 * if we have a compressed, shared buffer.
			 */
			ASSERT(arc_buf_is_shared(lastbuf) ||
			    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
			ASSERT(!ARC_BUF_SHARED(buf));
		}

		ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
		ASSERT3P(state, !=, arc_l2c_only);

		(void) refcount_remove_many(&state->arcs_size,
		    arc_buf_size(buf), buf);

		if (refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
			ASSERT3P(state, !=, arc_l2c_only);
			(void) refcount_remove_many(&state->arcs_esize[type],
			    arc_buf_size(buf), buf);
		}

		hdr->b_l1hdr.b_bufcnt -= 1;
		if (ARC_BUF_ENCRYPTED(buf))
			hdr->b_crypt_hdr.b_ebufcnt -= 1;

		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		/* if this is the last uncompressed buf free the checksum */
		if (!arc_hdr_has_uncompressed_buf(hdr))
			arc_cksum_free(hdr);

		mutex_exit(hash_lock);

		/*
		 * Allocate a new hdr. The new hdr will contain a b_pabd
		 * buffer which will be freed in arc_write().
		 */
		nhdr = arc_hdr_alloc(spa, psize, lsize, protected,
		    compress, type, HDR_HAS_RABD(hdr));
		ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT0(nhdr->b_l1hdr.b_bufcnt);
		ASSERT0(refcount_count(&nhdr->b_l1hdr.b_refcnt));
		VERIFY3U(nhdr->b_type, ==, type);
		ASSERT(!HDR_SHARED_DATA(nhdr));

		nhdr->b_l1hdr.b_buf = buf;
		nhdr->b_l1hdr.b_bufcnt = 1;
		if (ARC_BUF_ENCRYPTED(buf))
			nhdr->b_crypt_hdr.b_ebufcnt = 1;
		nhdr->b_l1hdr.b_mru_hits = 0;
		nhdr->b_l1hdr.b_mru_ghost_hits = 0;
		nhdr->b_l1hdr.b_mfu_hits = 0;
		nhdr->b_l1hdr.b_mfu_ghost_hits = 0;
		nhdr->b_l1hdr.b_l2_hits = 0;
		(void) refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
		buf->b_hdr = nhdr;

		mutex_exit(&buf->b_evict_lock);
		(void) refcount_add_many(&arc_anon->arcs_size,
		    HDR_GET_LSIZE(nhdr), buf);
	} else {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
		/* protected by hash lock, or hdr is on arc_anon */
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		hdr->b_l1hdr.b_mru_hits = 0;
		hdr->b_l1hdr.b_mru_ghost_hits = 0;
		hdr->b_l1hdr.b_mfu_hits = 0;
		hdr->b_l1hdr.b_mfu_ghost_hits = 0;
		hdr->b_l1hdr.b_l2_hits = 0;
		arc_change_state(arc_anon, hdr, hash_lock);
		hdr->b_l1hdr.b_arc_access = 0;

		mutex_exit(hash_lock);
		buf_discard_identity(hdr);
		arc_buf_thaw(buf);
	}
}
int
arc_released(arc_buf_t *buf)
{
	int released;

	mutex_enter(&buf->b_evict_lock);
	released = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_state == arc_anon);
	mutex_exit(&buf->b_evict_lock);
	return (released);
}

int
arc_referenced(arc_buf_t *buf)
{
	int referenced;

	mutex_enter(&buf->b_evict_lock);
	referenced = (refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
	mutex_exit(&buf->b_evict_lock);
	return (referenced);
}
static void
arc_write_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;
	blkptr_t *bp = zio->io_bp;
	uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp);
	fstrans_cookie_t cookie = spl_fstrans_mark();

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
	ASSERT(hdr->b_l1hdr.b_bufcnt > 0);

	/*
	 * If we're reexecuting this zio because the pool suspended, then
	 * cleanup any state that was previously set the first time the
	 * callback was invoked.
	 */
	if (zio->io_flags & ZIO_FLAG_REEXECUTED) {
		arc_cksum_free(hdr);
		arc_buf_unwatch(buf);
		if (hdr->b_l1hdr.b_pabd != NULL) {
			if (arc_buf_is_shared(buf)) {
				arc_unshare_buf(hdr, buf);
			} else {
				arc_hdr_free_abd(hdr, B_FALSE);
			}
		}

		if (HDR_HAS_RABD(hdr))
			arc_hdr_free_abd(hdr, B_TRUE);
	}
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
	ASSERT(!HDR_HAS_RABD(hdr));
	ASSERT(!HDR_SHARED_DATA(hdr));
	ASSERT(!arc_buf_is_shared(buf));

	callback->awcb_ready(zio, buf, callback->awcb_private);

	if (HDR_IO_IN_PROGRESS(hdr))
		ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED);

	arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);

	if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr))
		hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp));

	if (BP_IS_PROTECTED(bp)) {
		/* ZIL blocks are written through zio_rewrite */
		ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG);
		ASSERT(HDR_PROTECTED(hdr));

		if (BP_SHOULD_BYTESWAP(bp)) {
			if (BP_GET_LEVEL(bp) > 0) {
				hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
			} else {
				hdr->b_l1hdr.b_byteswap =
				    DMU_OT_BYTESWAP(BP_GET_TYPE(bp));
			}
		} else {
			hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
		}

		hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
		hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
		zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
		    hdr->b_crypt_hdr.b_iv);
		zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac);
	}

	/*
	 * If this block was written for raw encryption but the zio layer
	 * ended up only authenticating it, adjust the buffer flags now.
	 */
	if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) {
		arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH);
		buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
		if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF)
			buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	} else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) {
		buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
		buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	}

	/* this must be done after the buffer flags are adjusted */
	arc_cksum_compute(buf);

	enum zio_compress compress;
	if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
		compress = ZIO_COMPRESS_OFF;
	} else {
		ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
		compress = BP_GET_COMPRESS(bp);
	}
	HDR_SET_PSIZE(hdr, psize);
	arc_hdr_set_compress(hdr, compress);

	if (zio->io_error != 0 || psize == 0)
		goto out;

	/*
	 * Fill the hdr with data. If the buffer is encrypted we have no choice
	 * but to copy the data into b_rabd. If the hdr is compressed, the data
	 * we want is available from the zio, otherwise we can take it from
	 * the buf.
	 *
	 * We might be able to share the buf's data with the hdr here. However,
	 * doing so would cause the ARC to be full of linear ABDs if we write a
	 * lot of shareable data. As a compromise, we check whether scattered
	 * ABDs are allowed, and assume that if they are then the user wants
	 * the ARC to be primarily filled with them regardless of the data being
	 * written. Therefore, if they're allowed then we allocate one and copy
	 * the data into it; otherwise, we share the data directly if we can.
	 */
	if (ARC_BUF_ENCRYPTED(buf)) {
		ASSERT3U(psize, >, 0);
		ASSERT(ARC_BUF_COMPRESSED(buf));
		arc_hdr_alloc_abd(hdr, B_TRUE);
		abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
	} else if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) {
		/*
		 * Ideally, we would always copy the io_abd into b_pabd, but the
		 * user may have disabled compressed ARC, thus we must check the
		 * hdr's compression setting rather than the io_bp's.
		 */
		if (BP_IS_ENCRYPTED(bp)) {
			ASSERT3U(psize, >, 0);
			arc_hdr_alloc_abd(hdr, B_TRUE);
			abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
		} else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
		    !ARC_BUF_COMPRESSED(buf)) {
			ASSERT3U(psize, >, 0);
			arc_hdr_alloc_abd(hdr, B_FALSE);
			abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize);
		} else {
			ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr));
			arc_hdr_alloc_abd(hdr, B_FALSE);
			abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data,
			    arc_buf_size(buf));
		}
	} else {
		ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd));
		ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf));
		ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);

		arc_share_buf(hdr, buf);
	}

out:
	arc_hdr_verify(hdr, bp);
	spl_fstrans_unmark(cookie);
}
static void
arc_write_children_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;

	callback->awcb_children_ready(zio, buf, callback->awcb_private);
}

/*
 * The SPA calls this callback for each physical write that happens on behalf
 * of a logical write. See the comment in dbuf_write_physdone() for details.
 */
static void
arc_write_physdone(zio_t *zio)
{
	arc_write_callback_t *cb = zio->io_private;
	if (cb->awcb_physdone != NULL)
		cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private);
}
static void
arc_write_done(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);

	if (zio->io_error == 0) {
		arc_hdr_verify(hdr, zio->io_bp);

		if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
			buf_discard_identity(hdr);
		} else {
			hdr->b_dva = *BP_IDENTITY(zio->io_bp);
			hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp);
		}
	} else {
		ASSERT(HDR_EMPTY(hdr));
	}

	/*
	 * If the block to be written was all-zero or compressed enough to be
	 * embedded in the BP, no write was performed so there will be no
	 * dva/birth/checksum. The buffer must therefore remain anonymous
	 * (and uncached).
	 */
	if (!HDR_EMPTY(hdr)) {
		arc_buf_hdr_t *exists;
		kmutex_t *hash_lock;

		ASSERT3U(zio->io_error, ==, 0);

		arc_cksum_verify(buf);

		exists = buf_hash_insert(hdr, &hash_lock);
		if (exists != NULL) {
			/*
			 * This can only happen if we overwrite for
			 * sync-to-convergence, because we remove
			 * buffers from the hash table when we arc_free().
			 */
			if (zio->io_flags & ZIO_FLAG_IO_REWRITE) {
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad overwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
				ASSERT(refcount_is_zero(
				    &exists->b_l1hdr.b_refcnt));
				arc_change_state(arc_anon, exists, hash_lock);
				mutex_exit(hash_lock);
				arc_hdr_destroy(exists);
				exists = buf_hash_insert(hdr, &hash_lock);
				ASSERT3P(exists, ==, NULL);
			} else if (zio->io_flags & ZIO_FLAG_NOPWRITE) {
				ASSERT(zio->io_prop.zp_nopwrite);
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad nopwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
			} else {
				ASSERT(hdr->b_l1hdr.b_bufcnt == 1);
				ASSERT(hdr->b_l1hdr.b_state == arc_anon);
				ASSERT(BP_GET_DEDUP(zio->io_bp));
				ASSERT(BP_GET_LEVEL(zio->io_bp) == 0);
			}
		}
		arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
		/* if it's not anon, we are doing a scrub */
		if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon)
			arc_access(hdr, hash_lock);
		mutex_exit(hash_lock);
	} else {
		arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
	}

	ASSERT(!refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
	callback->awcb_done(zio, buf, callback->awcb_private);

	abd_put(zio->io_abd);
	kmem_free(callback, sizeof (arc_write_callback_t));
}
zio_t *
arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
    blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc,
    const zio_prop_t *zp, arc_write_done_func_t *ready,
    arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone,
    arc_write_done_func_t *done, void *private, zio_priority_t priority,
    int zio_flags, const zbookmark_phys_t *zb)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	arc_write_callback_t *callback;
	zio_t *zio;
	zio_prop_t localprop = *zp;

	ASSERT3P(ready, !=, NULL);
	ASSERT3P(done, !=, NULL);
	ASSERT(!HDR_IO_ERROR(hdr));
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));
	ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
	ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
	if (l2arc)
		arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);

	if (ARC_BUF_ENCRYPTED(buf)) {
		ASSERT(ARC_BUF_COMPRESSED(buf));
		localprop.zp_encrypt = B_TRUE;
		localprop.zp_compress = HDR_GET_COMPRESS(hdr);
		localprop.zp_byteorder =
		    (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
		    ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
		bcopy(hdr->b_crypt_hdr.b_salt, localprop.zp_salt,
		    ZIO_DATA_SALT_LEN);
		bcopy(hdr->b_crypt_hdr.b_iv, localprop.zp_iv,
		    ZIO_DATA_IV_LEN);
		bcopy(hdr->b_crypt_hdr.b_mac, localprop.zp_mac,
		    ZIO_DATA_MAC_LEN);
		if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) {
			localprop.zp_nopwrite = B_FALSE;
			localprop.zp_copies =
			    MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1);
		}
		zio_flags |= ZIO_FLAG_RAW;
	} else if (ARC_BUF_COMPRESSED(buf)) {
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf));
		localprop.zp_compress = HDR_GET_COMPRESS(hdr);
		zio_flags |= ZIO_FLAG_RAW_COMPRESS;
	}
	callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
	callback->awcb_ready = ready;
	callback->awcb_children_ready = children_ready;
	callback->awcb_physdone = physdone;
	callback->awcb_done = done;
	callback->awcb_private = private;
	callback->awcb_buf = buf;

	/*
	 * The hdr's b_pabd is now stale, free it now. A new data block
	 * will be allocated when the zio pipeline calls arc_write_ready().
	 */
	if (hdr->b_l1hdr.b_pabd != NULL) {
		/*
		 * If the buf is currently sharing the data block with
		 * the hdr then we need to break that relationship here.
		 * The hdr will remain with a NULL data pointer and the
		 * buf will take sole ownership of the block.
		 */
		if (arc_buf_is_shared(buf)) {
			arc_unshare_buf(hdr, buf);
		} else {
			arc_hdr_free_abd(hdr, B_FALSE);
		}
		VERIFY3P(buf->b_data, !=, NULL);
	}

	if (HDR_HAS_RABD(hdr))
		arc_hdr_free_abd(hdr, B_TRUE);

	if (!(zio_flags & ZIO_FLAG_RAW))
		arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF);

	ASSERT(!arc_buf_is_shared(buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);

	zio = zio_write(pio, spa, txg, bp,
	    abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)),
	    HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready,
	    (children_ready != NULL) ? arc_write_children_ready : NULL,
	    arc_write_physdone, arc_write_done, callback,
	    priority, zio_flags, zb);

	return (zio);
}
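
/*
 * Illustrative caller-side sketch (hypothetical, not part of the ARC):
 * dbuf-style users hand an anonymous, released buffer to arc_write() and
 * let the returned zio drive the ready/done callbacks declared above,
 * roughly like this:
 *
 *	zio_t *wzio = arc_write(pio, spa, txg, bp, buf, B_TRUE, &zp,
 *	    my_ready_cb, NULL, NULL, my_done_cb, my_arg,
 *	    ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, zb);
 *	zio_nowait(wzio);
 *
 * my_ready_cb, my_done_cb and my_arg are placeholders for the caller's
 * arc_write_done_func_t implementations and private state.
 */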
static int
arc_memory_throttle(spa_t *spa, uint64_t reserve, uint64_t txg)
{
#ifdef _KERNEL
	uint64_t available_memory = arc_free_memory();

#if defined(_ILP32)
	available_memory =
	    MIN(available_memory, vmem_size(heap_arena, VMEM_FREE));
#endif

	if (available_memory > arc_all_memory() * arc_lotsfree_percent / 100)
		return (0);

	if (txg > spa->spa_lowmem_last_txg) {
		spa->spa_lowmem_last_txg = txg;
		spa->spa_lowmem_page_load = 0;
	}
	/*
	 * If we are in pageout, we know that memory is already tight,
	 * the arc is already going to be evicting, so we just want to
	 * continue to let page writes occur as quickly as possible.
	 */
	if (current_is_kswapd()) {
		if (spa->spa_lowmem_page_load >
		    MAX(arc_sys_free / 4, available_memory) / 4) {
			DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
			return (SET_ERROR(ERESTART));
		}
		/* Note: reserve is inflated, so we deflate */
		atomic_add_64(&spa->spa_lowmem_page_load, reserve / 8);
		return (0);
	} else if (spa->spa_lowmem_page_load > 0 && arc_reclaim_needed()) {
		/* memory is low, delay before restarting */
		ARCSTAT_INCR(arcstat_memory_throttle_count, 1);
		DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
		return (SET_ERROR(EAGAIN));
	}
	spa->spa_lowmem_page_load = 0;
#endif /* _KERNEL */
	return (0);
}
void
arc_tempreserve_clear(uint64_t reserve)
{
	atomic_add_64(&arc_tempreserve, -reserve);
	ASSERT((int64_t)arc_tempreserve >= 0);
}
int
arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg)
{
	int error;
	uint64_t anon_size;

	if (!arc_no_grow &&
	    reserve > arc_c/4 &&
	    reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
		arc_c = MIN(arc_c_max, reserve * 4);

	/*
	 * Throttle when the calculated memory footprint for the TXG
	 * exceeds the target ARC size.
	 */
	if (reserve > arc_c) {
		DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
		return (SET_ERROR(ERESTART));
	}

	/*
	 * Don't count loaned bufs as in flight dirty data to prevent long
	 * network delays from blocking transactions that are ready to be
	 * assigned to a txg.
	 */

	/* assert that it has not wrapped around */
	ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);

	anon_size = MAX((int64_t)(refcount_count(&arc_anon->arcs_size) -
	    arc_loaned_bytes), 0);

	/*
	 * Writes will, almost always, require additional memory allocations
	 * in order to compress/encrypt/etc the data. We therefore need to
	 * make sure that there is sufficient available memory for this.
	 */
	error = arc_memory_throttle(spa, reserve, txg);
	if (error != 0)
		return (error);

	/*
	 * Throttle writes when the amount of dirty data in the cache
	 * gets too large. We try to keep the cache less than half full
	 * of dirty blocks so that our sync times don't grow too large.
	 *
	 * In the case of one pool being built on another pool, we want
	 * to make sure we don't end up throttling the lower (backing)
	 * pool when the upper pool is the majority contributor to dirty
	 * data. To ensure we make forward progress during throttling, we
	 * also check the current pool's net dirty data and only throttle
	 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty
	 * data in the cache.
	 *
	 * Note: if two requests come in concurrently, we might let them
	 * both succeed, when one of them should fail. Not a huge deal.
	 */
	uint64_t total_dirty = reserve + arc_tempreserve + anon_size;
	uint64_t spa_dirty_anon = spa_dirty_data(spa);

	if (total_dirty > arc_c * zfs_arc_dirty_limit_percent / 100 &&
	    anon_size > arc_c * zfs_arc_anon_limit_percent / 100 &&
	    spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) {
		uint64_t meta_esize =
		    refcount_count(&arc_anon->arcs_esize[ARC_BUFC_METADATA]);
		uint64_t data_esize =
		    refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
		dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
		    "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
		    arc_tempreserve >> 10, meta_esize >> 10,
		    data_esize >> 10, reserve >> 10, arc_c >> 10);
		DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
		return (SET_ERROR(ERESTART));
	}
	atomic_add_64(&arc_tempreserve, reserve);
	return (0);
}
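
/*
 * Illustrative arithmetic for the dirty-data throttle above (example
 * values only, not measured from a real system): with arc_c = 4 GiB and
 * hypothetical tunings zfs_arc_dirty_limit_percent = 50,
 * zfs_arc_anon_limit_percent = 25 and zfs_arc_pool_dirty_percent = 20, a
 * TXG is throttled only when total_dirty exceeds 2 GiB, the anonymous
 * (not yet written) portion alone exceeds 1 GiB, and this pool accounts
 * for more than 20% of that anonymous dirty data. All three conditions
 * must hold, so a mostly-clean ARC, or a pool contributing little dirty
 * data, is never throttled here.
 */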
static void
arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
    kstat_named_t *evict_data, kstat_named_t *evict_metadata)
{
	size->value.ui64 = refcount_count(&state->arcs_size);
	evict_data->value.ui64 =
	    refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
	evict_metadata->value.ui64 =
	    refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
}
static int
arc_kstat_update(kstat_t *ksp, int rw)
{
	arc_stats_t *as = ksp->ks_data;

	if (rw == KSTAT_WRITE) {
		return (SET_ERROR(EACCES));
	} else {
		arc_kstat_update_state(arc_anon,
		    &as->arcstat_anon_size,
		    &as->arcstat_anon_evictable_data,
		    &as->arcstat_anon_evictable_metadata);
		arc_kstat_update_state(arc_mru,
		    &as->arcstat_mru_size,
		    &as->arcstat_mru_evictable_data,
		    &as->arcstat_mru_evictable_metadata);
		arc_kstat_update_state(arc_mru_ghost,
		    &as->arcstat_mru_ghost_size,
		    &as->arcstat_mru_ghost_evictable_data,
		    &as->arcstat_mru_ghost_evictable_metadata);
		arc_kstat_update_state(arc_mfu,
		    &as->arcstat_mfu_size,
		    &as->arcstat_mfu_evictable_data,
		    &as->arcstat_mfu_evictable_metadata);
		arc_kstat_update_state(arc_mfu_ghost,
		    &as->arcstat_mfu_ghost_size,
		    &as->arcstat_mfu_ghost_evictable_data,
		    &as->arcstat_mfu_ghost_evictable_metadata);

		ARCSTAT(arcstat_size) = aggsum_value(&arc_size);
		ARCSTAT(arcstat_meta_used) = aggsum_value(&arc_meta_used);
		ARCSTAT(arcstat_data_size) = aggsum_value(&astat_data_size);
		ARCSTAT(arcstat_metadata_size) =
		    aggsum_value(&astat_metadata_size);
		ARCSTAT(arcstat_hdr_size) = aggsum_value(&astat_hdr_size);
		ARCSTAT(arcstat_l2_hdr_size) = aggsum_value(&astat_l2_hdr_size);
		ARCSTAT(arcstat_dbuf_size) = aggsum_value(&astat_dbuf_size);
		ARCSTAT(arcstat_dnode_size) = aggsum_value(&astat_dnode_size);
		ARCSTAT(arcstat_bonus_size) = aggsum_value(&astat_bonus_size);

		as->arcstat_memory_all_bytes.value.ui64 =
		    arc_all_memory();
		as->arcstat_memory_free_bytes.value.ui64 =
		    arc_free_memory();
		as->arcstat_memory_available_bytes.value.i64 =
		    arc_available_memory();
	}

	return (0);
}
/*
 * This function *must* return indices evenly distributed between all
 * sublists of the multilist. This is needed due to how the ARC eviction
 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
 * distributed between all sublists and uses this assumption when
 * deciding which sublist to evict from and how much to evict from it.
 */
unsigned int
arc_state_multilist_index_func(multilist_t *ml, void *obj)
{
	arc_buf_hdr_t *hdr = obj;

	/*
	 * We rely on b_dva to generate evenly distributed index
	 * numbers using buf_hash below. So, as an added precaution,
	 * let's make sure we never add empty buffers to the arc lists.
	 */
	ASSERT(!HDR_EMPTY(hdr));

	/*
	 * The assumption here, is the hash value for a given
	 * arc_buf_hdr_t will remain constant throughout its lifetime
	 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
	 * Thus, we don't need to store the header's sublist index
	 * on insertion, as this index can be recalculated on removal.
	 *
	 * Also, the low order bits of the hash value are thought to be
	 * distributed evenly. Otherwise, in the case that the multilist
	 * has a power of two number of sublists, each sublist's usage
	 * would not be evenly distributed.
	 */
	return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
	    multilist_get_num_sublists(ml));
}
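/*
 * The comment above depends on the hash modulo the sublist count spreading
 * headers evenly across sublists.  The sketch below is illustrative only
 * (kept out of the build with #if 0): it uses a stand-in hash, not
 * buf_hash(), and simply counts how many of 100000 synthetic keys land in
 * each of 8 sublists.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

static uint64_t
toy_hash(uint64_t spa, uint64_t dva, uint64_t birth)
{
	/* A small mixing function standing in for buf_hash(). */
	uint64_t h = spa ^ (dva * 0x9e3779b97f4a7c15ULL) ^ birth;
	h ^= h >> 33;
	h *= 0xff51afd7ed558ccdULL;
	h ^= h >> 33;
	return (h);
}

int
main(void)
{
	enum { NUM_SUBLISTS = 8 };
	unsigned counts[NUM_SUBLISTS] = { 0 };

	for (uint64_t i = 0; i < 100000; i++)
		counts[toy_hash(42, i, i % 7) % NUM_SUBLISTS]++;

	for (int s = 0; s < NUM_SUBLISTS; s++)
		printf("sublist %d: %u entries\n", s, counts[s]);
	return (0);
}
#endif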
/*
 * Called during module initialization and periodically thereafter to
 * apply reasonable changes to the exposed performance tunings. Non-zero
 * zfs_* values which differ from the currently set values will be applied.
 */
static void
arc_tuning_update(void)
{
	uint64_t allmem = arc_all_memory();
	unsigned long limit;

	/* Valid range: 64M - <all physical memory> */
	if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) &&
	    (zfs_arc_max >= 64 << 20) && (zfs_arc_max < allmem) &&
	    (zfs_arc_max > arc_c_min)) {
		arc_c_max = zfs_arc_max;
		arc_c = arc_c_max;
		arc_p = (arc_c >> 1);
		if (arc_meta_limit > arc_c_max)
			arc_meta_limit = arc_c_max;
		if (arc_dnode_limit > arc_meta_limit)
			arc_dnode_limit = arc_meta_limit;
	}

	/* Valid range: 32M - <arc_c_max> */
	if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) &&
	    (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) &&
	    (zfs_arc_min <= arc_c_max)) {
		arc_c_min = zfs_arc_min;
		arc_c = MAX(arc_c, arc_c_min);
	}

	/* Valid range: 16M - <arc_c_max> */
	if ((zfs_arc_meta_min) && (zfs_arc_meta_min != arc_meta_min) &&
	    (zfs_arc_meta_min >= 1ULL << SPA_MAXBLOCKSHIFT) &&
	    (zfs_arc_meta_min <= arc_c_max)) {
		arc_meta_min = zfs_arc_meta_min;
		if (arc_meta_limit < arc_meta_min)
			arc_meta_limit = arc_meta_min;
		if (arc_dnode_limit < arc_meta_min)
			arc_dnode_limit = arc_meta_min;
	}

	/* Valid range: <arc_meta_min> - <arc_c_max> */
	limit = zfs_arc_meta_limit ? zfs_arc_meta_limit :
	    MIN(zfs_arc_meta_limit_percent, 100) * arc_c_max / 100;
	if ((limit != arc_meta_limit) &&
	    (limit >= arc_meta_min) &&
	    (limit <= arc_c_max))
		arc_meta_limit = limit;

	/* Valid range: <arc_meta_min> - <arc_meta_limit> */
	limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit :
	    MIN(zfs_arc_dnode_limit_percent, 100) * arc_meta_limit / 100;
	if ((limit != arc_dnode_limit) &&
	    (limit >= arc_meta_min) &&
	    (limit <= arc_meta_limit))
		arc_dnode_limit = limit;

	/* Valid range: 1 - N */
	if (zfs_arc_grow_retry)
		arc_grow_retry = zfs_arc_grow_retry;

	/* Valid range: 1 - N */
	if (zfs_arc_shrink_shift) {
		arc_shrink_shift = zfs_arc_shrink_shift;
		arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1);
	}

	/* Valid range: 1 - N */
	if (zfs_arc_p_min_shift)
		arc_p_min_shift = zfs_arc_p_min_shift;

	/* Valid range: 1 - N ms */
	if (zfs_arc_min_prefetch_ms)
		arc_min_prefetch_ms = zfs_arc_min_prefetch_ms;

	/* Valid range: 1 - N ms */
	if (zfs_arc_min_prescient_prefetch_ms) {
		arc_min_prescient_prefetch_ms =
		    zfs_arc_min_prescient_prefetch_ms;
	}

	/* Valid range: 0 - 100 */
	if ((zfs_arc_lotsfree_percent >= 0) &&
	    (zfs_arc_lotsfree_percent <= 100))
		arc_lotsfree_percent = zfs_arc_lotsfree_percent;

	/* Valid range: 0 - <all physical memory> */
	if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free))
		arc_sys_free = MIN(MAX(zfs_arc_sys_free, 0), allmem);
}
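/*
 * Each tunable above follows the same accept-if-valid pattern: a value is
 * applied only when it is non-zero, differs from the current setting, and
 * lies inside its valid range; out-of-range requests are ignored rather
 * than clamped.  The sketch below is illustrative only (kept out of the
 * build with #if 0) and shows that pattern in isolation with hypothetical
 * bounds and values.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

static uint64_t
apply_tunable(uint64_t tunable, uint64_t current, uint64_t lo, uint64_t hi)
{
	/* Zero means "leave the current setting alone". */
	if (tunable != 0 && tunable != current && tunable >= lo &&
	    tunable <= hi)
		return (tunable);
	return (current);
}

int
main(void)
{
	uint64_t arc_c_max = 8ULL << 30;	/* current max, 8 GiB */
	uint64_t zfs_arc_max = 4ULL << 30;	/* user request, 4 GiB */

	/* 64 MiB floor, hypothetical 16 GiB ceiling for the example. */
	arc_c_max = apply_tunable(zfs_arc_max, arc_c_max,
	    64ULL << 20, 16ULL << 30);
	printf("arc_c_max=%llu\n", (unsigned long long)arc_c_max);
	return (0);
}
#endif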
7528 arc_state_init(void)
7530 arc_anon
= &ARC_anon
;
7532 arc_mru_ghost
= &ARC_mru_ghost
;
7534 arc_mfu_ghost
= &ARC_mfu_ghost
;
7535 arc_l2c_only
= &ARC_l2c_only
;
7537 arc_mru
->arcs_list
[ARC_BUFC_METADATA
] =
7538 multilist_create(sizeof (arc_buf_hdr_t
),
7539 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7540 arc_state_multilist_index_func
);
7541 arc_mru
->arcs_list
[ARC_BUFC_DATA
] =
7542 multilist_create(sizeof (arc_buf_hdr_t
),
7543 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7544 arc_state_multilist_index_func
);
7545 arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
] =
7546 multilist_create(sizeof (arc_buf_hdr_t
),
7547 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7548 arc_state_multilist_index_func
);
7549 arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
] =
7550 multilist_create(sizeof (arc_buf_hdr_t
),
7551 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7552 arc_state_multilist_index_func
);
7553 arc_mfu
->arcs_list
[ARC_BUFC_METADATA
] =
7554 multilist_create(sizeof (arc_buf_hdr_t
),
7555 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7556 arc_state_multilist_index_func
);
7557 arc_mfu
->arcs_list
[ARC_BUFC_DATA
] =
7558 multilist_create(sizeof (arc_buf_hdr_t
),
7559 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7560 arc_state_multilist_index_func
);
7561 arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
] =
7562 multilist_create(sizeof (arc_buf_hdr_t
),
7563 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7564 arc_state_multilist_index_func
);
7565 arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
] =
7566 multilist_create(sizeof (arc_buf_hdr_t
),
7567 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7568 arc_state_multilist_index_func
);
7569 arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
] =
7570 multilist_create(sizeof (arc_buf_hdr_t
),
7571 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7572 arc_state_multilist_index_func
);
7573 arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
] =
7574 multilist_create(sizeof (arc_buf_hdr_t
),
7575 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7576 arc_state_multilist_index_func
);
7578 refcount_create(&arc_anon
->arcs_esize
[ARC_BUFC_METADATA
]);
7579 refcount_create(&arc_anon
->arcs_esize
[ARC_BUFC_DATA
]);
7580 refcount_create(&arc_mru
->arcs_esize
[ARC_BUFC_METADATA
]);
7581 refcount_create(&arc_mru
->arcs_esize
[ARC_BUFC_DATA
]);
7582 refcount_create(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7583 refcount_create(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7584 refcount_create(&arc_mfu
->arcs_esize
[ARC_BUFC_METADATA
]);
7585 refcount_create(&arc_mfu
->arcs_esize
[ARC_BUFC_DATA
]);
7586 refcount_create(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7587 refcount_create(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7588 refcount_create(&arc_l2c_only
->arcs_esize
[ARC_BUFC_METADATA
]);
7589 refcount_create(&arc_l2c_only
->arcs_esize
[ARC_BUFC_DATA
]);
7591 refcount_create(&arc_anon
->arcs_size
);
7592 refcount_create(&arc_mru
->arcs_size
);
7593 refcount_create(&arc_mru_ghost
->arcs_size
);
7594 refcount_create(&arc_mfu
->arcs_size
);
7595 refcount_create(&arc_mfu_ghost
->arcs_size
);
7596 refcount_create(&arc_l2c_only
->arcs_size
);
7598 aggsum_init(&arc_meta_used
, 0);
7599 aggsum_init(&arc_size
, 0);
7600 aggsum_init(&astat_data_size
, 0);
7601 aggsum_init(&astat_metadata_size
, 0);
7602 aggsum_init(&astat_hdr_size
, 0);
7603 aggsum_init(&astat_l2_hdr_size
, 0);
7604 aggsum_init(&astat_bonus_size
, 0);
7605 aggsum_init(&astat_dnode_size
, 0);
7606 aggsum_init(&astat_dbuf_size
, 0);
7608 arc_anon
->arcs_state
= ARC_STATE_ANON
;
7609 arc_mru
->arcs_state
= ARC_STATE_MRU
;
7610 arc_mru_ghost
->arcs_state
= ARC_STATE_MRU_GHOST
;
7611 arc_mfu
->arcs_state
= ARC_STATE_MFU
;
7612 arc_mfu_ghost
->arcs_state
= ARC_STATE_MFU_GHOST
;
7613 arc_l2c_only
->arcs_state
= ARC_STATE_L2C_ONLY
;
7617 arc_state_fini(void)
7619 refcount_destroy(&arc_anon
->arcs_esize
[ARC_BUFC_METADATA
]);
7620 refcount_destroy(&arc_anon
->arcs_esize
[ARC_BUFC_DATA
]);
7621 refcount_destroy(&arc_mru
->arcs_esize
[ARC_BUFC_METADATA
]);
7622 refcount_destroy(&arc_mru
->arcs_esize
[ARC_BUFC_DATA
]);
7623 refcount_destroy(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7624 refcount_destroy(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7625 refcount_destroy(&arc_mfu
->arcs_esize
[ARC_BUFC_METADATA
]);
7626 refcount_destroy(&arc_mfu
->arcs_esize
[ARC_BUFC_DATA
]);
7627 refcount_destroy(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7628 refcount_destroy(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7629 refcount_destroy(&arc_l2c_only
->arcs_esize
[ARC_BUFC_METADATA
]);
7630 refcount_destroy(&arc_l2c_only
->arcs_esize
[ARC_BUFC_DATA
]);
7632 refcount_destroy(&arc_anon
->arcs_size
);
7633 refcount_destroy(&arc_mru
->arcs_size
);
7634 refcount_destroy(&arc_mru_ghost
->arcs_size
);
7635 refcount_destroy(&arc_mfu
->arcs_size
);
7636 refcount_destroy(&arc_mfu_ghost
->arcs_size
);
7637 refcount_destroy(&arc_l2c_only
->arcs_size
);
7639 multilist_destroy(arc_mru
->arcs_list
[ARC_BUFC_METADATA
]);
7640 multilist_destroy(arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
7641 multilist_destroy(arc_mfu
->arcs_list
[ARC_BUFC_METADATA
]);
7642 multilist_destroy(arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
7643 multilist_destroy(arc_mru
->arcs_list
[ARC_BUFC_DATA
]);
7644 multilist_destroy(arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
]);
7645 multilist_destroy(arc_mfu
->arcs_list
[ARC_BUFC_DATA
]);
7646 multilist_destroy(arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
]);
7647 multilist_destroy(arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
]);
7648 multilist_destroy(arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
]);
7650 aggsum_fini(&arc_meta_used
);
7651 aggsum_fini(&arc_size
);
7652 aggsum_fini(&astat_data_size
);
7653 aggsum_fini(&astat_metadata_size
);
7654 aggsum_fini(&astat_hdr_size
);
7655 aggsum_fini(&astat_l2_hdr_size
);
7656 aggsum_fini(&astat_bonus_size
);
7657 aggsum_fini(&astat_dnode_size
);
7658 aggsum_fini(&astat_dbuf_size
);
7662 arc_target_bytes(void)
7670 uint64_t percent
, allmem
= arc_all_memory();
7672 mutex_init(&arc_reclaim_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
7673 cv_init(&arc_reclaim_thread_cv
, NULL
, CV_DEFAULT
, NULL
);
7674 cv_init(&arc_reclaim_waiters_cv
, NULL
, CV_DEFAULT
, NULL
);
7676 arc_min_prefetch_ms
= 1000;
7677 arc_min_prescient_prefetch_ms
= 6000;
7681 * Register a shrinker to support synchronous (direct) memory
7682 * reclaim from the arc. This is done to prevent kswapd from
7683 * swapping out pages when it is preferable to shrink the arc.
7685 spl_register_shrinker(&arc_shrinker
);
7687 /* Set to 1/64 of all memory or a minimum of 512K */
7688 arc_sys_free
= MAX(allmem
/ 64, (512 * 1024));
7692 /* Set max to 1/2 of all memory */
7693 arc_c_max
= allmem
/ 2;
7696 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more */
7697 arc_c_min
= MAX(allmem
/ 32, 2ULL << SPA_MAXBLOCKSHIFT
);
7700 * In userland, there's only the memory pressure that we artificially
7701 * create (see arc_available_memory()). Don't let arc_c get too
7702 * small, because it can cause transactions to be larger than
7703 * arc_c, causing arc_tempreserve_space() to fail.
7705 arc_c_min
= MAX(arc_c_max
/ 2, 2ULL << SPA_MAXBLOCKSHIFT
);
7709 arc_p
= (arc_c
>> 1);
7711 /* Set min to 1/2 of arc_c_min */
7712 arc_meta_min
= 1ULL << SPA_MAXBLOCKSHIFT
;
7713 /* Initialize maximum observed usage to zero */
7716 * Set arc_meta_limit to a percent of arc_c_max with a floor of
7717 * arc_meta_min, and a ceiling of arc_c_max.
7719 percent
= MIN(zfs_arc_meta_limit_percent
, 100);
7720 arc_meta_limit
= MAX(arc_meta_min
, (percent
* arc_c_max
) / 100);
7721 percent
= MIN(zfs_arc_dnode_limit_percent
, 100);
7722 arc_dnode_limit
= (percent
* arc_meta_limit
) / 100;
7724 /* Apply user specified tunings */
7725 arc_tuning_update();
7727 /* if kmem_flags are set, lets try to use less memory */
7728 if (kmem_debugging())
7730 if (arc_c
< arc_c_min
)
7736 list_create(&arc_prune_list
, sizeof (arc_prune_t
),
7737 offsetof(arc_prune_t
, p_node
));
7738 mutex_init(&arc_prune_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
7740 arc_prune_taskq
= taskq_create("arc_prune", max_ncpus
, defclsyspri
,
7741 max_ncpus
, INT_MAX
, TASKQ_PREPOPULATE
| TASKQ_DYNAMIC
);
7743 arc_reclaim_thread_exit
= B_FALSE
;
7745 arc_ksp
= kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED
,
7746 sizeof (arc_stats
) / sizeof (kstat_named_t
), KSTAT_FLAG_VIRTUAL
);
7748 if (arc_ksp
!= NULL
) {
7749 arc_ksp
->ks_data
= &arc_stats
;
7750 arc_ksp
->ks_update
= arc_kstat_update
;
7751 kstat_install(arc_ksp
);
7754 (void) thread_create(NULL
, 0, arc_reclaim_thread
, NULL
, 0, &p0
,
7755 TS_RUN
, defclsyspri
);
7761 * Calculate maximum amount of dirty data per pool.
7763 * If it has been set by a module parameter, take that.
7764 * Otherwise, use a percentage of physical memory defined by
7765 * zfs_dirty_data_max_percent (default 10%) with a cap at
7766 * zfs_dirty_data_max_max (default 4G or 25% of physical memory).
7768 if (zfs_dirty_data_max_max
== 0)
7769 zfs_dirty_data_max_max
= MIN(4ULL * 1024 * 1024 * 1024,
7770 allmem
* zfs_dirty_data_max_max_percent
/ 100);
7772 if (zfs_dirty_data_max
== 0) {
7773 zfs_dirty_data_max
= allmem
*
7774 zfs_dirty_data_max_percent
/ 100;
7775 zfs_dirty_data_max
= MIN(zfs_dirty_data_max
,
7776 zfs_dirty_data_max_max
);
7786 spl_unregister_shrinker(&arc_shrinker
);
7787 #endif /* _KERNEL */
7789 mutex_enter(&arc_reclaim_lock
);
7790 arc_reclaim_thread_exit
= B_TRUE
;
7792 * The reclaim thread will set arc_reclaim_thread_exit back to
7793 * B_FALSE when it is finished exiting; we're waiting for that.
7795 while (arc_reclaim_thread_exit
) {
7796 cv_signal(&arc_reclaim_thread_cv
);
7797 cv_wait(&arc_reclaim_thread_cv
, &arc_reclaim_lock
);
7799 mutex_exit(&arc_reclaim_lock
);
7801 /* Use B_TRUE to ensure *all* buffers are evicted */
7802 arc_flush(NULL
, B_TRUE
);
7806 if (arc_ksp
!= NULL
) {
7807 kstat_delete(arc_ksp
);
7811 taskq_wait(arc_prune_taskq
);
7812 taskq_destroy(arc_prune_taskq
);
7814 mutex_enter(&arc_prune_mtx
);
7815 while ((p
= list_head(&arc_prune_list
)) != NULL
) {
7816 list_remove(&arc_prune_list
, p
);
7817 refcount_remove(&p
->p_refcnt
, &arc_prune_list
);
7818 refcount_destroy(&p
->p_refcnt
);
7819 kmem_free(p
, sizeof (*p
));
7821 mutex_exit(&arc_prune_mtx
);
7823 list_destroy(&arc_prune_list
);
7824 mutex_destroy(&arc_prune_mtx
);
7825 mutex_destroy(&arc_reclaim_lock
);
7826 cv_destroy(&arc_reclaim_thread_cv
);
7827 cv_destroy(&arc_reclaim_waiters_cv
);
7832 ASSERT0(arc_loaned_bytes
);
/*
 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
 * It uses dedicated storage devices to hold cached data, which are populated
 * using large infrequent writes.  The main role of this cache is to boost
 * the performance of random read workloads.  The intended L2ARC devices
 * include short-stroked disks, solid state disks, and other media with
 * substantially faster read latency than disk.
 *
 *                 +-----------------------+
 *                 |         ARC           |
 *                 +-----------------------+
 *                    |         ^     ^
 *                    |         |     |
 *      l2arc_feed_thread()    arc_read()
 *                    |         |     |
 *                    |  l2arc read   |
 *                    V         |     |
 *               +---------------+    |
 *               |     L2ARC     |    |
 *               +---------------+    |
 *                   |    ^   |       |
 *          l2arc_write() |   |       |
 *                   |    |   |       |
 *                   V    |   |       |
 *                 +-------+  +-------+
 *                 | vdev  |  | vdev  |
 *                 | cache |  | cache |
 *                 +-------+  +-------+
 *                 +=========+   .-----.
 *                 :  L2ARC  :   |-_____-|
 *                 : devices :   | Disks |
 *                 +=========+   `-_____-'
 *
 * Read requests are satisfied from the following sources, in order:
 *
 *	1) ARC
 *	2) vdev cache of L2ARC devices
 *	3) L2ARC devices
 *	4) vdev cache of disks
 *	5) disks
 *
 * Some L2ARC device types exhibit extremely slow write performance.
 * To accommodate this there are some significant differences between
 * the L2ARC and traditional cache design:
 *
 * 1. There is no eviction path from the ARC to the L2ARC.  Evictions from
 * the ARC behave as usual, freeing buffers and placing headers on ghost
 * lists.  The ARC does not send buffers to the L2ARC during eviction as
 * this would add inflated write latencies for all ARC memory pressure.
 *
 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
 * It does this by periodically scanning buffers from the eviction-end of
 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
 * not already there.  It scans until a headroom of buffers is satisfied,
 * which itself is a buffer for ARC eviction.  If a compressible buffer is
 * found during scanning and selected for writing to an L2ARC device, we
 * temporarily boost scanning headroom during the next scan cycle to make
 * sure we adapt to compression effects (which might significantly reduce
 * the data volume we write to L2ARC).  The thread that does this is
 * l2arc_feed_thread(), illustrated below; example sizes are included to
 * provide a better sense of ratio than this diagram:
 *
 *	       head -->                        tail
 *	        +---------------------+----------+
 *	ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
 *	        +---------------------+----------+   |   o L2ARC eligible
 *	ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   : ARC buffer
 *	        +---------------------+----------+   |
 *	             15.9 Gbytes      ^ 32 Mbytes    |
 *	                           headroom          |
 *	                                      l2arc_feed_thread()
 *	                                             |
 *	                 l2arc write hand <--[oooo]--'
 *	                         |
 *	                         V
 *	              +==============================+
 *	    L2ARC dev |####|#|###|###|    |####| ... |
 *	              +==============================+
 *	                                 32 Gbytes
 *
 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
 * evicted, then the L2ARC has cached a buffer much sooner than it probably
 * needed to, potentially wasting L2ARC device bandwidth and storage.  It is
 * safe to say that this is an uncommon case, since buffers at the end of
 * the ARC lists have moved there due to inactivity.
 *
 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
 * then the L2ARC simply misses copying some buffers.  This serves as a
 * pressure valve to prevent heavy read workloads from both stalling the ARC
 * with waits and clogging the L2ARC with writes.  This also helps prevent
 * the potential for the L2ARC to churn if it attempts to cache content too
 * quickly, such as during backups of the entire pool.
 *
 * 5. After system boot and before the ARC has filled main memory, there are
 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
 * lists can remain mostly static.  Instead of searching from tail of these
 * lists as pictured, the l2arc_feed_thread() will search from the list heads
 * for eligible buffers, greatly increasing its chance of finding them.
 *
 * The L2ARC device write speed is also boosted during this time so that
 * the L2ARC warms up faster.  Since there have been no ARC evictions yet,
 * there are no L2ARC reads, and no fear of degrading read performance
 * through increased writes.
 *
 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
 * the vdev queue can aggregate them into larger and fewer writes.  Each
 * device is written to in a rotor fashion, sweeping writes through
 * available space then repeating.
 *
 * 7. The L2ARC does not store dirty content.  It never needs to flush
 * write buffers back to disk based storage.
 *
 * 8. If an ARC buffer is written (and dirtied) which also exists in the
 * L2ARC, the now stale L2ARC buffer is immediately dropped.
 *
 * The performance of the L2ARC can be tweaked by a number of tunables, which
 * may be necessary for different workloads:
 *
 *	l2arc_write_max		max write bytes per interval
 *	l2arc_write_boost	extra write bytes during device warmup
 *	l2arc_noprefetch	skip caching prefetched buffers
 *	l2arc_headroom		number of max device writes to precache
 *	l2arc_headroom_boost	when we find compressed buffers during ARC
 *				scanning, we multiply headroom by this
 *				percentage factor for the next scan cycle,
 *				since more compressed buffers are likely to
 *				be present in the scanned lists
 *	l2arc_feed_secs		seconds between L2ARC writing
 *
 * Tunables may be removed or added as future performance improvements are
 * integrated, and also may become zpool properties.
 *
 * There are three key functions that control how the L2ARC warms up:
 *
 *	l2arc_write_eligible()	check if a buffer is eligible to cache
 *	l2arc_write_size()	calculate how much to write
 *	l2arc_write_interval()	calculate sleep delay between writes
 *
 * These three functions determine what to write, how much, and how quickly
 * to send writes.
 */
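/*
 * To make the write-budget and headroom interplay described above concrete,
 * the sketch below is illustrative only (kept out of the build with #if 0).
 * It models one feed cycle in userland with hypothetical tunable values;
 * arc_warm and compressed-ARC support are reduced to plain flags.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
	/* Hypothetical values; names mirror the L2ARC tunables. */
	uint64_t l2arc_write_max = 8ULL << 20;		/* 8 MiB per interval */
	uint64_t l2arc_write_boost = 8ULL << 20;	/* extra bytes while warming up */
	uint64_t l2arc_headroom = 2;			/* device writes to precache */
	uint64_t l2arc_headroom_boost = 200;		/* percent, compressed scans */
	int arc_warm = 0;				/* no ARC evictions yet */
	int compressed_arc_enabled = 1;

	/* Per-cycle write budget: boosted until the ARC starts evicting. */
	uint64_t budget = l2arc_write_max;
	if (!arc_warm)
		budget += l2arc_write_boost;

	/* Scan headroom: how far past the budget the feed thread looks. */
	uint64_t headroom = budget * l2arc_headroom;
	if (compressed_arc_enabled)
		headroom = (headroom * l2arc_headroom_boost) / 100;

	printf("budget=%llu bytes, scan headroom=%llu bytes\n",
	    (unsigned long long)budget, (unsigned long long)headroom);
	return (0);
}
#endif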
static boolean_t
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
{
	/*
	 * A buffer is *not* eligible for the L2ARC if it:
	 * 1. belongs to a different spa.
	 * 2. is already cached on the L2ARC.
	 * 3. has an I/O in progress (it may be an incomplete read).
	 * 4. is flagged not eligible (zfs property).
	 */
	if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
	    HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
		return (B_FALSE);

	return (B_TRUE);
}

static uint64_t
l2arc_write_size(void)
{
	uint64_t size;

	/*
	 * Make sure our globals have meaningful values in case the user
	 * altered them.
	 */
	size = l2arc_write_max;
	if (size == 0) {
		cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
		    "be greater than zero, resetting it to the default (%d)",
		    L2ARC_WRITE_SIZE);
		size = l2arc_write_max = L2ARC_WRITE_SIZE;
	}

	if (arc_warm == B_FALSE)
		size += l2arc_write_boost;

	return (size);
}
static clock_t
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
{
	clock_t interval, next, now;

	/*
	 * If the ARC lists are busy, increase our write rate; if the
	 * lists are stale, idle back.  This is achieved by checking
	 * how much we previously wrote - if it was more than half of
	 * what we wanted, schedule the next write much sooner.
	 */
	if (l2arc_feed_again && wrote > (wanted / 2))
		interval = (hz * l2arc_feed_min_ms) / 1000;
	else
		interval = hz * l2arc_feed_secs;

	now = ddi_get_lbolt();
	next = MAX(now, MIN(now + interval, began + interval));

	return (next);
}
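/*
 * The wakeup arithmetic above never schedules in the past and never later
 * than one interval after the cycle began.  The sketch below is illustrative
 * only (kept out of the build with #if 0); hz, the feed tunables, and the
 * clock readings are hypothetical values chosen to show the "busy lists"
 * fast path.
 */
#if 0
#include <stdint.h>
#include <stdio.h>

#define	MAX(a, b)	((a) > (b) ? (a) : (b))
#define	MIN(a, b)	((a) < (b) ? (a) : (b))

int
main(void)
{
	long hz = 100;				/* ticks per second */
	long l2arc_feed_secs = 1;
	long l2arc_feed_min_ms = 200;
	int l2arc_feed_again = 1;

	long began = 1000;			/* tick when this cycle started */
	long now = 1080;			/* current tick */
	uint64_t wanted = 8ULL << 20;
	uint64_t wrote = 6ULL << 20;		/* busy: wrote > wanted / 2 */

	long interval;
	if (l2arc_feed_again && wrote > (wanted / 2))
		interval = (hz * l2arc_feed_min_ms) / 1000;	/* 20 ticks */
	else
		interval = hz * l2arc_feed_secs;		/* 100 ticks */

	/* Never schedule in the past, never later than began + interval. */
	long next = MAX(now, MIN(now + interval, began + interval));
	printf("next wakeup at tick %ld\n", next);
	return (0);
}
#endif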
/*
 * Cycle through L2ARC devices.  This is how L2ARC load balances.
 * If a device is returned, this also returns holding the spa config lock.
 */
static l2arc_dev_t *
l2arc_dev_get_next(void)
{
	l2arc_dev_t *first, *next = NULL;

	/*
	 * Lock out the removal of spas (spa_namespace_lock), then removal
	 * of cache devices (l2arc_dev_mtx).  Once a device has been selected,
	 * both locks will be dropped and a spa config lock held instead.
	 */
	mutex_enter(&spa_namespace_lock);
	mutex_enter(&l2arc_dev_mtx);

	/* if there are no vdevs, there is nothing to do */
	if (l2arc_ndev == 0)
		goto out;

	first = NULL;
	next = l2arc_dev_last;
	do {
		/* loop around the list looking for a non-faulted vdev */
		if (next == NULL) {
			next = list_head(l2arc_dev_list);
		} else {
			next = list_next(l2arc_dev_list, next);
			if (next == NULL)
				next = list_head(l2arc_dev_list);
		}

		/* if we have come back to the start, bail out */
		if (first == NULL)
			first = next;
		else if (next == first)
			goto out;

	} while (vdev_is_dead(next->l2ad_vdev));

	/* if we were unable to find any usable vdevs, return NULL */
	if (vdev_is_dead(next->l2ad_vdev))
		next = NULL;

	l2arc_dev_last = next;

out:
	mutex_exit(&l2arc_dev_mtx);

	/*
	 * Grab the config lock to prevent the 'next' device from being
	 * removed while we are writing to it.
	 */
	if (next != NULL)
		spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER);
	mutex_exit(&spa_namespace_lock);

	return (next);
}
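/*
 * The device rotor above walks the list from the last device used, wraps to
 * the head, and skips faulted vdevs, giving up once it has come back around
 * to where it started.  The sketch below is illustrative only (kept out of
 * the build with #if 0): it models the same scan over a plain array, with a
 * "faulted" flag standing in for vdev_is_dead() and no locking.
 */
#if 0
#include <stdio.h>

struct toy_dev {
	const char *name;
	int faulted;
};

static int
next_dev(struct toy_dev *devs, int ndev, int last)
{
	int first = -1;
	int next = last;

	do {
		next = (next + 1) % ndev;	/* wrap to the list head */
		if (first == -1)
			first = next;
		else if (next == first)
			return (-1);		/* looped: all devices faulted */
	} while (devs[next].faulted);

	return (next);
}

int
main(void)
{
	struct toy_dev devs[] = {
		{ "cache0", 0 }, { "cache1", 1 }, { "cache2", 0 },
	};
	int last = 0;

	for (int i = 0; i < 4; i++) {
		last = next_dev(devs, 3, last);
		printf("feed cycle %d -> %s\n", i, devs[last].name);
	}
	return (0);
}
#endif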
/*
 * Free buffers that were tagged for destruction.
 */
static void
l2arc_do_free_on_write(void)
{
	list_t *buflist;
	l2arc_data_free_t *df, *df_prev;

	mutex_enter(&l2arc_free_on_write_mtx);
	buflist = l2arc_free_on_write;

	for (df = list_tail(buflist); df; df = df_prev) {
		df_prev = list_prev(buflist, df);
		ASSERT3P(df->l2df_abd, !=, NULL);
		abd_free(df->l2df_abd);
		list_remove(buflist, df);
		kmem_free(df, sizeof (l2arc_data_free_t));
	}

	mutex_exit(&l2arc_free_on_write_mtx);
}
8128 * A write to a cache device has completed. Update all headers to allow
8129 * reads from these buffers to begin.
8132 l2arc_write_done(zio_t
*zio
)
8134 l2arc_write_callback_t
*cb
;
8137 arc_buf_hdr_t
*head
, *hdr
, *hdr_prev
;
8138 kmutex_t
*hash_lock
;
8139 int64_t bytes_dropped
= 0;
8141 cb
= zio
->io_private
;
8142 ASSERT3P(cb
, !=, NULL
);
8143 dev
= cb
->l2wcb_dev
;
8144 ASSERT3P(dev
, !=, NULL
);
8145 head
= cb
->l2wcb_head
;
8146 ASSERT3P(head
, !=, NULL
);
8147 buflist
= &dev
->l2ad_buflist
;
8148 ASSERT3P(buflist
, !=, NULL
);
8149 DTRACE_PROBE2(l2arc__iodone
, zio_t
*, zio
,
8150 l2arc_write_callback_t
*, cb
);
8152 if (zio
->io_error
!= 0)
8153 ARCSTAT_BUMP(arcstat_l2_writes_error
);
8156 * All writes completed, or an error was hit.
8159 mutex_enter(&dev
->l2ad_mtx
);
8160 for (hdr
= list_prev(buflist
, head
); hdr
; hdr
= hdr_prev
) {
8161 hdr_prev
= list_prev(buflist
, hdr
);
8163 hash_lock
= HDR_LOCK(hdr
);
8166 * We cannot use mutex_enter or else we can deadlock
8167 * with l2arc_write_buffers (due to swapping the order
8168 * the hash lock and l2ad_mtx are taken).
8170 if (!mutex_tryenter(hash_lock
)) {
8172 * Missed the hash lock. We must retry so we
8173 * don't leave the ARC_FLAG_L2_WRITING bit set.
8175 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry
);
8178 * We don't want to rescan the headers we've
8179 * already marked as having been written out, so
8180 * we reinsert the head node so we can pick up
8181 * where we left off.
8183 list_remove(buflist
, head
);
8184 list_insert_after(buflist
, hdr
, head
);
8186 mutex_exit(&dev
->l2ad_mtx
);
8189 * We wait for the hash lock to become available
8190 * to try and prevent busy waiting, and increase
8191 * the chance we'll be able to acquire the lock
8192 * the next time around.
8194 mutex_enter(hash_lock
);
8195 mutex_exit(hash_lock
);
8200 * We could not have been moved into the arc_l2c_only
8201 * state while in-flight due to our ARC_FLAG_L2_WRITING
8202 * bit being set. Let's just ensure that's being enforced.
8204 ASSERT(HDR_HAS_L1HDR(hdr
));
8207 * Skipped - drop L2ARC entry and mark the header as no
8208 * longer L2 eligibile.
8210 if (zio
->io_error
!= 0) {
8212 * Error - drop L2ARC entry.
8214 list_remove(buflist
, hdr
);
8215 arc_hdr_clear_flags(hdr
, ARC_FLAG_HAS_L2HDR
);
8217 ARCSTAT_INCR(arcstat_l2_psize
, -arc_hdr_size(hdr
));
8218 ARCSTAT_INCR(arcstat_l2_lsize
, -HDR_GET_LSIZE(hdr
));
8220 bytes_dropped
+= arc_hdr_size(hdr
);
8221 (void) refcount_remove_many(&dev
->l2ad_alloc
,
8222 arc_hdr_size(hdr
), hdr
);
8226 * Allow ARC to begin reads and ghost list evictions to
8229 arc_hdr_clear_flags(hdr
, ARC_FLAG_L2_WRITING
);
8231 mutex_exit(hash_lock
);
8234 atomic_inc_64(&l2arc_writes_done
);
8235 list_remove(buflist
, head
);
8236 ASSERT(!HDR_HAS_L1HDR(head
));
8237 kmem_cache_free(hdr_l2only_cache
, head
);
8238 mutex_exit(&dev
->l2ad_mtx
);
8240 vdev_space_update(dev
->l2ad_vdev
, -bytes_dropped
, 0, 0);
8242 l2arc_do_free_on_write();
8244 kmem_free(cb
, sizeof (l2arc_write_callback_t
));
8248 l2arc_untransform(zio_t
*zio
, l2arc_read_callback_t
*cb
)
8251 spa_t
*spa
= zio
->io_spa
;
8252 arc_buf_hdr_t
*hdr
= cb
->l2rcb_hdr
;
8253 blkptr_t
*bp
= zio
->io_bp
;
8254 uint8_t salt
[ZIO_DATA_SALT_LEN
];
8255 uint8_t iv
[ZIO_DATA_IV_LEN
];
8256 uint8_t mac
[ZIO_DATA_MAC_LEN
];
8257 boolean_t no_crypt
= B_FALSE
;
8260 * ZIL data is never be written to the L2ARC, so we don't need
8261 * special handling for its unique MAC storage.
8263 ASSERT3U(BP_GET_TYPE(bp
), !=, DMU_OT_INTENT_LOG
);
8264 ASSERT(MUTEX_HELD(HDR_LOCK(hdr
)));
8265 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
8268 * If the data was encrypted, decrypt it now. Note that
8269 * we must check the bp here and not the hdr, since the
8270 * hdr does not have its encryption parameters updated
8271 * until arc_read_done().
8273 if (BP_IS_ENCRYPTED(bp
)) {
8274 abd_t
*eabd
= arc_get_data_abd(hdr
, arc_hdr_size(hdr
), hdr
);
8276 zio_crypt_decode_params_bp(bp
, salt
, iv
);
8277 zio_crypt_decode_mac_bp(bp
, mac
);
8279 ret
= spa_do_crypt_abd(B_FALSE
, spa
, &cb
->l2rcb_zb
,
8280 BP_GET_TYPE(bp
), BP_GET_DEDUP(bp
), BP_SHOULD_BYTESWAP(bp
),
8281 salt
, iv
, mac
, HDR_GET_PSIZE(hdr
), eabd
,
8282 hdr
->b_l1hdr
.b_pabd
, &no_crypt
);
8284 arc_free_data_abd(hdr
, eabd
, arc_hdr_size(hdr
), hdr
);
8289 * If we actually performed decryption, replace b_pabd
8290 * with the decrypted data. Otherwise we can just throw
8291 * our decryption buffer away.
8294 arc_free_data_abd(hdr
, hdr
->b_l1hdr
.b_pabd
,
8295 arc_hdr_size(hdr
), hdr
);
8296 hdr
->b_l1hdr
.b_pabd
= eabd
;
8299 arc_free_data_abd(hdr
, eabd
, arc_hdr_size(hdr
), hdr
);
8304 * If the L2ARC block was compressed, but ARC compression
8305 * is disabled we decompress the data into a new buffer and
8306 * replace the existing data.
8308 if (HDR_GET_COMPRESS(hdr
) != ZIO_COMPRESS_OFF
&&
8309 !HDR_COMPRESSION_ENABLED(hdr
)) {
8310 abd_t
*cabd
= arc_get_data_abd(hdr
, arc_hdr_size(hdr
), hdr
);
8311 void *tmp
= abd_borrow_buf(cabd
, arc_hdr_size(hdr
));
8313 ret
= zio_decompress_data(HDR_GET_COMPRESS(hdr
),
8314 hdr
->b_l1hdr
.b_pabd
, tmp
, HDR_GET_PSIZE(hdr
),
8315 HDR_GET_LSIZE(hdr
));
8317 abd_return_buf_copy(cabd
, tmp
, arc_hdr_size(hdr
));
8318 arc_free_data_abd(hdr
, cabd
, arc_hdr_size(hdr
), hdr
);
8322 abd_return_buf_copy(cabd
, tmp
, arc_hdr_size(hdr
));
8323 arc_free_data_abd(hdr
, hdr
->b_l1hdr
.b_pabd
,
8324 arc_hdr_size(hdr
), hdr
);
8325 hdr
->b_l1hdr
.b_pabd
= cabd
;
8327 zio
->io_size
= HDR_GET_LSIZE(hdr
);
8338 * A read to a cache device completed. Validate buffer contents before
8339 * handing over to the regular ARC routines.
8342 l2arc_read_done(zio_t
*zio
)
8345 l2arc_read_callback_t
*cb
= zio
->io_private
;
8347 kmutex_t
*hash_lock
;
8348 boolean_t valid_cksum
;
8349 boolean_t using_rdata
= (BP_IS_ENCRYPTED(&cb
->l2rcb_bp
) &&
8350 (cb
->l2rcb_flags
& ZIO_FLAG_RAW_ENCRYPT
));
8352 ASSERT3P(zio
->io_vd
, !=, NULL
);
8353 ASSERT(zio
->io_flags
& ZIO_FLAG_DONT_PROPAGATE
);
8355 spa_config_exit(zio
->io_spa
, SCL_L2ARC
, zio
->io_vd
);
8357 ASSERT3P(cb
, !=, NULL
);
8358 hdr
= cb
->l2rcb_hdr
;
8359 ASSERT3P(hdr
, !=, NULL
);
8361 hash_lock
= HDR_LOCK(hdr
);
8362 mutex_enter(hash_lock
);
8363 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
8366 * If the data was read into a temporary buffer,
8367 * move it and free the buffer.
8369 if (cb
->l2rcb_abd
!= NULL
) {
8370 ASSERT3U(arc_hdr_size(hdr
), <, zio
->io_size
);
8371 if (zio
->io_error
== 0) {
8373 abd_copy(hdr
->b_crypt_hdr
.b_rabd
,
8374 cb
->l2rcb_abd
, arc_hdr_size(hdr
));
8376 abd_copy(hdr
->b_l1hdr
.b_pabd
,
8377 cb
->l2rcb_abd
, arc_hdr_size(hdr
));
8382 * The following must be done regardless of whether
8383 * there was an error:
8384 * - free the temporary buffer
8385 * - point zio to the real ARC buffer
8386 * - set zio size accordingly
8387 * These are required because zio is either re-used for
8388 * an I/O of the block in the case of the error
8389 * or the zio is passed to arc_read_done() and it
8392 abd_free(cb
->l2rcb_abd
);
8393 zio
->io_size
= zio
->io_orig_size
= arc_hdr_size(hdr
);
8396 ASSERT(HDR_HAS_RABD(hdr
));
8397 zio
->io_abd
= zio
->io_orig_abd
=
8398 hdr
->b_crypt_hdr
.b_rabd
;
8400 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
8401 zio
->io_abd
= zio
->io_orig_abd
= hdr
->b_l1hdr
.b_pabd
;
8405 ASSERT3P(zio
->io_abd
, !=, NULL
);
8408 * Check this survived the L2ARC journey.
8410 ASSERT(zio
->io_abd
== hdr
->b_l1hdr
.b_pabd
||
8411 (HDR_HAS_RABD(hdr
) && zio
->io_abd
== hdr
->b_crypt_hdr
.b_rabd
));
8412 zio
->io_bp_copy
= cb
->l2rcb_bp
; /* XXX fix in L2ARC 2.0 */
8413 zio
->io_bp
= &zio
->io_bp_copy
; /* XXX fix in L2ARC 2.0 */
8415 valid_cksum
= arc_cksum_is_equal(hdr
, zio
);
8418 * b_rabd will always match the data as it exists on disk if it is
8419 * being used. Therefore if we are reading into b_rabd we do not
8420 * attempt to untransform the data.
8422 if (valid_cksum
&& !using_rdata
)
8423 tfm_error
= l2arc_untransform(zio
, cb
);
8425 if (valid_cksum
&& tfm_error
== 0 && zio
->io_error
== 0 &&
8426 !HDR_L2_EVICTED(hdr
)) {
8427 mutex_exit(hash_lock
);
8428 zio
->io_private
= hdr
;
8431 mutex_exit(hash_lock
);
8433 * Buffer didn't survive caching. Increment stats and
8434 * reissue to the original storage device.
8436 if (zio
->io_error
!= 0) {
8437 ARCSTAT_BUMP(arcstat_l2_io_error
);
8439 zio
->io_error
= SET_ERROR(EIO
);
8441 if (!valid_cksum
|| tfm_error
!= 0)
8442 ARCSTAT_BUMP(arcstat_l2_cksum_bad
);
8445 * If there's no waiter, issue an async i/o to the primary
8446 * storage now. If there *is* a waiter, the caller must
8447 * issue the i/o in a context where it's OK to block.
8449 if (zio
->io_waiter
== NULL
) {
8450 zio_t
*pio
= zio_unique_parent(zio
);
8451 void *abd
= (using_rdata
) ?
8452 hdr
->b_crypt_hdr
.b_rabd
: hdr
->b_l1hdr
.b_pabd
;
8454 ASSERT(!pio
|| pio
->io_child_type
== ZIO_CHILD_LOGICAL
);
8456 zio_nowait(zio_read(pio
, zio
->io_spa
, zio
->io_bp
,
8457 abd
, zio
->io_size
, arc_read_done
,
8458 hdr
, zio
->io_priority
, cb
->l2rcb_flags
,
8463 kmem_free(cb
, sizeof (l2arc_read_callback_t
));
/*
 * This is the list priority from which the L2ARC will search for pages to
 * cache.  This is used within loops (0..3) to cycle through lists in the
 * desired order.  This order can have a significant effect on cache
 * performance.
 *
 * Currently the metadata lists are hit first, MFU then MRU, followed by
 * the data lists.  This function returns a locked list, and also returns
 * the lock pointer.
 */
static multilist_sublist_t *
l2arc_sublist_lock(int list_num)
{
	multilist_t *ml = NULL;
	unsigned int idx;

	ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES);

	switch (list_num) {
	case 0:
		ml = arc_mfu->arcs_list[ARC_BUFC_METADATA];
		break;
	case 1:
		ml = arc_mru->arcs_list[ARC_BUFC_METADATA];
		break;
	case 2:
		ml = arc_mfu->arcs_list[ARC_BUFC_DATA];
		break;
	case 3:
		ml = arc_mru->arcs_list[ARC_BUFC_DATA];
		break;
	default:
		return (NULL);
	}

	/*
	 * Return a randomly-selected sublist.  This is acceptable
	 * because the caller feeds only a little bit of data for each
	 * call (8MB).  Subsequent calls will result in different
	 * sublists being selected.
	 */
	idx = multilist_get_random_index(ml);
	return (multilist_sublist_lock(ml, idx));
}
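/*
 * A feed pass walks the four list priorities above in a fixed order.  The
 * sketch below is illustrative only (kept out of the build with #if 0); it
 * simply names the list selected for each list_num value, reducing the scan
 * itself to a print statement.
 */
#if 0
#include <stdio.h>

int
main(void)
{
	static const char *feed_order[] = {
		"MFU metadata",		/* list_num 0 */
		"MRU metadata",		/* list_num 1 */
		"MFU data",		/* list_num 2 */
		"MRU data",		/* list_num 3 */
	};

	for (int try = 0; try < 4; try++)
		printf("pass %d: scan %s list\n", try, feed_order[try]);
	return (0);
}
#endif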
8512 * Evict buffers from the device write hand to the distance specified in
8513 * bytes. This distance may span populated buffers, it may span nothing.
8514 * This is clearing a region on the L2ARC device ready for writing.
8515 * If the 'all' boolean is set, every buffer is evicted.
8518 l2arc_evict(l2arc_dev_t
*dev
, uint64_t distance
, boolean_t all
)
8521 arc_buf_hdr_t
*hdr
, *hdr_prev
;
8522 kmutex_t
*hash_lock
;
8525 buflist
= &dev
->l2ad_buflist
;
8527 if (!all
&& dev
->l2ad_first
) {
8529 * This is the first sweep through the device. There is
8535 if (dev
->l2ad_hand
>= (dev
->l2ad_end
- (2 * distance
))) {
8537 * When nearing the end of the device, evict to the end
8538 * before the device write hand jumps to the start.
8540 taddr
= dev
->l2ad_end
;
8542 taddr
= dev
->l2ad_hand
+ distance
;
8544 DTRACE_PROBE4(l2arc__evict
, l2arc_dev_t
*, dev
, list_t
*, buflist
,
8545 uint64_t, taddr
, boolean_t
, all
);
8548 mutex_enter(&dev
->l2ad_mtx
);
8549 for (hdr
= list_tail(buflist
); hdr
; hdr
= hdr_prev
) {
8550 hdr_prev
= list_prev(buflist
, hdr
);
8552 hash_lock
= HDR_LOCK(hdr
);
8555 * We cannot use mutex_enter or else we can deadlock
8556 * with l2arc_write_buffers (due to swapping the order
8557 * the hash lock and l2ad_mtx are taken).
8559 if (!mutex_tryenter(hash_lock
)) {
8561 * Missed the hash lock. Retry.
8563 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry
);
8564 mutex_exit(&dev
->l2ad_mtx
);
8565 mutex_enter(hash_lock
);
8566 mutex_exit(hash_lock
);
8571 * A header can't be on this list if it doesn't have L2 header.
8573 ASSERT(HDR_HAS_L2HDR(hdr
));
8575 /* Ensure this header has finished being written. */
8576 ASSERT(!HDR_L2_WRITING(hdr
));
8577 ASSERT(!HDR_L2_WRITE_HEAD(hdr
));
8579 if (!all
&& (hdr
->b_l2hdr
.b_daddr
>= taddr
||
8580 hdr
->b_l2hdr
.b_daddr
< dev
->l2ad_hand
)) {
8582 * We've evicted to the target address,
8583 * or the end of the device.
8585 mutex_exit(hash_lock
);
8589 if (!HDR_HAS_L1HDR(hdr
)) {
8590 ASSERT(!HDR_L2_READING(hdr
));
8592 * This doesn't exist in the ARC. Destroy.
8593 * arc_hdr_destroy() will call list_remove()
8594 * and decrement arcstat_l2_lsize.
8596 arc_change_state(arc_anon
, hdr
, hash_lock
);
8597 arc_hdr_destroy(hdr
);
8599 ASSERT(hdr
->b_l1hdr
.b_state
!= arc_l2c_only
);
8600 ARCSTAT_BUMP(arcstat_l2_evict_l1cached
);
8602 * Invalidate issued or about to be issued
8603 * reads, since we may be about to write
8604 * over this location.
8606 if (HDR_L2_READING(hdr
)) {
8607 ARCSTAT_BUMP(arcstat_l2_evict_reading
);
8608 arc_hdr_set_flags(hdr
, ARC_FLAG_L2_EVICTED
);
8611 arc_hdr_l2hdr_destroy(hdr
);
8613 mutex_exit(hash_lock
);
8615 mutex_exit(&dev
->l2ad_mtx
);
8619 * Handle any abd transforms that might be required for writing to the L2ARC.
8620 * If successful, this function will always return an abd with the data
8621 * transformed as it is on disk in a new abd of asize bytes.
8624 l2arc_apply_transforms(spa_t
*spa
, arc_buf_hdr_t
*hdr
, uint64_t asize
,
8629 abd_t
*cabd
= NULL
, *eabd
= NULL
, *to_write
= hdr
->b_l1hdr
.b_pabd
;
8630 enum zio_compress compress
= HDR_GET_COMPRESS(hdr
);
8631 uint64_t psize
= HDR_GET_PSIZE(hdr
);
8632 uint64_t size
= arc_hdr_size(hdr
);
8633 boolean_t ismd
= HDR_ISTYPE_METADATA(hdr
);
8634 boolean_t bswap
= (hdr
->b_l1hdr
.b_byteswap
!= DMU_BSWAP_NUMFUNCS
);
8635 dsl_crypto_key_t
*dck
= NULL
;
8636 uint8_t mac
[ZIO_DATA_MAC_LEN
] = { 0 };
8637 boolean_t no_crypt
= B_FALSE
;
8639 ASSERT((HDR_GET_COMPRESS(hdr
) != ZIO_COMPRESS_OFF
&&
8640 !HDR_COMPRESSION_ENABLED(hdr
)) ||
8641 HDR_ENCRYPTED(hdr
) || HDR_SHARED_DATA(hdr
) || psize
!= asize
);
8642 ASSERT3U(psize
, <=, asize
);
8645 * If this data simply needs its own buffer, we simply allocate it
8646 * and copy the data. This may be done to elimiate a depedency on a
8647 * shared buffer or to reallocate the buffer to match asize.
8649 if (HDR_HAS_RABD(hdr
) && asize
!= psize
) {
8650 ASSERT3U(asize
, >=, psize
);
8651 to_write
= abd_alloc_for_io(asize
, ismd
);
8652 abd_copy(to_write
, hdr
->b_crypt_hdr
.b_rabd
, psize
);
8654 abd_zero_off(to_write
, psize
, asize
- psize
);
8658 if ((compress
== ZIO_COMPRESS_OFF
|| HDR_COMPRESSION_ENABLED(hdr
)) &&
8659 !HDR_ENCRYPTED(hdr
)) {
8660 ASSERT3U(size
, ==, psize
);
8661 to_write
= abd_alloc_for_io(asize
, ismd
);
8662 abd_copy(to_write
, hdr
->b_l1hdr
.b_pabd
, size
);
8664 abd_zero_off(to_write
, size
, asize
- size
);
8668 if (compress
!= ZIO_COMPRESS_OFF
&& !HDR_COMPRESSION_ENABLED(hdr
)) {
8669 cabd
= abd_alloc_for_io(asize
, ismd
);
8670 tmp
= abd_borrow_buf(cabd
, asize
);
8672 psize
= zio_compress_data(compress
, to_write
, tmp
, size
);
8673 ASSERT3U(psize
, <=, HDR_GET_PSIZE(hdr
));
8675 bzero((char *)tmp
+ psize
, asize
- psize
);
8676 psize
= HDR_GET_PSIZE(hdr
);
8677 abd_return_buf_copy(cabd
, tmp
, asize
);
8681 if (HDR_ENCRYPTED(hdr
)) {
8682 eabd
= abd_alloc_for_io(asize
, ismd
);
8685 * If the dataset was disowned before the buffer
8686 * made it to this point, the key to re-encrypt
8687 * it won't be available. In this case we simply
8688 * won't write the buffer to the L2ARC.
8690 ret
= spa_keystore_lookup_key(spa
, hdr
->b_crypt_hdr
.b_dsobj
,
8695 ret
= zio_do_crypt_abd(B_TRUE
, &dck
->dck_key
,
8696 hdr
->b_crypt_hdr
.b_ot
, bswap
, hdr
->b_crypt_hdr
.b_salt
,
8697 hdr
->b_crypt_hdr
.b_iv
, mac
, psize
, to_write
, eabd
,
8703 abd_copy(eabd
, to_write
, psize
);
8706 abd_zero_off(eabd
, psize
, asize
- psize
);
8708 /* assert that the MAC we got here matches the one we saved */
8709 ASSERT0(bcmp(mac
, hdr
->b_crypt_hdr
.b_mac
, ZIO_DATA_MAC_LEN
));
8710 spa_keystore_dsl_key_rele(spa
, dck
, FTAG
);
8712 if (to_write
== cabd
)
8719 ASSERT3P(to_write
, !=, hdr
->b_l1hdr
.b_pabd
);
8720 *abd_out
= to_write
;
8725 spa_keystore_dsl_key_rele(spa
, dck
, FTAG
);
8736 * Find and write ARC buffers to the L2ARC device.
8738 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
8739 * for reading until they have completed writing.
8740 * The headroom_boost is an in-out parameter used to maintain headroom boost
8741 * state between calls to this function.
8743 * Returns the number of bytes actually written (which may be smaller than
8744 * the delta by which the device hand has changed due to alignment).
8747 l2arc_write_buffers(spa_t
*spa
, l2arc_dev_t
*dev
, uint64_t target_sz
)
8749 arc_buf_hdr_t
*hdr
, *hdr_prev
, *head
;
8750 uint64_t write_asize
, write_psize
, write_lsize
, headroom
;
8752 l2arc_write_callback_t
*cb
;
8754 uint64_t guid
= spa_load_guid(spa
);
8756 ASSERT3P(dev
->l2ad_vdev
, !=, NULL
);
8759 write_lsize
= write_asize
= write_psize
= 0;
8761 head
= kmem_cache_alloc(hdr_l2only_cache
, KM_PUSHPAGE
);
8762 arc_hdr_set_flags(head
, ARC_FLAG_L2_WRITE_HEAD
| ARC_FLAG_HAS_L2HDR
);
8765 * Copy buffers for L2ARC writing.
8767 for (int try = 0; try < L2ARC_FEED_TYPES
; try++) {
8768 multilist_sublist_t
*mls
= l2arc_sublist_lock(try);
8769 uint64_t passed_sz
= 0;
8771 VERIFY3P(mls
, !=, NULL
);
8774 * L2ARC fast warmup.
8776 * Until the ARC is warm and starts to evict, read from the
8777 * head of the ARC lists rather than the tail.
8779 if (arc_warm
== B_FALSE
)
8780 hdr
= multilist_sublist_head(mls
);
8782 hdr
= multilist_sublist_tail(mls
);
8784 headroom
= target_sz
* l2arc_headroom
;
8785 if (zfs_compressed_arc_enabled
)
8786 headroom
= (headroom
* l2arc_headroom_boost
) / 100;
8788 for (; hdr
; hdr
= hdr_prev
) {
8789 kmutex_t
*hash_lock
;
8790 abd_t
*to_write
= NULL
;
8792 if (arc_warm
== B_FALSE
)
8793 hdr_prev
= multilist_sublist_next(mls
, hdr
);
8795 hdr_prev
= multilist_sublist_prev(mls
, hdr
);
8797 hash_lock
= HDR_LOCK(hdr
);
8798 if (!mutex_tryenter(hash_lock
)) {
8800 * Skip this buffer rather than waiting.
8805 passed_sz
+= HDR_GET_LSIZE(hdr
);
8806 if (passed_sz
> headroom
) {
8810 mutex_exit(hash_lock
);
8814 if (!l2arc_write_eligible(guid
, hdr
)) {
8815 mutex_exit(hash_lock
);
8820 * We rely on the L1 portion of the header below, so
8821 * it's invalid for this header to have been evicted out
8822 * of the ghost cache, prior to being written out. The
8823 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
8825 ASSERT(HDR_HAS_L1HDR(hdr
));
8827 ASSERT3U(HDR_GET_PSIZE(hdr
), >, 0);
8828 ASSERT3U(arc_hdr_size(hdr
), >, 0);
8829 ASSERT(hdr
->b_l1hdr
.b_pabd
!= NULL
||
8831 uint64_t psize
= HDR_GET_PSIZE(hdr
);
8832 uint64_t asize
= vdev_psize_to_asize(dev
->l2ad_vdev
,
8835 if ((write_asize
+ asize
) > target_sz
) {
8837 mutex_exit(hash_lock
);
8842 * We rely on the L1 portion of the header below, so
8843 * it's invalid for this header to have been evicted out
8844 * of the ghost cache, prior to being written out. The
8845 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
8847 arc_hdr_set_flags(hdr
, ARC_FLAG_L2_WRITING
);
8848 ASSERT(HDR_HAS_L1HDR(hdr
));
8850 ASSERT3U(HDR_GET_PSIZE(hdr
), >, 0);
8851 ASSERT(hdr
->b_l1hdr
.b_pabd
!= NULL
||
8853 ASSERT3U(arc_hdr_size(hdr
), >, 0);
8856 * If this header has b_rabd, we can use this since it
8857 * must always match the data exactly as it exists on
8858 * disk. Otherwise, the L2ARC can normally use the
8859 * hdr's data, but if we're sharing data between the
8860 * hdr and one of its bufs, L2ARC needs its own copy of
8861 * the data so that the ZIO below can't race with the
8862 * buf consumer. To ensure that this copy will be
8863 * available for the lifetime of the ZIO and be cleaned
8864 * up afterwards, we add it to the l2arc_free_on_write
8865 * queue. If we need to apply any transforms to the
8866 * data (compression, encryption) we will also need the
8869 if (HDR_HAS_RABD(hdr
) && psize
== asize
) {
8870 to_write
= hdr
->b_crypt_hdr
.b_rabd
;
8871 } else if ((HDR_COMPRESSION_ENABLED(hdr
) ||
8872 HDR_GET_COMPRESS(hdr
) == ZIO_COMPRESS_OFF
) &&
8873 !HDR_ENCRYPTED(hdr
) && !HDR_SHARED_DATA(hdr
) &&
8875 to_write
= hdr
->b_l1hdr
.b_pabd
;
8878 arc_buf_contents_t type
= arc_buf_type(hdr
);
8880 ret
= l2arc_apply_transforms(spa
, hdr
, asize
,
8883 arc_hdr_clear_flags(hdr
,
8884 ARC_FLAG_L2_WRITING
);
8885 mutex_exit(hash_lock
);
8889 l2arc_free_abd_on_write(to_write
, asize
, type
);
8894 * Insert a dummy header on the buflist so
8895 * l2arc_write_done() can find where the
8896 * write buffers begin without searching.
8898 mutex_enter(&dev
->l2ad_mtx
);
8899 list_insert_head(&dev
->l2ad_buflist
, head
);
8900 mutex_exit(&dev
->l2ad_mtx
);
8903 sizeof (l2arc_write_callback_t
), KM_SLEEP
);
8904 cb
->l2wcb_dev
= dev
;
8905 cb
->l2wcb_head
= head
;
8906 pio
= zio_root(spa
, l2arc_write_done
, cb
,
8910 hdr
->b_l2hdr
.b_dev
= dev
;
8911 hdr
->b_l2hdr
.b_hits
= 0;
8913 hdr
->b_l2hdr
.b_daddr
= dev
->l2ad_hand
;
8914 arc_hdr_set_flags(hdr
, ARC_FLAG_HAS_L2HDR
);
8916 mutex_enter(&dev
->l2ad_mtx
);
8917 list_insert_head(&dev
->l2ad_buflist
, hdr
);
8918 mutex_exit(&dev
->l2ad_mtx
);
8920 (void) refcount_add_many(&dev
->l2ad_alloc
,
8921 arc_hdr_size(hdr
), hdr
);
8923 wzio
= zio_write_phys(pio
, dev
->l2ad_vdev
,
8924 hdr
->b_l2hdr
.b_daddr
, asize
, to_write
,
8925 ZIO_CHECKSUM_OFF
, NULL
, hdr
,
8926 ZIO_PRIORITY_ASYNC_WRITE
,
8927 ZIO_FLAG_CANFAIL
, B_FALSE
);
8929 write_lsize
+= HDR_GET_LSIZE(hdr
);
8930 DTRACE_PROBE2(l2arc__write
, vdev_t
*, dev
->l2ad_vdev
,
8933 write_psize
+= psize
;
8934 write_asize
+= asize
;
8935 dev
->l2ad_hand
+= asize
;
8937 mutex_exit(hash_lock
);
8939 (void) zio_nowait(wzio
);
8942 multilist_sublist_unlock(mls
);
8948 /* No buffers selected for writing? */
8950 ASSERT0(write_lsize
);
8951 ASSERT(!HDR_HAS_L1HDR(head
));
8952 kmem_cache_free(hdr_l2only_cache
, head
);
8956 ASSERT3U(write_asize
, <=, target_sz
);
8957 ARCSTAT_BUMP(arcstat_l2_writes_sent
);
8958 ARCSTAT_INCR(arcstat_l2_write_bytes
, write_psize
);
8959 ARCSTAT_INCR(arcstat_l2_lsize
, write_lsize
);
8960 ARCSTAT_INCR(arcstat_l2_psize
, write_psize
);
8961 vdev_space_update(dev
->l2ad_vdev
, write_psize
, 0, 0);
8964 * Bump device hand to the device start if it is approaching the end.
8965 * l2arc_evict() will already have evicted ahead for this case.
8967 if (dev
->l2ad_hand
>= (dev
->l2ad_end
- target_sz
)) {
8968 dev
->l2ad_hand
= dev
->l2ad_start
;
8969 dev
->l2ad_first
= B_FALSE
;
8972 dev
->l2ad_writing
= B_TRUE
;
8973 (void) zio_wait(pio
);
8974 dev
->l2ad_writing
= B_FALSE
;
8976 return (write_asize
);
8980 * This thread feeds the L2ARC at regular intervals. This is the beating
8981 * heart of the L2ARC.
8985 l2arc_feed_thread(void *unused
)
8990 uint64_t size
, wrote
;
8991 clock_t begin
, next
= ddi_get_lbolt();
8992 fstrans_cookie_t cookie
;
8994 CALLB_CPR_INIT(&cpr
, &l2arc_feed_thr_lock
, callb_generic_cpr
, FTAG
);
8996 mutex_enter(&l2arc_feed_thr_lock
);
8998 cookie
= spl_fstrans_mark();
8999 while (l2arc_thread_exit
== 0) {
9000 CALLB_CPR_SAFE_BEGIN(&cpr
);
9001 (void) cv_timedwait_sig(&l2arc_feed_thr_cv
,
9002 &l2arc_feed_thr_lock
, next
);
9003 CALLB_CPR_SAFE_END(&cpr
, &l2arc_feed_thr_lock
);
9004 next
= ddi_get_lbolt() + hz
;
9007 * Quick check for L2ARC devices.
9009 mutex_enter(&l2arc_dev_mtx
);
9010 if (l2arc_ndev
== 0) {
9011 mutex_exit(&l2arc_dev_mtx
);
9014 mutex_exit(&l2arc_dev_mtx
);
9015 begin
= ddi_get_lbolt();
9018 * This selects the next l2arc device to write to, and in
9019 * doing so the next spa to feed from: dev->l2ad_spa. This
9020 * will return NULL if there are now no l2arc devices or if
9021 * they are all faulted.
9023 * If a device is returned, its spa's config lock is also
9024 * held to prevent device removal. l2arc_dev_get_next()
9025 * will grab and release l2arc_dev_mtx.
9027 if ((dev
= l2arc_dev_get_next()) == NULL
)
9030 spa
= dev
->l2ad_spa
;
9031 ASSERT3P(spa
, !=, NULL
);
9034 * If the pool is read-only then force the feed thread to
9035 * sleep a little longer.
9037 if (!spa_writeable(spa
)) {
9038 next
= ddi_get_lbolt() + 5 * l2arc_feed_secs
* hz
;
9039 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9044 * Avoid contributing to memory pressure.
9046 if (arc_reclaim_needed()) {
9047 ARCSTAT_BUMP(arcstat_l2_abort_lowmem
);
9048 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9052 ARCSTAT_BUMP(arcstat_l2_feeds
);
9054 size
= l2arc_write_size();
9057 * Evict L2ARC buffers that will be overwritten.
9059 l2arc_evict(dev
, size
, B_FALSE
);
9062 * Write ARC buffers.
9064 wrote
= l2arc_write_buffers(spa
, dev
, size
);
9067 * Calculate interval between writes.
9069 next
= l2arc_write_interval(begin
, size
, wrote
);
9070 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9072 spl_fstrans_unmark(cookie
);
9074 l2arc_thread_exit
= 0;
9075 cv_broadcast(&l2arc_feed_thr_cv
);
9076 CALLB_CPR_EXIT(&cpr
); /* drops l2arc_feed_thr_lock */
boolean_t
l2arc_vdev_present(vdev_t *vd)
{
	l2arc_dev_t *dev;

	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev != NULL;
	    dev = list_next(l2arc_dev_list, dev)) {
		if (dev->l2ad_vdev == vd)
			break;
	}
	mutex_exit(&l2arc_dev_mtx);

	return (dev != NULL);
}
/*
 * Add a vdev for use by the L2ARC.  By this point the spa has already
 * validated the vdev and opened it.
 */
void
l2arc_add_vdev(spa_t *spa, vdev_t *vd)
{
	l2arc_dev_t *adddev;

	ASSERT(!l2arc_vdev_present(vd));

	/*
	 * Create a new l2arc device entry.
	 */
	adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP);
	adddev->l2ad_spa = spa;
	adddev->l2ad_vdev = vd;
	adddev->l2ad_start = VDEV_LABEL_START_SIZE;
	adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd);
	adddev->l2ad_hand = adddev->l2ad_start;
	adddev->l2ad_first = B_TRUE;
	adddev->l2ad_writing = B_FALSE;
	list_link_init(&adddev->l2ad_node);

	mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL);
	/*
	 * This is a list of all ARC buffers that are still valid on the
	 * device.
	 */
	list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t),
	    offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node));

	vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand);
	refcount_create(&adddev->l2ad_alloc);

	/*
	 * Add device to global list
	 */
	mutex_enter(&l2arc_dev_mtx);
	list_insert_head(l2arc_dev_list, adddev);
	atomic_inc_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);
}
/*
 * Remove a vdev from the L2ARC.
 */
void
l2arc_remove_vdev(vdev_t *vd)
{
	l2arc_dev_t *dev, *nextdev, *remdev = NULL;

	/*
	 * Find the device by vdev
	 */
	mutex_enter(&l2arc_dev_mtx);
	for (dev = list_head(l2arc_dev_list); dev; dev = nextdev) {
		nextdev = list_next(l2arc_dev_list, dev);
		if (vd == dev->l2ad_vdev) {
			remdev = dev;
			break;
		}
	}
	ASSERT3P(remdev, !=, NULL);

	/*
	 * Remove device from global list
	 */
	list_remove(l2arc_dev_list, remdev);
	l2arc_dev_last = NULL;	/* may have been invalidated */
	atomic_dec_64(&l2arc_ndev);
	mutex_exit(&l2arc_dev_mtx);

	/*
	 * Clear all buflists and ARC references.  L2ARC device flush.
	 */
	l2arc_evict(remdev, 0, B_TRUE);
	list_destroy(&remdev->l2ad_buflist);
	mutex_destroy(&remdev->l2ad_mtx);
	refcount_destroy(&remdev->l2ad_alloc);
	kmem_free(remdev, sizeof (l2arc_dev_t));
}
9182 l2arc_thread_exit
= 0;
9184 l2arc_writes_sent
= 0;
9185 l2arc_writes_done
= 0;
9187 mutex_init(&l2arc_feed_thr_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
9188 cv_init(&l2arc_feed_thr_cv
, NULL
, CV_DEFAULT
, NULL
);
9189 mutex_init(&l2arc_dev_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
9190 mutex_init(&l2arc_free_on_write_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
9192 l2arc_dev_list
= &L2ARC_dev_list
;
9193 l2arc_free_on_write
= &L2ARC_free_on_write
;
9194 list_create(l2arc_dev_list
, sizeof (l2arc_dev_t
),
9195 offsetof(l2arc_dev_t
, l2ad_node
));
9196 list_create(l2arc_free_on_write
, sizeof (l2arc_data_free_t
),
9197 offsetof(l2arc_data_free_t
, l2df_list_node
));
9204 * This is called from dmu_fini(), which is called from spa_fini();
9205 * Because of this, we can assume that all l2arc devices have
9206 * already been removed when the pools themselves were removed.
9209 l2arc_do_free_on_write();
9211 mutex_destroy(&l2arc_feed_thr_lock
);
9212 cv_destroy(&l2arc_feed_thr_cv
);
9213 mutex_destroy(&l2arc_dev_mtx
);
9214 mutex_destroy(&l2arc_free_on_write_mtx
);
9216 list_destroy(l2arc_dev_list
);
9217 list_destroy(l2arc_free_on_write
);
9223 if (!(spa_mode_global
& FWRITE
))
9226 (void) thread_create(NULL
, 0, l2arc_feed_thread
, NULL
, 0, &p0
,
9227 TS_RUN
, defclsyspri
);
9233 if (!(spa_mode_global
& FWRITE
))
9236 mutex_enter(&l2arc_feed_thr_lock
);
9237 cv_signal(&l2arc_feed_thr_cv
); /* kick thread out of startup */
9238 l2arc_thread_exit
= 1;
9239 while (l2arc_thread_exit
!= 0)
9240 cv_wait(&l2arc_feed_thr_cv
, &l2arc_feed_thr_lock
);
9241 mutex_exit(&l2arc_feed_thr_lock
);
9244 #if defined(_KERNEL)
9245 EXPORT_SYMBOL(arc_buf_size
);
9246 EXPORT_SYMBOL(arc_write
);
9247 EXPORT_SYMBOL(arc_read
);
9248 EXPORT_SYMBOL(arc_buf_info
);
9249 EXPORT_SYMBOL(arc_getbuf_func
);
9250 EXPORT_SYMBOL(arc_add_prune_callback
);
9251 EXPORT_SYMBOL(arc_remove_prune_callback
);
9254 module_param(zfs_arc_min
, ulong
, 0644);
9255 MODULE_PARM_DESC(zfs_arc_min
, "Min arc size");
9257 module_param(zfs_arc_max
, ulong
, 0644);
9258 MODULE_PARM_DESC(zfs_arc_max
, "Max arc size");
9260 module_param(zfs_arc_meta_limit
, ulong
, 0644);
9261 MODULE_PARM_DESC(zfs_arc_meta_limit
, "Meta limit for arc size");
9263 module_param(zfs_arc_meta_limit_percent
, ulong
, 0644);
9264 MODULE_PARM_DESC(zfs_arc_meta_limit_percent
,
9265 "Percent of arc size for arc meta limit");
9267 module_param(zfs_arc_meta_min
, ulong
, 0644);
9268 MODULE_PARM_DESC(zfs_arc_meta_min
, "Min arc metadata");
9270 module_param(zfs_arc_meta_prune
, int, 0644);
9271 MODULE_PARM_DESC(zfs_arc_meta_prune
, "Meta objects to scan for prune");
9273 module_param(zfs_arc_meta_adjust_restarts
, int, 0644);
9274 MODULE_PARM_DESC(zfs_arc_meta_adjust_restarts
,
9275 "Limit number of restarts in arc_adjust_meta");
9277 module_param(zfs_arc_meta_strategy
, int, 0644);
9278 MODULE_PARM_DESC(zfs_arc_meta_strategy
, "Meta reclaim strategy");
9280 module_param(zfs_arc_grow_retry
, int, 0644);
9281 MODULE_PARM_DESC(zfs_arc_grow_retry
, "Seconds before growing arc size");
9283 module_param(zfs_arc_p_dampener_disable
, int, 0644);
9284 MODULE_PARM_DESC(zfs_arc_p_dampener_disable
, "disable arc_p adapt dampener");
9286 module_param(zfs_arc_shrink_shift
, int, 0644);
9287 MODULE_PARM_DESC(zfs_arc_shrink_shift
, "log2(fraction of arc to reclaim)");
9289 module_param(zfs_arc_pc_percent
, uint
, 0644);
9290 MODULE_PARM_DESC(zfs_arc_pc_percent
,
9291 "Percent of pagecache to reclaim arc to");
9293 module_param(zfs_arc_p_min_shift
, int, 0644);
9294 MODULE_PARM_DESC(zfs_arc_p_min_shift
, "arc_c shift to calc min/max arc_p");
9296 module_param(zfs_arc_average_blocksize
, int, 0444);
9297 MODULE_PARM_DESC(zfs_arc_average_blocksize
, "Target average block size");
9299 module_param(zfs_compressed_arc_enabled
, int, 0644);
9300 MODULE_PARM_DESC(zfs_compressed_arc_enabled
, "Disable compressed arc buffers");
9302 module_param(zfs_arc_min_prefetch_ms
, int, 0644);
9303 MODULE_PARM_DESC(zfs_arc_min_prefetch_ms
, "Min life of prefetch block in ms");
9305 module_param(zfs_arc_min_prescient_prefetch_ms
, int, 0644);
9306 MODULE_PARM_DESC(zfs_arc_min_prescient_prefetch_ms
,
9307 "Min life of prescient prefetched block in ms");
9309 module_param(l2arc_write_max
, ulong
, 0644);
9310 MODULE_PARM_DESC(l2arc_write_max
, "Max write bytes per interval");
9312 module_param(l2arc_write_boost
, ulong
, 0644);
9313 MODULE_PARM_DESC(l2arc_write_boost
, "Extra write bytes during device warmup");
9315 module_param(l2arc_headroom
, ulong
, 0644);
9316 MODULE_PARM_DESC(l2arc_headroom
, "Number of max device writes to precache");
9318 module_param(l2arc_headroom_boost
, ulong
, 0644);
9319 MODULE_PARM_DESC(l2arc_headroom_boost
, "Compressed l2arc_headroom multiplier");
9321 module_param(l2arc_feed_secs
, ulong
, 0644);
9322 MODULE_PARM_DESC(l2arc_feed_secs
, "Seconds between L2ARC writing");
9324 module_param(l2arc_feed_min_ms
, ulong
, 0644);
9325 MODULE_PARM_DESC(l2arc_feed_min_ms
, "Min feed interval in milliseconds");
9327 module_param(l2arc_noprefetch
, int, 0644);
9328 MODULE_PARM_DESC(l2arc_noprefetch
, "Skip caching prefetched buffers");
9330 module_param(l2arc_feed_again
, int, 0644);
9331 MODULE_PARM_DESC(l2arc_feed_again
, "Turbo L2ARC warmup");
9333 module_param(l2arc_norw
, int, 0644);
9334 MODULE_PARM_DESC(l2arc_norw
, "No reads during writes");
9336 module_param(zfs_arc_lotsfree_percent
, int, 0644);
9337 MODULE_PARM_DESC(zfs_arc_lotsfree_percent
,
9338 "System free memory I/O throttle in bytes");
9340 module_param(zfs_arc_sys_free
, ulong
, 0644);
9341 MODULE_PARM_DESC(zfs_arc_sys_free
, "System free memory target size in bytes");
9343 module_param(zfs_arc_dnode_limit
, ulong
, 0644);
9344 MODULE_PARM_DESC(zfs_arc_dnode_limit
, "Minimum bytes of dnodes in arc");
9346 module_param(zfs_arc_dnode_limit_percent
, ulong
, 0644);
9347 MODULE_PARM_DESC(zfs_arc_dnode_limit_percent
,
9348 "Percent of ARC meta buffers for dnodes");
9350 module_param(zfs_arc_dnode_reduce_percent
, ulong
, 0644);
9351 MODULE_PARM_DESC(zfs_arc_dnode_reduce_percent
,
9352 "Percentage of excess dnodes to try to unpin");