4 * The contents of this file are subject to the terms of the
5 * Common Development and Distribution License (the "License").
6 * You may not use this file except in compliance with the License.
8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
9 * or http://www.opensolaris.org/os/licensing.
10 * See the License for the specific language governing permissions
11 * and limitations under the License.
13 * When distributing Covered Code, include this CDDL HEADER in each
14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
15 * If applicable, add the following below this CDDL HEADER, with the
16 * fields enclosed by brackets "[]" replaced with your own identifying
17 * information: Portions Copyright [yyyy] [name of copyright owner]
22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
23 * Copyright (c) 2012, Joyent, Inc. All rights reserved.
24 * Copyright (c) 2011, 2018 by Delphix. All rights reserved.
25 * Copyright (c) 2014 by Saso Kiselkov. All rights reserved.
26 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
30 * DVA-based Adjustable Replacement Cache
32 * While much of the theory of operation used here is
33 * based on the self-tuning, low overhead replacement cache
34 * presented by Megiddo and Modha at FAST 2003, there are some
35 * significant differences:
37 * 1. The Megiddo and Modha model assumes any page is evictable.
38 * Pages in its cache cannot be "locked" into memory. This makes
39 * the eviction algorithm simple: evict the last page in the list.
40 * This also makes the performance characteristics easy to reason
41 * about. Our cache is not so simple. At any given moment, some
42 * subset of the blocks in the cache are un-evictable because we
43 * have handed out a reference to them. Blocks are only evictable
44 * when there are no external references active. This makes
45 * eviction far more problematic: we choose to evict the evictable
46 * blocks that are the "lowest" in the list.
48 * There are times when it is not possible to evict the requested
49 * space. In these circumstances we are unable to adjust the cache
50 * size. To prevent the cache growing unbounded at these times we
51 * implement a "cache throttle" that slows the flow of new data
52 * into the cache until we can make space available.
54 * 2. The Megiddo and Modha model assumes a fixed cache size.
55 * Pages are evicted when the cache is full and there is a cache
56 * miss. Our model has a variable sized cache. It grows with
57 * high use, but also tries to react to memory pressure from the
58 * operating system: decreasing its size when system memory is low.
61 * 3. The Megiddo and Modha model assumes a fixed page size. All
62 * elements of the cache are therefore exactly the same size. So
63 * when adjusting the cache size following a cache miss, it's simply
64 * a matter of choosing a single page to evict. In our model, we
65 * have variable sized cache blocks (ranging from 512 bytes to
66 * 128K bytes). We therefore choose a set of blocks to evict to make
67 * space for a cache miss that approximates as closely as possible
68 * the space used by the new block.
70 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
71 * by N. Megiddo & D. Modha, FAST 2003
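 *
 * As a sketch only (the real logic lives in the eviction code, e.g.
 * arc_evict_state()), making room for a miss of "needed" bytes conceptually
 * walks the evictable end of a list and frees headers until enough space
 * has been reclaimed; next_evictable_header() and evict_header() are
 * placeholders for that list-walking and header-freeing logic, not real
 * functions:
 *
 *	uint64_t freed = 0;
 *	arc_buf_hdr_t *hdr;
 *	while (freed < needed &&
 *	    (hdr = next_evictable_header()) != NULL) {
 *		freed += HDR_GET_LSIZE(hdr);
 *		evict_header(hdr);
 *	}
 *
 * If the loop runs out of evictable headers, the cache throttle described
 * in point 1 above kicks in instead.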
77 * A new reference to a cache buffer can be obtained in two
78 * ways: 1) via a hash table lookup using the DVA as a key,
79 * or 2) via one of the ARC lists. The arc_read() interface
80 * uses method 1, while the internal ARC algorithms for
81 * adjusting the cache use method 2. We therefore provide two
82 * types of locks: 1) the hash table lock array, and 2) the
85 * Buffers do not have their own mutexes, rather they rely on the
86 * hash table mutexes for the bulk of their protection (i.e. most
87 * fields in the arc_buf_hdr_t are protected by these mutexes).
89 * buf_hash_find() returns the appropriate mutex (held) when it
90 * locates the requested buffer in the hash table. It returns
91 * NULL for the mutex if the buffer was not in the table.
93 * buf_hash_remove() expects the appropriate hash mutex to be
94 * already held before it is invoked.
96 * Each ARC state also has a mutex which is used to protect the
97 * buffer list associated with the state. When attempting to
98 * obtain a hash table lock while holding an ARC list lock you
99 * must use mutex_tryenter() to avoid deadlock. Also note that
100 * the active state mutex must be held before the ghost state mutex.
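 *
 * As a minimal sketch of that rule (illustrative only, not the actual
 * eviction code), with an ARC list lock already held the hash lock is only
 * tried, never blocked on, and a contended header is simply skipped:
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *	if (mutex_tryenter(hash_lock)) {
 *		... examine and possibly evict hdr ...
 *		mutex_exit(hash_lock);
 *	}
 *
 * Headers skipped this way are counted in the "mutex_miss" kstat.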
102 * It is also possible to register a callback which is run when the
103 * arc_meta_limit is reached and no buffers can be safely evicted. In
104 * this case the arc user should drop a reference on some arc buffers so
105 * they can be reclaimed and the arc_meta_limit honored. For example,
106 * when using the ZPL each dentry holds a reference on a znode. These
107 * dentries must be pruned before the arc buffer holding the znode can be safely evicted.
110 * Note that the majority of the performance stats are manipulated
111 * with atomic operations.
113 * The L2ARC uses the l2ad_mtx on each vdev for the following:
115 * - L2ARC buflist creation
116 * - L2ARC buflist eviction
117 * - L2ARC write completion, which walks L2ARC buflists
118 * - ARC header destruction, as it removes from L2ARC buflists
119 * - ARC header release, as it removes from L2ARC buflists
125 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
126 * This structure can point either to a block that is still in the cache or to
127 * one that is only accessible in an L2 ARC device, or it can provide
128 * information about a block that was recently evicted. If a block is
129 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
130 * information to retrieve it from the L2ARC device. This information is
131 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
132 * that is in this state cannot access the data directly.
134 * Blocks that are actively being referenced or have not been evicted
135 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
136 * the arc_buf_hdr_t that will point to the data block in memory. A block can
137 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
138 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
139 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
141 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
142 * ability to store the physical data (b_pabd) associated with the DVA of the
143 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
144 * it will match its on-disk compression characteristics. This behavior can be
145 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
146 * compressed ARC functionality is disabled, the b_pabd will point to an
147 * uncompressed version of the on-disk data.
149 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
150 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
151 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
152 * consumer. The ARC will provide references to this data and will keep it
153 * cached until it is no longer in use. The ARC caches only the L1ARC's physical
154 * data block and will evict any arc_buf_t that is no longer referenced. The
155 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the
156 * "overhead_size" kstat.
158 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
159 * compressed form. The typical case is that consumers will want uncompressed
160 * data, and when that happens a new data buffer is allocated where the data is
161 * decompressed for them to use. Currently the only consumer who wants
162 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
163 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
164 * with the arc_buf_hdr_t.
166 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The
167 * first one is owned by a compressed send consumer (and therefore references
168 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be
169 * used by any other consumer (and has its own uncompressed copy of the data
184 * | b_buf +------------>+-----------+ arc_buf_t
185 * | b_pabd +-+ |b_next +---->+-----------+
186 * +-----------+ | |-----------| |b_next +-->NULL
187 * | |b_comp = T | +-----------+
188 * | |b_data +-+ |b_comp = F |
189 * | +-----------+ | |b_data +-+
190 * +->+------+ | +-----------+ |
192 * data | |<--------------+ | uncompressed
193 * +------+ compressed, | data
194 * shared +-->+------+
199 * When a consumer reads a block, the ARC must first look to see if the
200 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
201 * arc_buf_t and either copies uncompressed data into a new data buffer from an
202 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
203 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
204 * hdr is compressed and the desired compression characteristics of the
205 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
206 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
207 * the last buffer in the hdr's b_buf list, however a shared compressed buf can
208 * be anywhere in the hdr's list.
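 *
 * Expressed as a simplified sketch (the authoritative version is the buffer
 * fill logic below; see the ARC_FILL_* flags):
 *
 *	if (consumer wants compressed data and the hdr is compressed)
 *		share the hdr's b_pabd with the new arc_buf_t
 *	else if (an uncompressed arc_buf_t already exists)
 *		copy its contents into a newly allocated buffer
 *	else if (the hdr is compressed)
 *		decompress b_pabd into a newly allocated buffer
 *	else
 *		share (or copy) the already-uncompressed b_pabd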
210 * The diagram below shows an example of an uncompressed ARC hdr that is
211 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
212 * the last element in the buf list):
224 * | | arc_buf_t (shared)
225 * | b_buf +------------>+---------+ arc_buf_t
226 * | | |b_next +---->+---------+
227 * | b_pabd +-+ |---------| |b_next +-->NULL
228 * +-----------+ | | | +---------+
230 * | +---------+ | |b_data +-+
231 * +->+------+ | +---------+ |
233 * uncompressed | | | |
236 * | uncompressed | | |
239 * +---------------------------------+
241 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
242 * since the physical block is about to be rewritten. The new data contents
243 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
244 * it may compress the data before writing it to disk. The ARC will be called
245 * with the transformed data and will bcopy the transformed on-disk block into
246 * a newly allocated b_pabd. Writes are always done into buffers which have
247 * either been loaned (and hence are new and don't have other readers) or
248 * buffers which have been released (and hence have their own hdr, if there
249 * were originally other readers of the buf's original hdr). This ensures that
250 * the ARC only needs to update a single buf and its hdr after a write occurs.
252 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
253 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
254 * that when compressed ARC is enabled, the L2ARC blocks are identical
255 * to the on-disk block in the main data pool. This provides a significant
256 * advantage since the ARC can leverage the bp's checksum when reading from the
257 * L2ARC to determine if the contents are valid. However, if the compressed
258 * ARC is disabled, then the L2ARC's block must be transformed to look
259 * like the physical block in the main data pool before comparing the
260 * checksum and determining its validity.
262 * The L1ARC has a slightly different system for storing encrypted data.
263 * Raw (encrypted + possibly compressed) data has a few subtle differences from
264 * data that is just compressed. The biggest difference is that it is not
265 * possible to decrypt encrypted data (or vice versa) if the keys aren't loaded.
266 * The other difference is that encryption cannot be treated as a suggestion.
267 * If a caller would prefer compressed data, but they actually wind up with
268 * uncompressed data, the worst thing that could happen is there might be a
269 * performance hit. If the caller requests encrypted data, however, we must be
270 * sure they actually get it or else secret information could be leaked. Raw
271 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore,
272 * may have both an encrypted version and a decrypted version of its data at
273 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is
274 * copied out of this header. To avoid complications with b_pabd, raw buffers
280 #include <sys/spa_impl.h>
281 #include <sys/zio_compress.h>
282 #include <sys/zio_checksum.h>
283 #include <sys/zfs_context.h>
285 #include <sys/refcount.h>
286 #include <sys/vdev.h>
287 #include <sys/vdev_impl.h>
288 #include <sys/dsl_pool.h>
289 #include <sys/zio_checksum.h>
290 #include <sys/multilist.h>
293 #include <sys/fm/fs/zfs.h>
295 #include <sys/shrinker.h>
296 #include <sys/vmsystm.h>
298 #include <linux/page_compat.h>
300 #include <sys/callb.h>
301 #include <sys/kstat.h>
302 #include <sys/dmu_tx.h>
303 #include <zfs_fletcher.h>
304 #include <sys/arc_impl.h>
305 #include <sys/trace_arc.h>
306 #include <sys/aggsum.h>
307 #include <sys/cityhash.h>
310 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
311 boolean_t arc_watch = B_FALSE;
314 static kmutex_t arc_reclaim_lock;
315 static kcondvar_t arc_reclaim_thread_cv;
316 static boolean_t arc_reclaim_thread_exit;
317 static kcondvar_t arc_reclaim_waiters_cv;
320 * The number of headers to evict in arc_evict_state_impl() before
321 * dropping the sublist lock and evicting from another sublist. A lower
322 * value means we're more likely to evict the "correct" header (i.e. the
323 * oldest header in the arc state), but comes with higher overhead
324 * (i.e. more invocations of arc_evict_state_impl()).
326 int zfs_arc_evict_batch_limit = 10;
328 /* number of seconds before growing cache again */
329 static int arc_grow_retry = 5;
331 /* shift of arc_c for calculating overflow limit in arc_get_data_impl */
332 int zfs_arc_overflow_shift = 8;
334 /* shift of arc_c for calculating both min and max arc_p */
335 static int arc_p_min_shift = 4;
337 /* log2(fraction of arc to reclaim) */
338 static int arc_shrink_shift = 7;
340 /* percent of pagecache to reclaim arc to */
342 static uint_t zfs_arc_pc_percent = 0;
346 * log2(fraction of ARC which must be free to allow growing).
347 * I.e. If there is less than arc_c >> arc_no_grow_shift free memory,
348 * when reading a new block into the ARC, we will evict an equal-sized block from the ARC.
351 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
352 * we will still not allow it to grow.
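 *
 * For example (numbers chosen purely for illustration): with arc_c at 4 GiB
 * and arc_no_grow_shift = 5, the ARC keeps growing only while at least
 * 4 GiB >> 5 = 128 MiB of memory remains free.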
354 int arc_no_grow_shift = 5;
358 * minimum lifespan of a prefetch block in clock ticks
359 * (initialized in arc_init())
361 static int arc_min_prefetch_ms;
362 static int arc_min_prescient_prefetch_ms;
365 * If this percent of memory is free, don't throttle.
367 int arc_lotsfree_percent = 10;
372 * The arc has filled available memory and has now warmed up.
374 static boolean_t arc_warm;
377 * log2 fraction of the zio arena to keep free.
379 int arc_zio_arena_free_shift = 2;
382 * These tunables are for performance analysis.
384 unsigned long zfs_arc_max = 0;
385 unsigned long zfs_arc_min = 0;
386 unsigned long zfs_arc_meta_limit = 0;
387 unsigned long zfs_arc_meta_min = 0;
388 unsigned long zfs_arc_dnode_limit = 0;
389 unsigned long zfs_arc_dnode_reduce_percent = 10;
390 int zfs_arc_grow_retry = 0;
391 int zfs_arc_shrink_shift = 0;
392 int zfs_arc_p_min_shift = 0;
393 int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */
396 * ARC dirty data constraints for arc_tempreserve_space() throttle.
398 unsigned long zfs_arc_dirty_limit_percent = 50; /* total dirty data limit */
399 unsigned long zfs_arc_anon_limit_percent = 25; /* anon block dirty limit */
400 unsigned long zfs_arc_pool_dirty_percent = 20; /* each pool's anon allowance */
403 * Enable or disable compressed arc buffers.
405 int zfs_compressed_arc_enabled = B_TRUE;
408 * ARC will evict meta buffers that exceed arc_meta_limit. This
409 * tunable makes arc_meta_limit adjustable for different workloads.
411 unsigned long zfs_arc_meta_limit_percent = 75;
414 * Percentage that can be consumed by dnodes of ARC meta buffers.
416 unsigned long zfs_arc_dnode_limit_percent = 10;
419 * These tunables are Linux specific
421 unsigned long zfs_arc_sys_free = 0;
422 int zfs_arc_min_prefetch_ms = 0;
423 int zfs_arc_min_prescient_prefetch_ms = 0;
424 int zfs_arc_p_dampener_disable = 1;
425 int zfs_arc_meta_prune = 10000;
426 int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED;
427 int zfs_arc_meta_adjust_restarts = 4096;
428 int zfs_arc_lotsfree_percent = 10;
431 static arc_state_t ARC_anon;
432 static arc_state_t ARC_mru;
433 static arc_state_t ARC_mru_ghost;
434 static arc_state_t ARC_mfu;
435 static arc_state_t ARC_mfu_ghost;
436 static arc_state_t ARC_l2c_only;
438 typedef struct arc_stats
{
439 kstat_named_t arcstat_hits
;
440 kstat_named_t arcstat_misses
;
441 kstat_named_t arcstat_demand_data_hits
;
442 kstat_named_t arcstat_demand_data_misses
;
443 kstat_named_t arcstat_demand_metadata_hits
;
444 kstat_named_t arcstat_demand_metadata_misses
;
445 kstat_named_t arcstat_prefetch_data_hits
;
446 kstat_named_t arcstat_prefetch_data_misses
;
447 kstat_named_t arcstat_prefetch_metadata_hits
;
448 kstat_named_t arcstat_prefetch_metadata_misses
;
449 kstat_named_t arcstat_mru_hits
;
450 kstat_named_t arcstat_mru_ghost_hits
;
451 kstat_named_t arcstat_mfu_hits
;
452 kstat_named_t arcstat_mfu_ghost_hits
;
453 kstat_named_t arcstat_deleted
;
455 * Number of buffers that could not be evicted because the hash lock
456 * was held by another thread. The lock may not necessarily be held
457 * by something using the same buffer, since hash locks are shared
458 * by multiple buffers.
460 kstat_named_t arcstat_mutex_miss
;
462 * Number of buffers skipped when updating the access state due to the
463 * header having already been released after acquiring the hash lock.
465 kstat_named_t arcstat_access_skip
;
467 * Number of buffers skipped because they have I/O in progress, are
468 * indirect prefetch buffers that have not lived long enough, or are
469 * not from the spa we're trying to evict from.
471 kstat_named_t arcstat_evict_skip
;
473 * Number of times arc_evict_state() was unable to evict enough
474 * buffers to reach its target amount.
476 kstat_named_t arcstat_evict_not_enough
;
477 kstat_named_t arcstat_evict_l2_cached
;
478 kstat_named_t arcstat_evict_l2_eligible
;
479 kstat_named_t arcstat_evict_l2_ineligible
;
480 kstat_named_t arcstat_evict_l2_skip
;
481 kstat_named_t arcstat_hash_elements
;
482 kstat_named_t arcstat_hash_elements_max
;
483 kstat_named_t arcstat_hash_collisions
;
484 kstat_named_t arcstat_hash_chains
;
485 kstat_named_t arcstat_hash_chain_max
;
486 kstat_named_t arcstat_p
;
487 kstat_named_t arcstat_c
;
488 kstat_named_t arcstat_c_min
;
489 kstat_named_t arcstat_c_max
;
490 /* Not updated directly; only synced in arc_kstat_update. */
491 kstat_named_t arcstat_size
;
493 * Number of compressed bytes stored in the arc_buf_hdr_t's b_pabd.
494 * Note that the compressed bytes may match the uncompressed bytes
495 * if the block is either not compressed or compressed arc is disabled.
497 kstat_named_t arcstat_compressed_size
;
499 * Uncompressed size of the data stored in b_pabd. If compressed
500 * arc is disabled then this value will be identical to the stat above.
503 kstat_named_t arcstat_uncompressed_size
;
505 * Number of bytes stored in all the arc_buf_t's. This is classified
506 * as "overhead" since this data is typically short-lived and will
507 * be evicted from the arc when it becomes unreferenced unless the
508 * zfs_keep_uncompressed_metadata or zfs_keep_uncompressed_level
509 * values have been set (see comment in dbuf.c for more information).
511 kstat_named_t arcstat_overhead_size
;
513 * Number of bytes consumed by internal ARC structures necessary
514 * for tracking purposes; these structures are not actually
515 * backed by ARC buffers. This includes arc_buf_hdr_t structures
516 * (allocated via arc_buf_hdr_t_full and arc_buf_hdr_t_l2only
517 * caches), and arc_buf_t structures (allocated via arc_buf_t cache).
519 * Not updated directly; only synced in arc_kstat_update.
521 kstat_named_t arcstat_hdr_size
;
523 * Number of bytes consumed by ARC buffers of type equal to
524 * ARC_BUFC_DATA. This is generally consumed by buffers backing
525 * on disk user data (e.g. plain file contents).
526 * Not updated directly; only synced in arc_kstat_update.
528 kstat_named_t arcstat_data_size
;
530 * Number of bytes consumed by ARC buffers of type equal to
531 * ARC_BUFC_METADATA. This is generally consumed by buffers
532 * backing on disk data that is used for internal ZFS
533 * structures (e.g. ZAP, dnode, indirect blocks, etc).
534 * Not updated directly; only synced in arc_kstat_update.
536 kstat_named_t arcstat_metadata_size
;
538 * Number of bytes consumed by dmu_buf_impl_t objects.
539 * Not updated directly; only synced in arc_kstat_update.
541 kstat_named_t arcstat_dbuf_size
;
543 * Number of bytes consumed by dnode_t objects.
544 * Not updated directly; only synced in arc_kstat_update.
546 kstat_named_t arcstat_dnode_size
;
548 * Number of bytes consumed by bonus buffers.
549 * Not updated directly; only synced in arc_kstat_update.
551 kstat_named_t arcstat_bonus_size
;
553 * Total number of bytes consumed by ARC buffers residing in the
554 * arc_anon state. This includes *all* buffers in the arc_anon
555 * state; e.g. data, metadata, evictable, and unevictable buffers
556 * are all included in this value.
557 * Not updated directly; only synced in arc_kstat_update.
559 kstat_named_t arcstat_anon_size
;
561 * Number of bytes consumed by ARC buffers that meet the
562 * following criteria: backing buffers of type ARC_BUFC_DATA,
563 * residing in the arc_anon state, and are eligible for eviction
564 * (e.g. have no outstanding holds on the buffer).
565 * Not updated directly; only synced in arc_kstat_update.
567 kstat_named_t arcstat_anon_evictable_data
;
569 * Number of bytes consumed by ARC buffers that meet the
570 * following criteria: backing buffers of type ARC_BUFC_METADATA,
571 * residing in the arc_anon state, and are eligible for eviction
572 * (e.g. have no outstanding holds on the buffer).
573 * Not updated directly; only synced in arc_kstat_update.
575 kstat_named_t arcstat_anon_evictable_metadata
;
577 * Total number of bytes consumed by ARC buffers residing in the
578 * arc_mru state. This includes *all* buffers in the arc_mru
579 * state; e.g. data, metadata, evictable, and unevictable buffers
580 * are all included in this value.
581 * Not updated directly; only synced in arc_kstat_update.
583 kstat_named_t arcstat_mru_size
;
585 * Number of bytes consumed by ARC buffers that meet the
586 * following criteria: backing buffers of type ARC_BUFC_DATA,
587 * residing in the arc_mru state, and are eligible for eviction
588 * (e.g. have no outstanding holds on the buffer).
589 * Not updated directly; only synced in arc_kstat_update.
591 kstat_named_t arcstat_mru_evictable_data
;
593 * Number of bytes consumed by ARC buffers that meet the
594 * following criteria: backing buffers of type ARC_BUFC_METADATA,
595 * residing in the arc_mru state, and are eligible for eviction
596 * (e.g. have no outstanding holds on the buffer).
597 * Not updated directly; only synced in arc_kstat_update.
599 kstat_named_t arcstat_mru_evictable_metadata
;
601 * Total number of bytes that *would have been* consumed by ARC
602 * buffers in the arc_mru_ghost state. The key thing to note
603 * here is that this size doesn't actually indicate
604 * RAM consumption. The ghost lists only consist of headers and
605 * don't actually have ARC buffers linked off of these headers.
606 * Thus, *if* the headers had associated ARC buffers, these
607 * buffers *would have* consumed this number of bytes.
608 * Not updated directly; only synced in arc_kstat_update.
610 kstat_named_t arcstat_mru_ghost_size
;
612 * Number of bytes that *would have been* consumed by ARC
613 * buffers that are eligible for eviction, of type
614 * ARC_BUFC_DATA, and linked off the arc_mru_ghost state.
615 * Not updated directly; only synced in arc_kstat_update.
617 kstat_named_t arcstat_mru_ghost_evictable_data
;
619 * Number of bytes that *would have been* consumed by ARC
620 * buffers that are eligible for eviction, of type
621 * ARC_BUFC_METADATA, and linked off the arc_mru_ghost state.
622 * Not updated directly; only synced in arc_kstat_update.
624 kstat_named_t arcstat_mru_ghost_evictable_metadata
;
626 * Total number of bytes consumed by ARC buffers residing in the
627 * arc_mfu state. This includes *all* buffers in the arc_mfu
628 * state; e.g. data, metadata, evictable, and unevictable buffers
629 * are all included in this value.
630 * Not updated directly; only synced in arc_kstat_update.
632 kstat_named_t arcstat_mfu_size
;
634 * Number of bytes consumed by ARC buffers that are eligible for
635 * eviction, of type ARC_BUFC_DATA, and reside in the arc_mfu state.
637 * Not updated directly; only synced in arc_kstat_update.
639 kstat_named_t arcstat_mfu_evictable_data
;
641 * Number of bytes consumed by ARC buffers that are eligible for
642 * eviction, of type ARC_BUFC_METADATA, and reside in the arc_mfu state.
644 * Not updated directly; only synced in arc_kstat_update.
646 kstat_named_t arcstat_mfu_evictable_metadata
;
648 * Total number of bytes that *would have been* consumed by ARC
649 * buffers in the arc_mfu_ghost state. See the comment above
650 * arcstat_mru_ghost_size for more details.
651 * Not updated directly; only synced in arc_kstat_update.
653 kstat_named_t arcstat_mfu_ghost_size
;
655 * Number of bytes that *would have been* consumed by ARC
656 * buffers that are eligible for eviction, of type
657 * ARC_BUFC_DATA, and linked off the arc_mfu_ghost state.
658 * Not updated directly; only synced in arc_kstat_update.
660 kstat_named_t arcstat_mfu_ghost_evictable_data
;
662 * Number of bytes that *would have been* consumed by ARC
663 * buffers that are eligible for eviction, of type
664 * ARC_BUFC_METADATA, and linked off the arc_mfu_ghost state.
665 * Not updated directly; only synced in arc_kstat_update.
667 kstat_named_t arcstat_mfu_ghost_evictable_metadata
;
668 kstat_named_t arcstat_l2_hits
;
669 kstat_named_t arcstat_l2_misses
;
670 kstat_named_t arcstat_l2_feeds
;
671 kstat_named_t arcstat_l2_rw_clash
;
672 kstat_named_t arcstat_l2_read_bytes
;
673 kstat_named_t arcstat_l2_write_bytes
;
674 kstat_named_t arcstat_l2_writes_sent
;
675 kstat_named_t arcstat_l2_writes_done
;
676 kstat_named_t arcstat_l2_writes_error
;
677 kstat_named_t arcstat_l2_writes_lock_retry
;
678 kstat_named_t arcstat_l2_evict_lock_retry
;
679 kstat_named_t arcstat_l2_evict_reading
;
680 kstat_named_t arcstat_l2_evict_l1cached
;
681 kstat_named_t arcstat_l2_free_on_write
;
682 kstat_named_t arcstat_l2_abort_lowmem
;
683 kstat_named_t arcstat_l2_cksum_bad
;
684 kstat_named_t arcstat_l2_io_error
;
685 kstat_named_t arcstat_l2_lsize
;
686 kstat_named_t arcstat_l2_psize
;
687 /* Not updated directly; only synced in arc_kstat_update. */
688 kstat_named_t arcstat_l2_hdr_size
;
689 kstat_named_t arcstat_memory_throttle_count
;
690 kstat_named_t arcstat_memory_direct_count
;
691 kstat_named_t arcstat_memory_indirect_count
;
692 kstat_named_t arcstat_memory_all_bytes
;
693 kstat_named_t arcstat_memory_free_bytes
;
694 kstat_named_t arcstat_memory_available_bytes
;
695 kstat_named_t arcstat_no_grow
;
696 kstat_named_t arcstat_tempreserve
;
697 kstat_named_t arcstat_loaned_bytes
;
698 kstat_named_t arcstat_prune
;
699 /* Not updated directly; only synced in arc_kstat_update. */
700 kstat_named_t arcstat_meta_used
;
701 kstat_named_t arcstat_meta_limit
;
702 kstat_named_t arcstat_dnode_limit
;
703 kstat_named_t arcstat_meta_max
;
704 kstat_named_t arcstat_meta_min
;
705 kstat_named_t arcstat_async_upgrade_sync
;
706 kstat_named_t arcstat_demand_hit_predictive_prefetch
;
707 kstat_named_t arcstat_demand_hit_prescient_prefetch
;
708 kstat_named_t arcstat_need_free
;
709 kstat_named_t arcstat_sys_free
;
710 kstat_named_t arcstat_raw_size
;
713 static arc_stats_t arc_stats
= {
714 { "hits", KSTAT_DATA_UINT64
},
715 { "misses", KSTAT_DATA_UINT64
},
716 { "demand_data_hits", KSTAT_DATA_UINT64
},
717 { "demand_data_misses", KSTAT_DATA_UINT64
},
718 { "demand_metadata_hits", KSTAT_DATA_UINT64
},
719 { "demand_metadata_misses", KSTAT_DATA_UINT64
},
720 { "prefetch_data_hits", KSTAT_DATA_UINT64
},
721 { "prefetch_data_misses", KSTAT_DATA_UINT64
},
722 { "prefetch_metadata_hits", KSTAT_DATA_UINT64
},
723 { "prefetch_metadata_misses", KSTAT_DATA_UINT64
},
724 { "mru_hits", KSTAT_DATA_UINT64
},
725 { "mru_ghost_hits", KSTAT_DATA_UINT64
},
726 { "mfu_hits", KSTAT_DATA_UINT64
},
727 { "mfu_ghost_hits", KSTAT_DATA_UINT64
},
728 { "deleted", KSTAT_DATA_UINT64
},
729 { "mutex_miss", KSTAT_DATA_UINT64
},
730 { "access_skip", KSTAT_DATA_UINT64
},
731 { "evict_skip", KSTAT_DATA_UINT64
},
732 { "evict_not_enough", KSTAT_DATA_UINT64
},
733 { "evict_l2_cached", KSTAT_DATA_UINT64
},
734 { "evict_l2_eligible", KSTAT_DATA_UINT64
},
735 { "evict_l2_ineligible", KSTAT_DATA_UINT64
},
736 { "evict_l2_skip", KSTAT_DATA_UINT64
},
737 { "hash_elements", KSTAT_DATA_UINT64
},
738 { "hash_elements_max", KSTAT_DATA_UINT64
},
739 { "hash_collisions", KSTAT_DATA_UINT64
},
740 { "hash_chains", KSTAT_DATA_UINT64
},
741 { "hash_chain_max", KSTAT_DATA_UINT64
},
742 { "p", KSTAT_DATA_UINT64
},
743 { "c", KSTAT_DATA_UINT64
},
744 { "c_min", KSTAT_DATA_UINT64
},
745 { "c_max", KSTAT_DATA_UINT64
},
746 { "size", KSTAT_DATA_UINT64
},
747 { "compressed_size", KSTAT_DATA_UINT64
},
748 { "uncompressed_size", KSTAT_DATA_UINT64
},
749 { "overhead_size", KSTAT_DATA_UINT64
},
750 { "hdr_size", KSTAT_DATA_UINT64
},
751 { "data_size", KSTAT_DATA_UINT64
},
752 { "metadata_size", KSTAT_DATA_UINT64
},
753 { "dbuf_size", KSTAT_DATA_UINT64
},
754 { "dnode_size", KSTAT_DATA_UINT64
},
755 { "bonus_size", KSTAT_DATA_UINT64
},
756 { "anon_size", KSTAT_DATA_UINT64
},
757 { "anon_evictable_data", KSTAT_DATA_UINT64
},
758 { "anon_evictable_metadata", KSTAT_DATA_UINT64
},
759 { "mru_size", KSTAT_DATA_UINT64
},
760 { "mru_evictable_data", KSTAT_DATA_UINT64
},
761 { "mru_evictable_metadata", KSTAT_DATA_UINT64
},
762 { "mru_ghost_size", KSTAT_DATA_UINT64
},
763 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64
},
764 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64
},
765 { "mfu_size", KSTAT_DATA_UINT64
},
766 { "mfu_evictable_data", KSTAT_DATA_UINT64
},
767 { "mfu_evictable_metadata", KSTAT_DATA_UINT64
},
768 { "mfu_ghost_size", KSTAT_DATA_UINT64
},
769 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64
},
770 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64
},
771 { "l2_hits", KSTAT_DATA_UINT64
},
772 { "l2_misses", KSTAT_DATA_UINT64
},
773 { "l2_feeds", KSTAT_DATA_UINT64
},
774 { "l2_rw_clash", KSTAT_DATA_UINT64
},
775 { "l2_read_bytes", KSTAT_DATA_UINT64
},
776 { "l2_write_bytes", KSTAT_DATA_UINT64
},
777 { "l2_writes_sent", KSTAT_DATA_UINT64
},
778 { "l2_writes_done", KSTAT_DATA_UINT64
},
779 { "l2_writes_error", KSTAT_DATA_UINT64
},
780 { "l2_writes_lock_retry", KSTAT_DATA_UINT64
},
781 { "l2_evict_lock_retry", KSTAT_DATA_UINT64
},
782 { "l2_evict_reading", KSTAT_DATA_UINT64
},
783 { "l2_evict_l1cached", KSTAT_DATA_UINT64
},
784 { "l2_free_on_write", KSTAT_DATA_UINT64
},
785 { "l2_abort_lowmem", KSTAT_DATA_UINT64
},
786 { "l2_cksum_bad", KSTAT_DATA_UINT64
},
787 { "l2_io_error", KSTAT_DATA_UINT64
},
788 { "l2_size", KSTAT_DATA_UINT64
},
789 { "l2_asize", KSTAT_DATA_UINT64
},
790 { "l2_hdr_size", KSTAT_DATA_UINT64
},
791 { "memory_throttle_count", KSTAT_DATA_UINT64
},
792 { "memory_direct_count", KSTAT_DATA_UINT64
},
793 { "memory_indirect_count", KSTAT_DATA_UINT64
},
794 { "memory_all_bytes", KSTAT_DATA_UINT64
},
795 { "memory_free_bytes", KSTAT_DATA_UINT64
},
796 { "memory_available_bytes", KSTAT_DATA_INT64
},
797 { "arc_no_grow", KSTAT_DATA_UINT64
},
798 { "arc_tempreserve", KSTAT_DATA_UINT64
},
799 { "arc_loaned_bytes", KSTAT_DATA_UINT64
},
800 { "arc_prune", KSTAT_DATA_UINT64
},
801 { "arc_meta_used", KSTAT_DATA_UINT64
},
802 { "arc_meta_limit", KSTAT_DATA_UINT64
},
803 { "arc_dnode_limit", KSTAT_DATA_UINT64
},
804 { "arc_meta_max", KSTAT_DATA_UINT64
},
805 { "arc_meta_min", KSTAT_DATA_UINT64
},
806 { "async_upgrade_sync", KSTAT_DATA_UINT64
},
807 { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64
},
808 { "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64
},
809 { "arc_need_free", KSTAT_DATA_UINT64
},
810 { "arc_sys_free", KSTAT_DATA_UINT64
},
811 { "arc_raw_size", KSTAT_DATA_UINT64
}
814 #define ARCSTAT(stat) (arc_stats.stat.value.ui64)
816 #define ARCSTAT_INCR(stat, val) \
817 atomic_add_64(&arc_stats.stat.value.ui64, (val))
819 #define ARCSTAT_BUMP(stat) ARCSTAT_INCR(stat, 1)
820 #define ARCSTAT_BUMPDOWN(stat) ARCSTAT_INCR(stat, -1)
822 #define ARCSTAT_MAX(stat, val) { \
824 while ((val) > (m = arc_stats.stat.value.ui64) && \
825 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
829 #define ARCSTAT_MAXSTAT(stat) \
830 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64)
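/*
 * ARCSTAT_MAX() above is a lock-free "monotonic max" update: it re-reads the
 * current value and retries the compare-and-swap until either the stored
 * value is already >= val or the swap succeeds. A self-contained userland
 * analogue (using GCC/Clang builtins instead of atomic_cas_64(), purely for
 * illustration) would look like:
 *
 *	static void
 *	stat_max(volatile uint64_t *stat, uint64_t val)
 *	{
 *		uint64_t m;
 *
 *		while (val > (m = *stat) &&
 *		    !__sync_bool_compare_and_swap(stat, m, val))
 *			continue;
 *	}
 */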
833 * We define a macro to allow ARC hits/misses to be easily broken down by
834 * two separate conditions, giving a total of four different subtypes for
835 * each of hits and misses (so eight statistics total).
837 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
840 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
842 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
846 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
848 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
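/*
 * For example, a hypothetical invocation (mirroring how the hit/miss paths
 * use this macro) to account a hit would be:
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    HDR_ISTYPE_METADATA(hdr), metadata, data, hits);
 *
 * which bumps exactly one of arcstat_demand_data_hits,
 * arcstat_demand_metadata_hits, arcstat_prefetch_data_hits or
 * arcstat_prefetch_metadata_hits.
 */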
853 static arc_state_t *arc_anon;
854 static arc_state_t *arc_mru;
855 static arc_state_t *arc_mru_ghost;
856 static arc_state_t *arc_mfu;
857 static arc_state_t *arc_mfu_ghost;
858 static arc_state_t *arc_l2c_only;
861 * There are several ARC variables that are critical to export as kstats --
862 * but we don't want to have to grovel around in the kstat whenever we wish to
863 * manipulate them. For these variables, we therefore define them to be in
864 * terms of the statistic variable. This assures that we are not introducing
865 * the possibility of inconsistency by having shadow copies of the variables,
866 * while still allowing the code to be readable.
868 #define arc_p ARCSTAT(arcstat_p) /* target size of MRU */
869 #define arc_c ARCSTAT(arcstat_c) /* target size of cache */
870 #define arc_c_min ARCSTAT(arcstat_c_min) /* min target cache size */
871 #define arc_c_max ARCSTAT(arcstat_c_max) /* max target cache size */
872 #define arc_no_grow ARCSTAT(arcstat_no_grow) /* do not grow cache size */
873 #define arc_tempreserve ARCSTAT(arcstat_tempreserve)
874 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes)
875 #define arc_meta_limit ARCSTAT(arcstat_meta_limit) /* max size for metadata */
876 #define arc_dnode_limit ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
877 #define arc_meta_min ARCSTAT(arcstat_meta_min) /* min size for metadata */
878 #define arc_meta_max ARCSTAT(arcstat_meta_max) /* max size of metadata */
879 #define arc_need_free ARCSTAT(arcstat_need_free) /* bytes to be freed */
880 #define arc_sys_free ARCSTAT(arcstat_sys_free) /* target system free bytes */
882 /* size of all b_rabd's in entire arc */
883 #define arc_raw_size ARCSTAT(arcstat_raw_size)
884 /* compressed size of entire arc */
885 #define arc_compressed_size ARCSTAT(arcstat_compressed_size)
886 /* uncompressed size of entire arc */
887 #define arc_uncompressed_size ARCSTAT(arcstat_uncompressed_size)
888 /* number of bytes in the arc from arc_buf_t's */
889 #define arc_overhead_size ARCSTAT(arcstat_overhead_size)
892 * There are also some ARC variables that we want to export, but that are
893 * updated so often that having the canonical representation be the statistic
894 * variable causes a performance bottleneck. We want to use aggsum_t's for these
895 * instead, but still be able to export the kstat in the same way as before.
896 * The solution is to always use the aggsum version, except in the kstat update callback.
900 aggsum_t arc_meta_used;
901 aggsum_t astat_data_size;
902 aggsum_t astat_metadata_size;
903 aggsum_t astat_dbuf_size;
904 aggsum_t astat_dnode_size;
905 aggsum_t astat_bonus_size;
906 aggsum_t astat_hdr_size;
907 aggsum_t astat_l2_hdr_size;
909 static list_t arc_prune_list;
910 static kmutex_t arc_prune_mtx;
911 static taskq_t *arc_prune_taskq;
913 #define GHOST_STATE(state) \
914 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
915 (state) == arc_l2c_only)
917 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
918 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
919 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR)
920 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH)
921 #define HDR_PRESCIENT_PREFETCH(hdr) \
922 ((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
923 #define HDR_COMPRESSION_ENABLED(hdr) \
924 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)
926 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE)
927 #define HDR_L2_READING(hdr) \
928 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
929 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
930 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING)
931 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
932 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
933 #define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED)
934 #define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH)
935 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA)
937 #define HDR_ISTYPE_METADATA(hdr) \
938 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
939 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr))
941 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
942 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
943 #define HDR_HAS_RABD(hdr) \
944 (HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \
945 (hdr)->b_crypt_hdr.b_rabd != NULL)
946 #define HDR_ENCRYPTED(hdr) \
947 (HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
948 #define HDR_AUTHENTICATED(hdr) \
949 (HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
951 /* For storing compression mode in b_flags */
952 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1)
954 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \
955 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
956 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \
957 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));
959 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL)
960 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED)
961 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
962 #define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)
968 #define HDR_FULL_CRYPT_SIZE ((int64_t)sizeof (arc_buf_hdr_t))
969 #define HDR_FULL_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr))
970 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))
973 * Hash table routines
976 #define HT_LOCK_ALIGN 64
977 #define HT_LOCK_PAD (P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN)))
982 unsigned char pad[HT_LOCK_PAD];
986 #define BUF_LOCKS 8192
987 typedef struct buf_hash_table {
989 arc_buf_hdr_t **ht_table;
990 struct ht_lock ht_locks[BUF_LOCKS];
993 static buf_hash_table_t buf_hash_table;
995 #define BUF_HASH_INDEX(spa, dva, birth) \
996 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
997 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
998 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock))
999 #define HDR_LOCK(hdr) \
1000 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))
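/*
 * Note that BUF_HASH_LOCK_NTRY() masks the hash index with (BUF_LOCKS - 1),
 * which only works as a cheap modulo because BUF_LOCKS is a power of two.
 * As an illustration (index values chosen arbitrarily), headers whose hash
 * indices differ by a multiple of 8192 contend on the same lock:
 *
 *	BUF_HASH_LOCK(5)	-> &ht_locks[5].ht_lock
 *	BUF_HASH_LOCK(8197)	-> &ht_locks[5].ht_lock
 */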
1002 uint64_t zfs_crc64_table[256];
1008 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */
1009 #define L2ARC_HEADROOM 2 /* num of writes */
1012 * If we discover during ARC scan any buffers to be compressed, we boost
1013 * our headroom for the next scanning cycle by this percentage multiple.
1015 #define L2ARC_HEADROOM_BOOST 200
1016 #define L2ARC_FEED_SECS 1 /* caching interval secs */
1017 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */
1020 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
1021 * and each of the states has two types: data and metadata.
1023 #define L2ARC_FEED_TYPES 4
1025 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent)
1026 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done)
1028 /* L2ARC Performance Tunables */
1029 unsigned long l2arc_write_max = L2ARC_WRITE_SIZE; /* def max write size */
1030 unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra warmup write */
1031 unsigned long l2arc_headroom = L2ARC_HEADROOM; /* # of dev writes */
1032 unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
1033 unsigned long l2arc_feed_secs = L2ARC_FEED_SECS; /* interval seconds */
1034 unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval msecs */
1035 int l2arc_noprefetch = B_TRUE; /* don't cache prefetch bufs */
1036 int l2arc_feed_again = B_TRUE; /* turbo warmup */
1037 int l2arc_norw = B_FALSE; /* no reads during writes */
1042 static list_t L2ARC_dev_list; /* device list */
1043 static list_t *l2arc_dev_list; /* device list pointer */
1044 static kmutex_t l2arc_dev_mtx; /* device list mutex */
1045 static l2arc_dev_t *l2arc_dev_last; /* last device used */
1046 static list_t L2ARC_free_on_write; /* free after write buf list */
1047 static list_t *l2arc_free_on_write; /* free after write list ptr */
1048 static kmutex_t l2arc_free_on_write_mtx; /* mutex for list */
1049 static uint64_t l2arc_ndev; /* number of devices */
1051 typedef struct l2arc_read_callback {
1052 arc_buf_hdr_t *l2rcb_hdr; /* read header */
1053 blkptr_t l2rcb_bp; /* original blkptr */
1054 zbookmark_phys_t l2rcb_zb; /* original bookmark */
1055 int l2rcb_flags; /* original flags */
1056 abd_t *l2rcb_abd; /* temporary buffer */
1057 } l2arc_read_callback_t;
1059 typedef struct l2arc_data_free {
1060 /* protected by l2arc_free_on_write_mtx */
1063 arc_buf_contents_t l2df_type;
1064 list_node_t l2df_list_node;
1065 } l2arc_data_free_t;
1067 typedef enum arc_fill_flags {
1068 ARC_FILL_LOCKED = 1 << 0, /* hdr lock is held */
1069 ARC_FILL_COMPRESSED = 1 << 1, /* fill with compressed data */
1070 ARC_FILL_ENCRYPTED = 1 << 2, /* fill with encrypted data */
1071 ARC_FILL_NOAUTH = 1 << 3, /* don't attempt to authenticate */
1072 ARC_FILL_IN_PLACE = 1 << 4 /* fill in place (special case) */
1075 static kmutex_t l2arc_feed_thr_lock;
1076 static kcondvar_t l2arc_feed_thr_cv;
1077 static uint8_t l2arc_thread_exit;
1079 static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *);
1080 static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *);
1081 static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *);
1082 static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *);
1083 static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *);
1084 static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag);
1085 static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
1086 static void arc_hdr_alloc_abd(arc_buf_hdr_t *, boolean_t);
1087 static void arc_access(arc_buf_hdr_t *, kmutex_t *);
1088 static boolean_t arc_is_overflowing(void);
1089 static void arc_buf_watch(arc_buf_t *);
1090 static void arc_tuning_update(void);
1091 static void arc_prune_async(int64_t);
1092 static uint64_t arc_all_memory(void);
1094 static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
1095 static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
1096 static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
1097 static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
1099 static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
1100 static void l2arc_read_done(zio_t *);
1104 * We use Cityhash for this. It's fast, and has good hash properties without
1105 * requiring any large static buffers.
1108 buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
1110 return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
1113 #define HDR_EMPTY(hdr) \
1114 ((hdr)->b_dva.dva_word[0] == 0 && \
1115 (hdr)->b_dva.dva_word[1] == 0)
1117 #define HDR_EQUAL(spa, dva, birth, hdr) \
1118 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
1119 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
1120 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)
1123 buf_discard_identity(arc_buf_hdr_t *hdr)
1125 hdr->b_dva.dva_word[0] = 0;
1126 hdr->b_dva.dva_word[1] = 0;
1130 static arc_buf_hdr_t
*
1131 buf_hash_find(uint64_t spa
, const blkptr_t
*bp
, kmutex_t
**lockp
)
1133 const dva_t
*dva
= BP_IDENTITY(bp
);
1134 uint64_t birth
= BP_PHYSICAL_BIRTH(bp
);
1135 uint64_t idx
= BUF_HASH_INDEX(spa
, dva
, birth
);
1136 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
1139 mutex_enter(hash_lock
);
1140 for (hdr
= buf_hash_table
.ht_table
[idx
]; hdr
!= NULL
;
1141 hdr
= hdr
->b_hash_next
) {
1142 if (HDR_EQUAL(spa
, dva
, birth
, hdr
)) {
1147 mutex_exit(hash_lock
);
1153 * Insert an entry into the hash table. If there is already an element
1154 * equal to elem in the hash table, then the already existing element
1155 * will be returned and the new element will not be inserted.
1156 * Otherwise returns NULL.
1157 * If lockp == NULL, the caller is assumed to already hold the hash lock.
1159 static arc_buf_hdr_t
*
1160 buf_hash_insert(arc_buf_hdr_t
*hdr
, kmutex_t
**lockp
)
1162 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
1163 kmutex_t
*hash_lock
= BUF_HASH_LOCK(idx
);
1164 arc_buf_hdr_t
*fhdr
;
1167 ASSERT(!DVA_IS_EMPTY(&hdr
->b_dva
));
1168 ASSERT(hdr
->b_birth
!= 0);
1169 ASSERT(!HDR_IN_HASH_TABLE(hdr
));
1171 if (lockp
!= NULL
) {
1173 mutex_enter(hash_lock
);
1175 ASSERT(MUTEX_HELD(hash_lock
));
1178 for (fhdr
= buf_hash_table
.ht_table
[idx
], i
= 0; fhdr
!= NULL
;
1179 fhdr
= fhdr
->b_hash_next
, i
++) {
1180 if (HDR_EQUAL(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
, fhdr
))
1184 hdr
->b_hash_next
= buf_hash_table
.ht_table
[idx
];
1185 buf_hash_table
.ht_table
[idx
] = hdr
;
1186 arc_hdr_set_flags(hdr
, ARC_FLAG_IN_HASH_TABLE
);
1188 /* collect some hash table performance data */
1190 ARCSTAT_BUMP(arcstat_hash_collisions
);
1192 ARCSTAT_BUMP(arcstat_hash_chains
);
1194 ARCSTAT_MAX(arcstat_hash_chain_max
, i
);
1197 ARCSTAT_BUMP(arcstat_hash_elements
);
1198 ARCSTAT_MAXSTAT(arcstat_hash_elements
);
1204 buf_hash_remove(arc_buf_hdr_t
*hdr
)
1206 arc_buf_hdr_t
*fhdr
, **hdrp
;
1207 uint64_t idx
= BUF_HASH_INDEX(hdr
->b_spa
, &hdr
->b_dva
, hdr
->b_birth
);
1209 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx
)));
1210 ASSERT(HDR_IN_HASH_TABLE(hdr
));
1212 hdrp
= &buf_hash_table
.ht_table
[idx
];
1213 while ((fhdr
= *hdrp
) != hdr
) {
1214 ASSERT3P(fhdr
, !=, NULL
);
1215 hdrp
= &fhdr
->b_hash_next
;
1217 *hdrp
= hdr
->b_hash_next
;
1218 hdr
->b_hash_next
= NULL
;
1219 arc_hdr_clear_flags(hdr
, ARC_FLAG_IN_HASH_TABLE
);
1221 /* collect some hash table performance data */
1222 ARCSTAT_BUMPDOWN(arcstat_hash_elements
);
1224 if (buf_hash_table
.ht_table
[idx
] &&
1225 buf_hash_table
.ht_table
[idx
]->b_hash_next
== NULL
)
1226 ARCSTAT_BUMPDOWN(arcstat_hash_chains
);
1230 * Global data structures and functions for the buf kmem cache.
1233 static kmem_cache_t
*hdr_full_cache
;
1234 static kmem_cache_t
*hdr_full_crypt_cache
;
1235 static kmem_cache_t
*hdr_l2only_cache
;
1236 static kmem_cache_t
*buf_cache
;
1243 #if defined(_KERNEL)
1245 * Large allocations which do not require contiguous pages
1246 * should be using vmem_free() in the linux kernel
1248 vmem_free(buf_hash_table
.ht_table
,
1249 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
1251 kmem_free(buf_hash_table
.ht_table
,
1252 (buf_hash_table
.ht_mask
+ 1) * sizeof (void *));
1254 for (i
= 0; i
< BUF_LOCKS
; i
++)
1255 mutex_destroy(&buf_hash_table
.ht_locks
[i
].ht_lock
);
1256 kmem_cache_destroy(hdr_full_cache
);
1257 kmem_cache_destroy(hdr_full_crypt_cache
);
1258 kmem_cache_destroy(hdr_l2only_cache
);
1259 kmem_cache_destroy(buf_cache
);
1263 * Constructor callback - called when the cache is empty
1264 * and a new buf is requested.
1268 hdr_full_cons(void *vbuf
, void *unused
, int kmflag
)
1270 arc_buf_hdr_t
*hdr
= vbuf
;
1272 bzero(hdr
, HDR_FULL_SIZE
);
1273 hdr
->b_l1hdr
.b_byteswap
= DMU_BSWAP_NUMFUNCS
;
1274 cv_init(&hdr
->b_l1hdr
.b_cv
, NULL
, CV_DEFAULT
, NULL
);
1275 zfs_refcount_create(&hdr
->b_l1hdr
.b_refcnt
);
1276 mutex_init(&hdr
->b_l1hdr
.b_freeze_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1277 list_link_init(&hdr
->b_l1hdr
.b_arc_node
);
1278 list_link_init(&hdr
->b_l2hdr
.b_l2node
);
1279 multilist_link_init(&hdr
->b_l1hdr
.b_arc_node
);
1280 arc_space_consume(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
1287 hdr_full_crypt_cons(void *vbuf
, void *unused
, int kmflag
)
1289 arc_buf_hdr_t
*hdr
= vbuf
;
1291 hdr_full_cons(vbuf
, unused
, kmflag
);
1292 bzero(&hdr
->b_crypt_hdr
, sizeof (hdr
->b_crypt_hdr
));
1293 arc_space_consume(sizeof (hdr
->b_crypt_hdr
), ARC_SPACE_HDRS
);
1300 hdr_l2only_cons(void *vbuf
, void *unused
, int kmflag
)
1302 arc_buf_hdr_t
*hdr
= vbuf
;
1304 bzero(hdr
, HDR_L2ONLY_SIZE
);
1305 arc_space_consume(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
1312 buf_cons(void *vbuf
, void *unused
, int kmflag
)
1314 arc_buf_t
*buf
= vbuf
;
1316 bzero(buf
, sizeof (arc_buf_t
));
1317 mutex_init(&buf
->b_evict_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
1318 arc_space_consume(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
1324 * Destructor callback - called when a cached buf is
1325 * no longer required.
1329 hdr_full_dest(void *vbuf
, void *unused
)
1331 arc_buf_hdr_t
*hdr
= vbuf
;
1333 ASSERT(HDR_EMPTY(hdr
));
1334 cv_destroy(&hdr
->b_l1hdr
.b_cv
);
1335 zfs_refcount_destroy(&hdr
->b_l1hdr
.b_refcnt
);
1336 mutex_destroy(&hdr
->b_l1hdr
.b_freeze_lock
);
1337 ASSERT(!multilist_link_active(&hdr
->b_l1hdr
.b_arc_node
));
1338 arc_space_return(HDR_FULL_SIZE
, ARC_SPACE_HDRS
);
1343 hdr_full_crypt_dest(void *vbuf
, void *unused
)
1345 arc_buf_hdr_t
*hdr
= vbuf
;
1347 hdr_full_dest(vbuf
, unused
);
1348 arc_space_return(sizeof (hdr
->b_crypt_hdr
), ARC_SPACE_HDRS
);
1353 hdr_l2only_dest(void *vbuf
, void *unused
)
1355 ASSERTV(arc_buf_hdr_t
*hdr
= vbuf
);
1357 ASSERT(HDR_EMPTY(hdr
));
1358 arc_space_return(HDR_L2ONLY_SIZE
, ARC_SPACE_L2HDRS
);
1363 buf_dest(void *vbuf
, void *unused
)
1365 arc_buf_t
*buf
= vbuf
;
1367 mutex_destroy(&buf
->b_evict_lock
);
1368 arc_space_return(sizeof (arc_buf_t
), ARC_SPACE_HDRS
);
1372 * Reclaim callback -- invoked when memory is low.
1376 hdr_recl(void *unused
)
1378 dprintf("hdr_recl called\n");
1380 * umem calls the reclaim func when we destroy the buf cache,
1381 * which is after we do arc_fini().
1384 cv_signal(&arc_reclaim_thread_cv
);
1390 uint64_t *ct
= NULL
;
1391 uint64_t hsize
= 1ULL << 12;
1395 * The hash table is big enough to fill all of physical memory
1396 * with an average block size of zfs_arc_average_blocksize (default 8K).
1397 * By default, the table will take up
1398 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
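 * For example, with 16 GiB of physical memory and the default 8K average
 * block size, the loop below settles on 2^21 (~2 million) buckets, i.e.
 * 16 MiB of pointer table.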
1400 while (hsize
* zfs_arc_average_blocksize
< arc_all_memory())
1403 buf_hash_table
.ht_mask
= hsize
- 1;
1404 #if defined(_KERNEL)
1406 * Large allocations which do not require contiguous pages
1407 * should be using vmem_alloc() in the linux kernel
1409 buf_hash_table
.ht_table
=
1410 vmem_zalloc(hsize
* sizeof (void*), KM_SLEEP
);
1412 buf_hash_table
.ht_table
=
1413 kmem_zalloc(hsize
* sizeof (void*), KM_NOSLEEP
);
1415 if (buf_hash_table
.ht_table
== NULL
) {
1416 ASSERT(hsize
> (1ULL << 8));
1421 hdr_full_cache
= kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE
,
1422 0, hdr_full_cons
, hdr_full_dest
, hdr_recl
, NULL
, NULL
, 0);
1423 hdr_full_crypt_cache
= kmem_cache_create("arc_buf_hdr_t_full_crypt",
1424 HDR_FULL_CRYPT_SIZE
, 0, hdr_full_crypt_cons
, hdr_full_crypt_dest
,
1425 hdr_recl
, NULL
, NULL
, 0);
1426 hdr_l2only_cache
= kmem_cache_create("arc_buf_hdr_t_l2only",
1427 HDR_L2ONLY_SIZE
, 0, hdr_l2only_cons
, hdr_l2only_dest
, hdr_recl
,
1429 buf_cache
= kmem_cache_create("arc_buf_t", sizeof (arc_buf_t
),
1430 0, buf_cons
, buf_dest
, NULL
, NULL
, NULL
, 0);
1432 for (i
= 0; i
< 256; i
++)
1433 for (ct
= zfs_crc64_table
+ i
, *ct
= i
, j
= 8; j
> 0; j
--)
1434 *ct
= (*ct
>> 1) ^ (-(*ct
& 1) & ZFS_CRC64_POLY
);
1436 for (i
= 0; i
< BUF_LOCKS
; i
++) {
1437 mutex_init(&buf_hash_table
.ht_locks
[i
].ht_lock
,
1438 NULL
, MUTEX_DEFAULT
, NULL
);
1442 #define ARC_MINTIME (hz>>4) /* 62 ms */
1445 * This is the size that the buf occupies in memory. If the buf is compressed,
1446 * it will correspond to the compressed size. You should use this method of
1447 * getting the buf size unless you explicitly need the logical size.
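 *
 * For example (illustrative numbers only): a 128K logical block held
 * compressed in the ARC at 45K returns 45K here and 128K from
 * arc_buf_lsize() below.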
1450 arc_buf_size(arc_buf_t
*buf
)
1452 return (ARC_BUF_COMPRESSED(buf
) ?
1453 HDR_GET_PSIZE(buf
->b_hdr
) : HDR_GET_LSIZE(buf
->b_hdr
));
1457 arc_buf_lsize(arc_buf_t
*buf
)
1459 return (HDR_GET_LSIZE(buf
->b_hdr
));
1463 * This function will return B_TRUE if the buffer is encrypted in memory.
1464 * This buffer can be decrypted by calling arc_untransform().
1467 arc_is_encrypted(arc_buf_t
*buf
)
1469 return (ARC_BUF_ENCRYPTED(buf
) != 0);
1473 * Returns B_TRUE if the buffer represents data that has not had its MAC verified yet.
1477 arc_is_unauthenticated(arc_buf_t
*buf
)
1479 return (HDR_NOAUTH(buf
->b_hdr
) != 0);
1483 arc_get_raw_params(arc_buf_t
*buf
, boolean_t
*byteorder
, uint8_t *salt
,
1484 uint8_t *iv
, uint8_t *mac
)
1486 arc_buf_hdr_t
*hdr
= buf
->b_hdr
;
1488 ASSERT(HDR_PROTECTED(hdr
));
1490 bcopy(hdr
->b_crypt_hdr
.b_salt
, salt
, ZIO_DATA_SALT_LEN
);
1491 bcopy(hdr
->b_crypt_hdr
.b_iv
, iv
, ZIO_DATA_IV_LEN
);
1492 bcopy(hdr
->b_crypt_hdr
.b_mac
, mac
, ZIO_DATA_MAC_LEN
);
1493 *byteorder
= (hdr
->b_l1hdr
.b_byteswap
== DMU_BSWAP_NUMFUNCS
) ?
1494 ZFS_HOST_BYTEORDER
: !ZFS_HOST_BYTEORDER
;
1498 * Indicates how this buffer is compressed in memory. If it is not compressed
1499 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with
1500 * arc_untransform() as long as it is also unencrypted.
1503 arc_get_compression(arc_buf_t
*buf
)
1505 return (ARC_BUF_COMPRESSED(buf
) ?
1506 HDR_GET_COMPRESS(buf
->b_hdr
) : ZIO_COMPRESS_OFF
);
1510 * Return the compression algorithm used to store this data in the ARC. If ARC
1511 * compression is enabled or this is an encrypted block, this will be the same
1512 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF.
1514 static inline enum zio_compress
1515 arc_hdr_get_compress(arc_buf_hdr_t
*hdr
)
1517 return (HDR_COMPRESSION_ENABLED(hdr
) ?
1518 HDR_GET_COMPRESS(hdr
) : ZIO_COMPRESS_OFF
);
1521 static inline boolean_t
1522 arc_buf_is_shared(arc_buf_t
*buf
)
1524 boolean_t shared
= (buf
->b_data
!= NULL
&&
1525 buf
->b_hdr
->b_l1hdr
.b_pabd
!= NULL
&&
1526 abd_is_linear(buf
->b_hdr
->b_l1hdr
.b_pabd
) &&
1527 buf
->b_data
== abd_to_buf(buf
->b_hdr
->b_l1hdr
.b_pabd
));
1528 IMPLY(shared
, HDR_SHARED_DATA(buf
->b_hdr
));
1529 IMPLY(shared
, ARC_BUF_SHARED(buf
));
1530 IMPLY(shared
, ARC_BUF_COMPRESSED(buf
) || ARC_BUF_LAST(buf
));
1533 * It would be nice to assert arc_can_share() too, but the "hdr isn't
1534 * already being shared" requirement prevents us from doing that.
1541 * Free the checksum associated with this header. If there is no checksum, this is a no-op.
1545 arc_cksum_free(arc_buf_hdr_t
*hdr
)
1547 ASSERT(HDR_HAS_L1HDR(hdr
));
1549 mutex_enter(&hdr
->b_l1hdr
.b_freeze_lock
);
1550 if (hdr
->b_l1hdr
.b_freeze_cksum
!= NULL
) {
1551 kmem_free(hdr
->b_l1hdr
.b_freeze_cksum
, sizeof (zio_cksum_t
));
1552 hdr
->b_l1hdr
.b_freeze_cksum
= NULL
;
1554 mutex_exit(&hdr
->b_l1hdr
.b_freeze_lock
);
1558 * Return true iff at least one of the bufs on hdr is not compressed.
1559 * Encrypted buffers count as compressed.
1562 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t
*hdr
)
1564 ASSERT(hdr
->b_l1hdr
.b_state
== arc_anon
||
1565 MUTEX_HELD(HDR_LOCK(hdr
)) || HDR_EMPTY(hdr
));
1567 for (arc_buf_t
*b
= hdr
->b_l1hdr
.b_buf
; b
!= NULL
; b
= b
->b_next
) {
1568 if (!ARC_BUF_COMPRESSED(b
)) {
/*
 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
 * matches the checksum that is stored in the hdr. If there is no checksum,
 * or if the buf is compressed, this is a no-op.
 */
static void
arc_cksum_verify(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	zio_cksum_t zc;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);

	if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
	if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
		panic("buffer modified while frozen!");
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
}

/*
 * This function makes the assumption that data stored in the L2ARC
 * will be transformed exactly as it is in the main pool. Because of
 * this we can verify the checksum against the reading process's bp.
 */
static boolean_t
arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
{
	ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
	VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));

	/*
	 * Block pointers always store the checksum for the logical data.
	 * If the block pointer has the gang bit set, then the checksum
	 * it represents is for the reconstituted data and not for an
	 * individual gang member. The zio pipeline, however, must be able to
	 * determine the checksum of each of the gang constituents so it
	 * treats the checksum comparison differently than what we need
	 * for l2arc blocks. This prevents us from using the
	 * zio_checksum_error() interface directly. Instead we must call the
	 * zio_checksum_error_impl() so that we can ensure the checksum is
	 * generated using the correct checksum algorithm and accounts for the
	 * logical I/O size and not just a gang fragment.
	 */
	return (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
	    BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
	    zio->io_offset, NULL) == 0);
}

/*
 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
 * checksum and attaches it to the buf's hdr so that we can ensure that the buf
 * isn't modified later on. If buf is compressed or there is already a checksum
 * on the hdr, this is a no-op (we only checksum uncompressed bufs).
 */
static void
arc_cksum_compute(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock);
	if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	ASSERT(!ARC_BUF_ENCRYPTED(buf));
	ASSERT(!ARC_BUF_COMPRESSED(buf));
	hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t),
	    KM_SLEEP);
	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL,
	    hdr->b_l1hdr.b_freeze_cksum);
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
	arc_buf_watch(buf);
}

#ifndef _KERNEL
void
arc_buf_sigsegv(int sig, siginfo_t *si, void *unused)
{
	panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr);
}
#endif

static void
arc_buf_unwatch(arc_buf_t *buf)
{
#ifndef _KERNEL
	if (arc_watch) {
		ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
		    PROT_READ | PROT_WRITE));
	}
#endif
}

static void
arc_buf_watch(arc_buf_t *buf)
{
#ifndef _KERNEL
	if (arc_watch) {
		ASSERT0(mprotect(buf->b_data, arc_buf_size(buf),
		    PROT_READ));
	}
#endif
}

static arc_buf_contents_t
arc_buf_type(arc_buf_hdr_t *hdr)
{
	arc_buf_contents_t type;
	if (HDR_ISTYPE_METADATA(hdr)) {
		type = ARC_BUFC_METADATA;
	} else {
		type = ARC_BUFC_DATA;
	}
	VERIFY3U(hdr->b_type, ==, type);
	return (type);
}

boolean_t
arc_is_metadata(arc_buf_t *buf)
{
	return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0);
}

static uint32_t
arc_bufc_to_flags(arc_buf_contents_t type)
{
	switch (type) {
	case ARC_BUFC_DATA:
		/* metadata field is 0 if buffer contains normal data */
		return (0);
	case ARC_BUFC_METADATA:
		return (ARC_FLAG_BUFC_METADATA);
	default:
		break;
	}
	panic("undefined ARC buffer type!");
	return ((uint32_t)-1);
}

void
arc_buf_thaw(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));

	arc_cksum_verify(buf);

	/*
	 * Compressed buffers do not manipulate the b_freeze_cksum.
	 */
	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));
	arc_cksum_free(hdr);
	arc_buf_unwatch(buf);
}

void
arc_buf_freeze(arc_buf_t *buf)
{
	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(buf->b_hdr));
	arc_cksum_compute(buf);
}

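/*
 * A short sketch of the ZFS_DEBUG_MODIFY freeze/thaw protocol as a
 * hypothetical consumer of an anonymous buf might use it (illustrative only;
 * with the debug flag clear, or for compressed bufs, these calls return
 * early and are effectively no-ops):
 *
 *	arc_buf_thaw(buf);		// about to legitimately modify b_data
 *	...modify buf->b_data...
 *	arc_buf_freeze(buf);		// recompute and store the checksum
 *
 * A modification made while the buf is frozen is caught by
 * arc_cksum_verify(), which panics with "buffer modified while frozen!".
 */
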
/*
 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead,
 * the following functions should be used to ensure that the flags are
 * updated in a thread-safe way. When manipulating the flags either
 * the hash_lock must be held or the hdr must be undiscoverable. This
 * ensures that we're not racing with any other threads when updating
 * the flags.
 */
static inline void
arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	hdr->b_flags |= flags;
}

static inline void
arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	hdr->b_flags &= ~flags;
}

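/*
 * A minimal sketch of the locking pattern the two helpers above expect
 * (illustrative only): a thread that wants to flip a flag on a hdr that is
 * discoverable through the hash table takes the hash lock around the update.
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *
 *	mutex_enter(hash_lock);
 *	arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
 *	mutex_exit(hash_lock);
 *
 * Freshly allocated, anonymous hdrs are undiscoverable (HDR_EMPTY()), so
 * they may be updated without the lock, which is what the ASSERTs encode.
 */
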
/*
 * Setting the compression bits in the arc_buf_hdr_t's b_flags is
 * done in a special way since we have to clear and set bits
 * at the same time. Consumers that wish to set the compression bits
 * must use this function to ensure that the flags are updated in
 * thread-safe manner.
 */
static void
arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp)
{
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Holes and embedded blocks will always have a psize = 0 so
	 * we ignore the compression of the blkptr and mark them as
	 * uncompressed.
	 */
	if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) {
		arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(!HDR_COMPRESSION_ENABLED(hdr));
	} else {
		arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(HDR_COMPRESSION_ENABLED(hdr));
	}

	HDR_SET_COMPRESS(hdr, cmp);
	ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp);
}

/*
 * Looks for another buf on the same hdr which has the data decompressed,
 * copies from it, and returns true. If no such buf exists, returns false.
 */
static boolean_t
arc_buf_try_copy_decompressed_data(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	boolean_t copied = B_FALSE;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(!ARC_BUF_COMPRESSED(buf));

	for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL;
	    from = from->b_next) {
		/* can't use our own data buffer */
		if (from == buf) {
			continue;
		}

		if (!ARC_BUF_COMPRESSED(from)) {
			bcopy(from->b_data, buf->b_data, arc_buf_size(buf));
			copied = B_TRUE;
			break;
		}
	}

	/*
	 * There were no decompressed bufs, so there should not be a
	 * checksum on the hdr either.
	 */
	EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL);

	return (copied);
}

/*
 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t.
 */
static uint64_t
arc_hdr_size(arc_buf_hdr_t *hdr)
{
	uint64_t size;

	if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
	    HDR_GET_PSIZE(hdr) > 0) {
		size = HDR_GET_PSIZE(hdr);
	} else {
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0);
		size = HDR_GET_LSIZE(hdr);
	}
	return (size);
}

static int
arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj)
{
	int ret;
	uint64_t csize;
	uint64_t lsize = HDR_GET_LSIZE(hdr);
	uint64_t psize = HDR_GET_PSIZE(hdr);
	void *tmpbuf = NULL;
	abd_t *abd = hdr->b_l1hdr.b_pabd;

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT(HDR_AUTHENTICATED(hdr));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	/*
	 * The MAC is calculated on the compressed data that is stored on disk.
	 * However, if compressed arc is disabled we will only have the
	 * decompressed data available to us now. Compress it into a temporary
	 * abd so we can verify the MAC. The performance overhead of this will
	 * be relatively low, since most objects in an encrypted objset will
	 * be encrypted (instead of authenticated) anyway.
	 */
	if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
	    !HDR_COMPRESSION_ENABLED(hdr)) {
		tmpbuf = zio_buf_alloc(lsize);
		abd = abd_get_from_buf(tmpbuf, lsize);
		abd_take_ownership_of_buf(abd, B_TRUE);

		csize = zio_compress_data(HDR_GET_COMPRESS(hdr),
		    hdr->b_l1hdr.b_pabd, tmpbuf, lsize);
		ASSERT3U(csize, <=, psize);
		abd_zero_off(abd, csize, psize - csize);
	}

	/*
	 * Authentication is best effort. We authenticate whenever the key is
	 * available. If we succeed we clear ARC_FLAG_NOAUTH.
	 */
	if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) {
		ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF);
		ASSERT3U(lsize, ==, psize);
		ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd,
		    psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
	} else {
		ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize,
		    hdr->b_crypt_hdr.b_mac);
	}

	if (ret == 0)
		arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH);
	else if (ret != ENOENT)
		goto error;

	if (tmpbuf != NULL)
		abd_free(abd);

	return (0);

error:
	if (tmpbuf != NULL)
		abd_free(abd);

	return (ret);
}

/*
 * This function will take a header that only has raw encrypted data in
 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in
 * b_l1hdr.b_pabd. If designated in the header flags, this function will
 * also decompress the data.
 */
static int
arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb)
{
	int ret;
	abd_t *cabd = NULL;
	void *tmp = NULL;
	boolean_t no_crypt = B_FALSE;
	boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT(HDR_ENCRYPTED(hdr));

	arc_hdr_alloc_abd(hdr, B_FALSE);

	ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot,
	    B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv,
	    hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd,
	    hdr->b_crypt_hdr.b_rabd, &no_crypt);
	if (ret != 0)
		goto error;

	if (no_crypt) {
		abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd,
		    HDR_GET_PSIZE(hdr));
	}

	/*
	 * If this header has disabled arc compression but the b_pabd is
	 * compressed after decrypting it, we need to decompress the newly
	 * decrypted data.
	 */
	if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
	    !HDR_COMPRESSION_ENABLED(hdr)) {
		/*
		 * We want to make sure that we are correctly honoring the
		 * zfs_abd_scatter_enabled setting, so we allocate an abd here
		 * and then loan a buffer from it, rather than allocating a
		 * linear buffer and wrapping it in an abd later.
		 */
		cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr);
		tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr));

		ret = zio_decompress_data(HDR_GET_COMPRESS(hdr),
		    hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr),
		    HDR_GET_LSIZE(hdr));
		if (ret != 0) {
			abd_return_buf(cabd, tmp, arc_hdr_size(hdr));
			goto error;
		}

		abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr));
		arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd,
		    arc_hdr_size(hdr), hdr);
		hdr->b_l1hdr.b_pabd = cabd;
	}

	return (0);

error:
	arc_hdr_free_abd(hdr, B_FALSE);
	if (cabd != NULL)
		arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr);

	return (ret);
}

/*
 * This function is called during arc_buf_fill() to prepare the header's
 * abd plaintext pointer for use. This involves authenticating protected
 * data and decrypting encrypted data into the plaintext abd.
 */
static int
arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa,
    const zbookmark_phys_t *zb, boolean_t noauth)
{
	int ret;

	ASSERT(HDR_PROTECTED(hdr));

	if (hash_lock != NULL)
		mutex_enter(hash_lock);

	if (HDR_NOAUTH(hdr) && !noauth) {
		/*
		 * The caller requested authenticated data but our data has
		 * not been authenticated yet. Verify the MAC now if we can.
		 */
		ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset);
		if (ret != 0)
			goto error;
	} else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) {
		/*
		 * If we only have the encrypted version of the data, but the
		 * unencrypted version was requested we take this opportunity
		 * to store the decrypted version in the header for future use.
		 */
		ret = arc_hdr_decrypt(hdr, spa, zb);
		if (ret != 0)
			goto error;
	}

	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	if (hash_lock != NULL)
		mutex_exit(hash_lock);

	return (0);

error:
	if (hash_lock != NULL)
		mutex_exit(hash_lock);

	return (ret);
}

/*
 * This function is used by the dbuf code to decrypt bonus buffers in place.
 * The dbuf code itself doesn't have any locking for decrypting a shared dnode
 * block, so we use the hash lock here to protect against concurrent calls to
 * arc_buf_fill().
 */
static void
arc_buf_untransform_in_place(arc_buf_t *buf, kmutex_t *hash_lock)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(HDR_ENCRYPTED(hdr));
	ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);

	zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data,
	    arc_buf_size(buf));
	buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
	buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	hdr->b_crypt_hdr.b_ebufcnt -= 1;
}

/*
 * Given a buf that has a data buffer attached to it, this function will
 * efficiently fill the buf with data of the specified compression setting from
 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr
 * are already sharing a data buf, no copy is performed.
 *
 * If the buf is marked as compressed but uncompressed data was requested, this
 * will allocate a new data buffer for the buf, remove that flag, and fill the
 * buf with uncompressed data. You can't request a compressed buf on a hdr with
 * uncompressed data, and (since we haven't added support for it yet) if you
 * want compressed data your buf must already be marked as compressed and have
 * the correct-sized data buffer.
 */
static int
arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
    arc_fill_flags_t flags)
{
	int error = 0;
	arc_buf_hdr_t *hdr = buf->b_hdr;
	boolean_t hdr_compressed =
	    (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
	boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0;
	boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0;
	dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap;
	kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? NULL : HDR_LOCK(hdr);

	ASSERT3P(buf->b_data, !=, NULL);
	IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf));
	IMPLY(compressed, ARC_BUF_COMPRESSED(buf));
	IMPLY(encrypted, HDR_ENCRYPTED(hdr));
	IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf));
	IMPLY(encrypted, ARC_BUF_COMPRESSED(buf));
	IMPLY(encrypted, !ARC_BUF_SHARED(buf));

	/*
	 * If the caller wanted encrypted data we just need to copy it from
	 * b_rabd and potentially byteswap it. We won't be able to do any
	 * further transforms on it.
	 */
	if (encrypted) {
		ASSERT(HDR_HAS_RABD(hdr));
		abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd,
		    HDR_GET_PSIZE(hdr));
		goto byteswap;
	}

	/*
	 * Adjust encrypted and authenticated headers to accommodate
	 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are
	 * allowed to fail decryption due to keys not being loaded
	 * without being marked as an IO error.
	 */
	if (HDR_PROTECTED(hdr)) {
		error = arc_fill_hdr_crypt(hdr, hash_lock, spa,
		    zb, !!(flags & ARC_FILL_NOAUTH));
		if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) {
			return (error);
		} else if (error != 0) {
			if (hash_lock != NULL)
				mutex_enter(hash_lock);
			arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
			if (hash_lock != NULL)
				mutex_exit(hash_lock);
			return (error);
		}
	}

	/*
	 * There is a special case here for dnode blocks which are
	 * decrypting their bonus buffers. These blocks may request to
	 * be decrypted in-place. This is necessary because there may
	 * be many dnodes pointing into this buffer and there is
	 * currently no method to synchronize replacing the backing
	 * b_data buffer and updating all of the pointers. Here we use
	 * the hash lock to ensure there are no races. If the need
	 * arises for other types to be decrypted in-place, they must
	 * add handling here as well.
	 */
	if ((flags & ARC_FILL_IN_PLACE) != 0) {
		ASSERT(!hdr_compressed);
		ASSERT(!compressed);
		ASSERT(!encrypted);

		if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) {
			ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE);

			if (hash_lock != NULL)
				mutex_enter(hash_lock);
			arc_buf_untransform_in_place(buf, hash_lock);
			if (hash_lock != NULL)
				mutex_exit(hash_lock);

			/* Compute the hdr's checksum if necessary */
			arc_cksum_compute(buf);
		}

		return (0);
	}

	if (hdr_compressed == compressed) {
		if (!arc_buf_is_shared(buf)) {
			abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd,
			    arc_buf_size(buf));
		}
	} else {
		ASSERT(hdr_compressed);
		ASSERT(!compressed);
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr));

		/*
		 * If the buf is sharing its data with the hdr, unlink it and
		 * allocate a new data buffer for the buf.
		 */
		if (arc_buf_is_shared(buf)) {
			ASSERT(ARC_BUF_COMPRESSED(buf));

			/* We need to give the buf its own b_data */
			buf->b_flags &= ~ARC_BUF_FLAG_SHARED;
			buf->b_data =
			    arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);
			arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);

			/* Previously overhead was 0; just add new overhead */
			ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr));
		} else if (ARC_BUF_COMPRESSED(buf)) {
			/* We need to reallocate the buf's b_data */
			arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr),
			    buf);
			buf->b_data =
			    arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf);

			/* We increased the size of b_data; update overhead */
			ARCSTAT_INCR(arcstat_overhead_size,
			    HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr));
		}

		/*
		 * Regardless of the buf's previous compression settings, it
		 * should not be compressed at the end of this function.
		 */
		buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;

		/*
		 * Try copying the data from another buf which already has a
		 * decompressed version. If that's not possible, it's time to
		 * bite the bullet and decompress the data from the hdr.
		 */
		if (arc_buf_try_copy_decompressed_data(buf)) {
			/* Skip byteswapping and checksumming (already done) */
			ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, !=, NULL);
			return (0);
		} else {
			error = zio_decompress_data(HDR_GET_COMPRESS(hdr),
			    hdr->b_l1hdr.b_pabd, buf->b_data,
			    HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));

			/*
			 * Absent hardware errors or software bugs, this should
			 * be impossible, but log it anyway so we can debug it.
			 */
			if (error != 0) {
				zfs_dbgmsg(
				    "hdr %p, compress %d, psize %d, lsize %d",
				    hdr, arc_hdr_get_compress(hdr),
				    HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr));
				if (hash_lock != NULL)
					mutex_enter(hash_lock);
				arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
				if (hash_lock != NULL)
					mutex_exit(hash_lock);
				return (SET_ERROR(EIO));
			}
		}
	}

byteswap:
	/* Byteswap the buf's data if necessary */
	if (bswap != DMU_BSWAP_NUMFUNCS) {
		ASSERT(!HDR_SHARED_DATA(hdr));
		ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS);
		dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr));
	}

	/* Compute the hdr's checksum if necessary */
	arc_cksum_compute(buf);

	return (0);
}

/*
 * If this function is being called to decrypt an encrypted buffer or verify an
 * authenticated one, the key must be loaded and a mapping must be made
 * available in the keystore via spa_keystore_create_mapping() or one of its
 * callers.
 */
int
arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb,
    boolean_t in_place)
{
	int ret;
	arc_fill_flags_t flags = 0;

	if (in_place)
		flags |= ARC_FILL_IN_PLACE;

	ret = arc_buf_fill(buf, spa, zb, flags);
	if (ret == ECKSUM) {
		/*
		 * Convert authentication and decryption errors to EIO
		 * (and generate an ereport) before leaving the ARC.
		 */
		ret = SET_ERROR(EIO);
		spa_log_error(spa, zb);
		zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION,
		    spa, NULL, zb, NULL, 0, 0);
	}

	return (ret);
}

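/*
 * A minimal usage sketch (hypothetical caller; assumes the encryption key is
 * already loaded and a keystore mapping exists for the dataset):
 *
 *	if (arc_is_encrypted(buf)) {
 *		int err = arc_untransform(buf, spa, zb, B_FALSE);
 *		if (err != 0)
 *			return (err);
 *	}
 *
 * On success the buf's b_data holds decrypted (and, if necessary,
 * decompressed) data; on authentication failure the error has already been
 * converted to EIO and an ereport has been posted.
 */
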
/*
 * Increment the amount of evictable space in the arc_state_t's refcount.
 * We account for the space used by the hdr and the arc buf individually
 * so that we can add and remove them from the refcount individually.
 */
static void
arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));

	if (GHOST_STATE(state)) {
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
		(void) zfs_refcount_add_many(&state->arcs_esize[type],
		    HDR_GET_LSIZE(hdr), hdr);
		return;
	}

	ASSERT(!GHOST_STATE(state));
	if (hdr->b_l1hdr.b_pabd != NULL) {
		(void) zfs_refcount_add_many(&state->arcs_esize[type],
		    arc_hdr_size(hdr), hdr);
	}
	if (HDR_HAS_RABD(hdr)) {
		(void) zfs_refcount_add_many(&state->arcs_esize[type],
		    HDR_GET_PSIZE(hdr), hdr);
	}
	for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
	    buf = buf->b_next) {
		if (arc_buf_is_shared(buf))
			continue;
		(void) zfs_refcount_add_many(&state->arcs_esize[type],
		    arc_buf_size(buf), buf);
	}
}

/*
 * Decrement the amount of evictable space in the arc_state_t's refcount.
 * We account for the space used by the hdr and the arc buf individually
 * so that we can add and remove them from the refcount individually.
 */
static void
arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));

	if (GHOST_STATE(state)) {
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    HDR_GET_LSIZE(hdr), hdr);
		return;
	}

	ASSERT(!GHOST_STATE(state));
	if (hdr->b_l1hdr.b_pabd != NULL) {
		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    arc_hdr_size(hdr), hdr);
	}
	if (HDR_HAS_RABD(hdr)) {
		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    HDR_GET_PSIZE(hdr), hdr);
	}
	for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
	    buf = buf->b_next) {
		if (arc_buf_is_shared(buf))
			continue;
		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    arc_buf_size(buf), buf);
	}
}

/*
 * Add a reference to this hdr indicating that someone is actively
 * referencing that memory. When the refcount transitions from 0 to 1,
 * we remove it from the respective arc_state_t list to indicate that
 * it is not evictable.
 */
static void
add_reference(arc_buf_hdr_t *hdr, void *tag)
{
	arc_state_t *state;

	ASSERT(HDR_HAS_L1HDR(hdr));
	if (!MUTEX_HELD(HDR_LOCK(hdr))) {
		ASSERT(hdr->b_l1hdr.b_state == arc_anon);
		ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
	}

	state = hdr->b_l1hdr.b_state;

	if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) &&
	    (state != arc_anon)) {
		/* We don't use the L2-only state list. */
		if (state != arc_l2c_only) {
			multilist_remove(state->arcs_list[arc_buf_type(hdr)],
			    hdr);
			arc_evictable_space_decrement(hdr, state);
		}
		/* remove the prefetch flag if we get a reference */
		arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH);
	}
}

/*
 * Remove a reference from this hdr. When the reference transitions from
 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's
 * list making it eligible for eviction.
 */
static int
remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag)
{
	int cnt;
	arc_state_t *state = hdr->b_l1hdr.b_state;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(state == arc_anon || MUTEX_HELD(hash_lock));
	ASSERT(!GHOST_STATE(state));

	/*
	 * arc_l2c_only counts as a ghost state so we don't need to explicitly
	 * check to prevent usage of the arc_l2c_only list.
	 */
	if (((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) &&
	    (state != arc_anon)) {
		multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr);
		ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
		arc_evictable_space_increment(hdr, state);
	}
	return (cnt);
}

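/*
 * A sketch of the reference lifecycle these two functions implement
 * (illustrative only): while a consumer holds a reference, the hdr is pinned
 * and off the eviction lists; dropping the last reference makes it evictable
 * again.
 *
 *	mutex_enter(hash_lock);
 *	add_reference(hdr, tag);	// 0 -> 1: pulled off the state list
 *	mutex_exit(hash_lock);
 *	...use the buffer...
 *	mutex_enter(hash_lock);
 *	(void) remove_reference(hdr, hash_lock, tag); // 1 -> 0: reinserted
 *	mutex_exit(hash_lock);
 */
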
/*
 * Returns detailed information about a specific arc buffer. When the
 * state_index argument is set the function will calculate the arc header
 * list position for its arc state. Since this requires a linear traversal
 * callers are strongly encouraged not to do this. However, it can be helpful
 * for targeted analysis so the functionality is provided.
 */
void
arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index)
{
	arc_buf_hdr_t *hdr = ab->b_hdr;
	l1arc_buf_hdr_t *l1hdr = NULL;
	l2arc_buf_hdr_t *l2hdr = NULL;
	arc_state_t *state = NULL;

	memset(abi, 0, sizeof (arc_buf_info_t));

	if (hdr == NULL)
		return;

	abi->abi_flags = hdr->b_flags;

	if (HDR_HAS_L1HDR(hdr)) {
		l1hdr = &hdr->b_l1hdr;
		state = l1hdr->b_state;
	}
	if (HDR_HAS_L2HDR(hdr))
		l2hdr = &hdr->b_l2hdr;

	if (l1hdr) {
		abi->abi_bufcnt = l1hdr->b_bufcnt;
		abi->abi_access = l1hdr->b_arc_access;
		abi->abi_mru_hits = l1hdr->b_mru_hits;
		abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits;
		abi->abi_mfu_hits = l1hdr->b_mfu_hits;
		abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits;
		abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt);
	}

	if (l2hdr) {
		abi->abi_l2arc_dattr = l2hdr->b_daddr;
		abi->abi_l2arc_hits = l2hdr->b_hits;
	}

	abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON;
	abi->abi_state_contents = arc_buf_type(hdr);
	abi->abi_size = arc_hdr_size(hdr);
}

/*
 * Move the supplied buffer to the indicated state. The hash lock
 * for the buffer must be held by the caller.
 */
static void
arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr,
    kmutex_t *hash_lock)
{
	arc_state_t *old_state;
	int64_t refcnt;
	uint32_t bufcnt;
	boolean_t update_old, update_new;
	arc_buf_contents_t buftype = arc_buf_type(hdr);

	/*
	 * We almost always have an L1 hdr here, since we call arc_hdr_realloc()
	 * in arc_read() when bringing a buffer out of the L2ARC. However, the
	 * L1 hdr doesn't always exist when we change state to arc_anon before
	 * destroying a header, in which case reallocating to add the L1 hdr is
	 * pointless.
	 */
	if (HDR_HAS_L1HDR(hdr)) {
		old_state = hdr->b_l1hdr.b_state;
		refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt);
		bufcnt = hdr->b_l1hdr.b_bufcnt;
		update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL ||
		    HDR_HAS_RABD(hdr));
	} else {
		old_state = arc_l2c_only;
		refcnt = 0;
		bufcnt = 0;
		update_old = B_FALSE;
	}
	update_new = update_old;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT3P(new_state, !=, old_state);
	ASSERT(!GHOST_STATE(new_state) || bufcnt == 0);
	ASSERT(old_state != arc_anon || bufcnt <= 1);

	/*
	 * If this buffer is evictable, transfer it from the
	 * old state list to the new state list.
	 */
	if (refcnt == 0) {
		if (old_state != arc_anon && old_state != arc_l2c_only) {
			ASSERT(HDR_HAS_L1HDR(hdr));
			multilist_remove(old_state->arcs_list[buftype], hdr);

			if (GHOST_STATE(old_state)) {
				ASSERT0(bufcnt);
				ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
				update_old = B_TRUE;
			}
			arc_evictable_space_decrement(hdr, old_state);
		}
		if (new_state != arc_anon && new_state != arc_l2c_only) {
			/*
			 * An L1 header always exists here, since if we're
			 * moving to some L1-cached state (i.e. not l2c_only or
			 * anonymous), we realloc the header to add an L1hdr
			 * beforehand.
			 */
			ASSERT(HDR_HAS_L1HDR(hdr));
			multilist_insert(new_state->arcs_list[buftype], hdr);

			if (GHOST_STATE(new_state)) {
				ASSERT0(bufcnt);
				ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
				update_new = B_TRUE;
			}
			arc_evictable_space_increment(hdr, new_state);
		}
	}

	ASSERT(!HDR_EMPTY(hdr));
	if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr))
		buf_hash_remove(hdr);

	/* adjust state sizes (ignore arc_l2c_only) */

	if (update_new && new_state != arc_l2c_only) {
		ASSERT(HDR_HAS_L1HDR(hdr));
		if (GHOST_STATE(new_state)) {
			ASSERT0(bufcnt);

			/*
			 * When moving a header to a ghost state, we first
			 * remove all arc buffers. Thus, we'll have a
			 * bufcnt of zero, and no arc buffer to use for
			 * the reference. As a result, we use the arc
			 * header pointer for the reference.
			 */
			(void) zfs_refcount_add_many(&new_state->arcs_size,
			    HDR_GET_LSIZE(hdr), hdr);
			ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
			ASSERT(!HDR_HAS_RABD(hdr));
		} else {
			uint32_t buffers = 0;

			/*
			 * Each individual buffer holds a unique reference,
			 * thus we must remove each of these references one
			 * at a time.
			 */
			for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
			    buf = buf->b_next) {
				ASSERT3U(bufcnt, !=, 0);
				buffers++;

				/*
				 * When the arc_buf_t is sharing the data
				 * block with the hdr, the owner of the
				 * reference belongs to the hdr. Only
				 * add to the refcount if the arc_buf_t is
				 * not shared.
				 */
				if (arc_buf_is_shared(buf))
					continue;

				(void) zfs_refcount_add_many(
				    &new_state->arcs_size,
				    arc_buf_size(buf), buf);
			}
			ASSERT3U(bufcnt, ==, buffers);

			if (hdr->b_l1hdr.b_pabd != NULL) {
				(void) zfs_refcount_add_many(
				    &new_state->arcs_size,
				    arc_hdr_size(hdr), hdr);
			}

			if (HDR_HAS_RABD(hdr)) {
				(void) zfs_refcount_add_many(
				    &new_state->arcs_size,
				    HDR_GET_PSIZE(hdr), hdr);
			}
		}
	}

	if (update_old && old_state != arc_l2c_only) {
		ASSERT(HDR_HAS_L1HDR(hdr));
		if (GHOST_STATE(old_state)) {
			ASSERT0(bufcnt);
			ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
			ASSERT(!HDR_HAS_RABD(hdr));

			/*
			 * When moving a header off of a ghost state,
			 * the header will not contain any arc buffers.
			 * We use the arc header pointer for the reference
			 * which is exactly what we did when we put the
			 * header on the ghost state.
			 */
			(void) zfs_refcount_remove_many(&old_state->arcs_size,
			    HDR_GET_LSIZE(hdr), hdr);
		} else {
			uint32_t buffers = 0;

			/*
			 * Each individual buffer holds a unique reference,
			 * thus we must remove each of these references one
			 * at a time.
			 */
			for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL;
			    buf = buf->b_next) {
				ASSERT3U(bufcnt, !=, 0);
				buffers++;

				/*
				 * When the arc_buf_t is sharing the data
				 * block with the hdr, the owner of the
				 * reference belongs to the hdr. Only
				 * add to the refcount if the arc_buf_t is
				 * not shared.
				 */
				if (arc_buf_is_shared(buf))
					continue;

				(void) zfs_refcount_remove_many(
				    &old_state->arcs_size, arc_buf_size(buf),
				    buf);
			}
			ASSERT3U(bufcnt, ==, buffers);
			ASSERT(hdr->b_l1hdr.b_pabd != NULL ||
			    HDR_HAS_RABD(hdr));

			if (hdr->b_l1hdr.b_pabd != NULL) {
				(void) zfs_refcount_remove_many(
				    &old_state->arcs_size, arc_hdr_size(hdr),
				    hdr);
			}

			if (HDR_HAS_RABD(hdr)) {
				(void) zfs_refcount_remove_many(
				    &old_state->arcs_size, HDR_GET_PSIZE(hdr),
				    hdr);
			}
		}
	}

	if (HDR_HAS_L1HDR(hdr))
		hdr->b_l1hdr.b_state = new_state;

	/*
	 * L2 headers should never be on the L2 state list since they don't
	 * have L1 headers allocated.
	 */
	ASSERT(multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_DATA]) &&
	    multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]));
}

void
arc_space_consume(uint64_t space, arc_space_type_t type)
{
	ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

	switch (type) {
	case ARC_SPACE_DATA:
		aggsum_add(&astat_data_size, space);
		break;
	case ARC_SPACE_META:
		aggsum_add(&astat_metadata_size, space);
		break;
	case ARC_SPACE_BONUS:
		aggsum_add(&astat_bonus_size, space);
		break;
	case ARC_SPACE_DNODE:
		aggsum_add(&astat_dnode_size, space);
		break;
	case ARC_SPACE_DBUF:
		aggsum_add(&astat_dbuf_size, space);
		break;
	case ARC_SPACE_HDRS:
		aggsum_add(&astat_hdr_size, space);
		break;
	case ARC_SPACE_L2HDRS:
		aggsum_add(&astat_l2_hdr_size, space);
		break;
	}

	if (type != ARC_SPACE_DATA)
		aggsum_add(&arc_meta_used, space);

	aggsum_add(&arc_size, space);
}

void
arc_space_return(uint64_t space, arc_space_type_t type)
{
	ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES);

	switch (type) {
	case ARC_SPACE_DATA:
		aggsum_add(&astat_data_size, -space);
		break;
	case ARC_SPACE_META:
		aggsum_add(&astat_metadata_size, -space);
		break;
	case ARC_SPACE_BONUS:
		aggsum_add(&astat_bonus_size, -space);
		break;
	case ARC_SPACE_DNODE:
		aggsum_add(&astat_dnode_size, -space);
		break;
	case ARC_SPACE_DBUF:
		aggsum_add(&astat_dbuf_size, -space);
		break;
	case ARC_SPACE_HDRS:
		aggsum_add(&astat_hdr_size, -space);
		break;
	case ARC_SPACE_L2HDRS:
		aggsum_add(&astat_l2_hdr_size, -space);
		break;
	}

	if (type != ARC_SPACE_DATA) {
		ASSERT(aggsum_compare(&arc_meta_used, space) >= 0);
		/*
		 * We use the upper bound here rather than the precise value
		 * because the arc_meta_max value doesn't need to be
		 * precise. It's only consumed by humans via arcstats.
		 */
		if (arc_meta_max < aggsum_upper_bound(&arc_meta_used))
			arc_meta_max = aggsum_upper_bound(&arc_meta_used);
		aggsum_add(&arc_meta_used, -space);
	}

	ASSERT(aggsum_compare(&arc_size, space) >= 0);
	aggsum_add(&arc_size, -space);
}

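/*
 * arc_space_consume() and arc_space_return() must be called in matched
 * pairs with the same size and type. A minimal sketch for a hypothetical
 * metadata structure charged to the ARC (the struct name is illustrative):
 *
 *	arc_space_consume(sizeof (my_struct_t), ARC_SPACE_META);
 *	...structure is alive and accounted in arc_size / arc_meta_used...
 *	arc_space_return(sizeof (my_struct_t), ARC_SPACE_META);
 *
 * Any type other than ARC_SPACE_DATA is also charged against arc_meta_used.
 */
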
/*
 * Given a hdr and a buf, returns whether that buf can share its b_data buffer
 * with the hdr's b_pabd.
 */
static boolean_t
arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	/*
	 * The criteria for sharing a hdr's data are:
	 * 1. the buffer is not encrypted
	 * 2. the hdr's compression matches the buf's compression
	 * 3. the hdr doesn't need to be byteswapped
	 * 4. the hdr isn't already being shared
	 * 5. the buf is either compressed or it is the last buf in the hdr list
	 *
	 * Criterion #5 maintains the invariant that shared uncompressed
	 * bufs must be the final buf in the hdr's b_buf list. Reading this, you
	 * might ask, "if a compressed buf is allocated first, won't that be the
	 * last thing in the list?", but in that case it's impossible to create
	 * a shared uncompressed buf anyway (because the hdr must be compressed
	 * to have the compressed buf). You might also think that #3 is
	 * sufficient to make this guarantee, however it's possible
	 * (specifically in the rare L2ARC write race mentioned in
	 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that
	 * is sharable, but wasn't at the time of its allocation. Rather than
	 * allow a new shared uncompressed buf to be created and then shuffle
	 * the list around to make it the last element, this simply disallows
	 * sharing if the new buf isn't the first to be added.
	 */
	ASSERT3P(buf->b_hdr, ==, hdr);
	boolean_t hdr_compressed =
	    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF;
	boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0;
	return (!ARC_BUF_ENCRYPTED(buf) &&
	    buf_compressed == hdr_compressed &&
	    hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS &&
	    !HDR_SHARED_DATA(hdr) &&
	    (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf)));
}

/*
 * Allocate a buf for this hdr. If you care about the data that's in the hdr,
 * or if you want a compressed buffer, pass those flags in. Returns 0 if the
 * copy was made successfully, or an error code otherwise.
 */
static int
arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb,
    void *tag, boolean_t encrypted, boolean_t compressed, boolean_t noauth,
    boolean_t fill, arc_buf_t **ret)
{
	arc_buf_t *buf;
	arc_fill_flags_t flags = ARC_FILL_LOCKED;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
	VERIFY(hdr->b_type == ARC_BUFC_DATA ||
	    hdr->b_type == ARC_BUFC_METADATA);
	ASSERT3P(ret, !=, NULL);
	ASSERT3P(*ret, ==, NULL);
	IMPLY(encrypted, compressed);

	hdr->b_l1hdr.b_mru_hits = 0;
	hdr->b_l1hdr.b_mru_ghost_hits = 0;
	hdr->b_l1hdr.b_mfu_hits = 0;
	hdr->b_l1hdr.b_mfu_ghost_hits = 0;
	hdr->b_l1hdr.b_l2_hits = 0;

	buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE);
	buf->b_hdr = hdr;
	buf->b_data = NULL;
	buf->b_next = hdr->b_l1hdr.b_buf;
	buf->b_flags = 0;

	add_reference(hdr, tag);

	/*
	 * We're about to change the hdr's b_flags. We must either
	 * hold the hash_lock or be undiscoverable.
	 */
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Only honor requests for compressed bufs if the hdr is actually
	 * compressed. This must be overridden if the buffer is encrypted since
	 * encrypted buffers cannot be decompressed.
	 */
	if (encrypted) {
		buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
		buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED;
		flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED;
	} else if (compressed &&
	    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) {
		buf->b_flags |= ARC_BUF_FLAG_COMPRESSED;
		flags |= ARC_FILL_COMPRESSED;
	}

	if (noauth) {
		ASSERT0(encrypted);
		flags |= ARC_FILL_NOAUTH;
	}

	/*
	 * If the hdr's data can be shared then we share the data buffer and
	 * set the appropriate bit in the hdr's b_flags to indicate the hdr is
	 * sharing its b_pabd with the arc_buf_t. Otherwise, we allocate a new
	 * buffer to store the buf's data.
	 *
	 * There are two additional restrictions here because we're sharing
	 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be
	 * actively involved in an L2ARC write, because if this buf is used by
	 * an arc_write() then the hdr's data buffer will be released when the
	 * write completes, even though the L2ARC write might still be using it.
	 * Second, the hdr's ABD must be linear so that the buf's user doesn't
	 * need to be ABD-aware.
	 */
	boolean_t can_share = arc_can_share(hdr, buf) && !HDR_L2_WRITING(hdr) &&
	    hdr->b_l1hdr.b_pabd != NULL && abd_is_linear(hdr->b_l1hdr.b_pabd);

	/* Set up b_data and sharing */
	if (can_share) {
		buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd);
		buf->b_flags |= ARC_BUF_FLAG_SHARED;
		arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
	} else {
		buf->b_data =
		    arc_get_data_buf(hdr, arc_buf_size(buf), buf);
		ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
	}
	VERIFY3P(buf->b_data, !=, NULL);

	hdr->b_l1hdr.b_buf = buf;
	hdr->b_l1hdr.b_bufcnt += 1;
	if (encrypted)
		hdr->b_crypt_hdr.b_ebufcnt += 1;

	/*
	 * If the user wants the data from the hdr, we need to either copy or
	 * decompress the data.
	 */
	if (fill) {
		ASSERT3P(zb, !=, NULL);
		return (arc_buf_fill(buf, spa, zb, flags));
	}

	return (0);
}

static char *arc_onloan_tag = "onloan";

static void
arc_loaned_bytes_update(int64_t delta)
{
	atomic_add_64(&arc_loaned_bytes, delta);

	/* assert that it did not wrap around */
	ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
}

/*
 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in
 * flight data by arc_tempreserve_space() until they are "returned". Loaned
 * buffers must be returned to the arc before they can be used by the DMU or
 * freed.
 */
arc_buf_t *
arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size)
{
	arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag,
	    is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size);

	arc_loaned_bytes_update(arc_buf_size(buf));

	return (buf);
}

arc_buf_t *
arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag,
	    psize, lsize, compression_type);

	arc_loaned_bytes_update(arc_buf_size(buf));

	return (buf);
}

arc_buf_t *
arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder,
    const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
    dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
	arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj,
	    byteorder, salt, iv, mac, ot, psize, lsize, compression_type);

	atomic_add_64(&arc_loaned_bytes, psize);

	return (buf);
}

/*
 * Return a loaned arc buffer to the arc.
 */
void
arc_return_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag);
	(void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);

	arc_loaned_bytes_update(-arc_buf_size(buf));
}

/* Detach an arc_buf from a dbuf (tag) */
void
arc_loan_inuse_buf(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(HDR_HAS_L1HDR(hdr));
	(void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag);
	(void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag);

	arc_loaned_bytes_update(arc_buf_size(buf));
}

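/*
 * A minimal sketch of the loan/return cycle (illustrative only): a consumer
 * such as the DMU borrows an anonymous buffer, fills it, and then returns it
 * before it can be written or freed.
 *
 *	arc_buf_t *abuf = arc_loan_buf(spa, B_FALSE, size);
 *	...fill abuf->b_data...
 *	arc_return_buf(abuf, FTAG);	// give the reference back to the ARC
 *
 * While on loan the bytes are tracked in arc_loaned_bytes and are not
 * counted as in-flight data by arc_tempreserve_space().
 */
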
static void
l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type)
{
	l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP);

	df->l2df_abd = abd;
	df->l2df_size = size;
	df->l2df_type = type;
	mutex_enter(&l2arc_free_on_write_mtx);
	list_insert_head(l2arc_free_on_write, df);
	mutex_exit(&l2arc_free_on_write_mtx);
}

static void
arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);
	uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);

	/* protected by hash lock, if in the hash table */
	if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
		ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT(state != arc_anon && state != arc_l2c_only);

		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    size, hdr);
	}
	(void) zfs_refcount_remove_many(&state->arcs_size, size, hdr);
	if (type == ARC_BUFC_METADATA) {
		arc_space_return(size, ARC_SPACE_META);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		arc_space_return(size, ARC_SPACE_DATA);
	}

	if (free_rdata) {
		l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type);
	} else {
		l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type);
	}
}

/*
 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the
 * data buffer, we transfer the refcount ownership to the hdr and update
 * the appropriate kstats.
 */
static void
arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(arc_can_share(hdr, buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
	ASSERT(!ARC_BUF_ENCRYPTED(buf));
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * Start sharing the data buffer. We transfer the
	 * refcount ownership to the hdr since it always owns
	 * the refcount whenever an arc_buf_t is shared.
	 */
	zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size,
	    arc_hdr_size(hdr), buf, hdr);
	hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf));
	abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd,
	    HDR_ISTYPE_METADATA(hdr));
	arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA);
	buf->b_flags |= ARC_BUF_FLAG_SHARED;

	/*
	 * Since we've transferred ownership to the hdr we need
	 * to increment its compressed and uncompressed kstats and
	 * decrement the overhead size.
	 */
	ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr));
	ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
	ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf));
}

static void
arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(arc_buf_is_shared(buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	/*
	 * We are no longer sharing this buffer so we need
	 * to transfer its ownership to the rightful owner.
	 */
	zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size,
	    arc_hdr_size(hdr), hdr, buf);
	arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
	abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd);
	abd_put(hdr->b_l1hdr.b_pabd);
	hdr->b_l1hdr.b_pabd = NULL;
	buf->b_flags &= ~ARC_BUF_FLAG_SHARED;

	/*
	 * Since the buffer is no longer shared between
	 * the arc buf and the hdr, count it as overhead.
	 */
	ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr));
	ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
	ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf));
}

/*
 * Remove an arc_buf_t from the hdr's buf list and return the last
 * arc_buf_t on the list. If no buffers remain on the list then return
 * NULL.
 */
static arc_buf_t *
arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf)
{
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

	arc_buf_t **bufp = &hdr->b_l1hdr.b_buf;
	arc_buf_t *lastbuf = NULL;

	/*
	 * Remove the buf from the hdr list and locate the last
	 * remaining buffer on the list.
	 */
	while (*bufp != NULL) {
		if (*bufp == buf)
			*bufp = buf->b_next;

		/*
		 * If we've removed a buffer in the middle of
		 * the list then update the lastbuf and update
		 * bufp appropriately.
		 */
		if (*bufp != NULL) {
			lastbuf = *bufp;
			bufp = &(*bufp)->b_next;
		}
	}
	buf->b_next = NULL;
	ASSERT3P(lastbuf, !=, buf);
	IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL);
	IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL);
	IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf));

	return (lastbuf);
}

/*
 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's
 * list and free it.
 */
static void
arc_buf_destroy_impl(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * Free up the data associated with the buf but only if we're not
	 * sharing this with the hdr. If we are sharing it with the hdr, the
	 * hdr is responsible for doing the free.
	 */
	if (buf->b_data != NULL) {
		/*
		 * We're about to change the hdr's b_flags. We must either
		 * hold the hash_lock or be undiscoverable.
		 */
		ASSERT(MUTEX_HELD(HDR_LOCK(hdr)) || HDR_EMPTY(hdr));

		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		if (arc_buf_is_shared(buf)) {
			arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA);
		} else {
			uint64_t size = arc_buf_size(buf);
			arc_free_data_buf(hdr, buf->b_data, size, buf);
			ARCSTAT_INCR(arcstat_overhead_size, -size);
		}
		buf->b_data = NULL;

		ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
		hdr->b_l1hdr.b_bufcnt -= 1;

		if (ARC_BUF_ENCRYPTED(buf)) {
			hdr->b_crypt_hdr.b_ebufcnt -= 1;

			/*
			 * If we have no more encrypted buffers and we've
			 * already gotten a copy of the decrypted data we can
			 * free b_rabd to save some space.
			 */
			if (hdr->b_crypt_hdr.b_ebufcnt == 0 &&
			    HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL &&
			    !HDR_IO_IN_PROGRESS(hdr)) {
				arc_hdr_free_abd(hdr, B_TRUE);
			}
		}
	}

	arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);

	if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) {
		/*
		 * If the current arc_buf_t is sharing its data buffer with the
		 * hdr, then reassign the hdr's b_pabd to share it with the new
		 * buffer at the end of the list. The shared buffer is always
		 * the last one on the hdr's buffer list.
		 *
		 * There is an equivalent case for compressed bufs, but since
		 * they aren't guaranteed to be the last buf in the list and
		 * that is an exceedingly rare case, we just allow that space
		 * to be wasted temporarily. We must also be careful not to
		 * share encrypted buffers, since they cannot be shared.
		 */
		if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) {
			/* Only one buf can be shared at once */
			VERIFY(!arc_buf_is_shared(lastbuf));
			/* hdr is uncompressed so can't have compressed buf */
			VERIFY(!ARC_BUF_COMPRESSED(lastbuf));

			ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
			arc_hdr_free_abd(hdr, B_FALSE);

			/*
			 * We must setup a new shared block between the
			 * last buffer and the hdr. The data would have
			 * been allocated by the arc buf so we need to transfer
			 * ownership to the hdr since it's now being shared.
			 */
			arc_share_buf(hdr, lastbuf);
		}
	} else if (HDR_SHARED_DATA(hdr)) {
		/*
		 * Uncompressed shared buffers are always at the end
		 * of the list. Compressed buffers don't have the
		 * same requirements. This makes it hard to
		 * simply assert that the lastbuf is shared so
		 * we rely on the hdr's compression flags to determine
		 * if we have a compressed, shared buffer.
		 */
		ASSERT3P(lastbuf, !=, NULL);
		ASSERT(arc_buf_is_shared(lastbuf) ||
		    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
	}

	/*
	 * Free the checksum if we're removing the last uncompressed buf from
	 * this hdr.
	 */
	if (!arc_hdr_has_uncompressed_buf(hdr)) {
		arc_cksum_free(hdr);
	}

	/* clean up the buf */
	buf->b_hdr = NULL;
	kmem_cache_free(buf_cache, buf);
}

static void
arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, boolean_t alloc_rdata)
{
	uint64_t size;

	ASSERT3U(HDR_GET_LSIZE(hdr), >, 0);
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata);
	IMPLY(alloc_rdata, HDR_PROTECTED(hdr));

	if (alloc_rdata) {
		size = HDR_GET_PSIZE(hdr);
		ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL);
		hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr);
		ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL);
		ARCSTAT_INCR(arcstat_raw_size, size);
	} else {
		size = arc_hdr_size(hdr);
		ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr);
		ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
	}

	ARCSTAT_INCR(arcstat_compressed_size, size);
	ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr));
}

static void
arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata)
{
	uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr);

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
	IMPLY(free_rdata, HDR_HAS_RABD(hdr));

	/*
	 * If the hdr is currently being written to the l2arc then
	 * we defer freeing the data by adding it to the l2arc_free_on_write
	 * list. The l2arc will free the data once it's finished
	 * writing it to the l2arc device.
	 */
	if (HDR_L2_WRITING(hdr)) {
		arc_hdr_free_on_write(hdr, free_rdata);
		ARCSTAT_BUMP(arcstat_l2_free_on_write);
	} else if (free_rdata) {
		arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr);
	} else {
		arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr);
	}

	if (free_rdata) {
		hdr->b_crypt_hdr.b_rabd = NULL;
		ARCSTAT_INCR(arcstat_raw_size, -size);
	} else {
		hdr->b_l1hdr.b_pabd = NULL;
	}

	if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr))
		hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;

	ARCSTAT_INCR(arcstat_compressed_size, -size);
	ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr));
}

static arc_buf_hdr_t *
arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize,
    boolean_t protected, enum zio_compress compression_type,
    arc_buf_contents_t type, boolean_t alloc_rdata)
{
	arc_buf_hdr_t *hdr;

	VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA);
	if (protected) {
		hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE);
	} else {
		hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE);
	}

	ASSERT(HDR_EMPTY(hdr));
	ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);
	HDR_SET_PSIZE(hdr, psize);
	HDR_SET_LSIZE(hdr, lsize);
	hdr->b_spa = spa;
	hdr->b_type = type;
	hdr->b_flags = 0;
	arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR);
	arc_hdr_set_compress(hdr, compression_type);
	if (protected)
		arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED);

	hdr->b_l1hdr.b_state = arc_anon;
	hdr->b_l1hdr.b_arc_access = 0;
	hdr->b_l1hdr.b_bufcnt = 0;
	hdr->b_l1hdr.b_buf = NULL;

	/*
	 * Allocate the hdr's buffer. This will contain either
	 * the compressed or uncompressed data depending on the block
	 * it references and compressed arc enablement.
	 */
	arc_hdr_alloc_abd(hdr, alloc_rdata);
	ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));

	return (hdr);
}

/*
 * Transition between the two allocation states for the arc_buf_hdr struct.
 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without
 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller
 * version is used when a cache buffer is only in the L2ARC in order to reduce
 * memory usage.
 */
static arc_buf_hdr_t *
arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new)
{
	ASSERT(HDR_HAS_L2HDR(hdr));

	arc_buf_hdr_t *nhdr;
	l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;

	ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) ||
	    (old == hdr_l2only_cache && new == hdr_full_cache));

	/*
	 * if the caller wanted a new full header and the header is to be
	 * encrypted we will actually allocate the header from the full crypt
	 * cache instead. The same applies to freeing from the old cache.
	 */
	if (HDR_PROTECTED(hdr) && new == hdr_full_cache)
		new = hdr_full_crypt_cache;
	if (HDR_PROTECTED(hdr) && old == hdr_full_cache)
		old = hdr_full_crypt_cache;

	nhdr = kmem_cache_alloc(new, KM_PUSHPAGE);

	ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
	buf_hash_remove(hdr);

	bcopy(hdr, nhdr, HDR_L2ONLY_SIZE);

	if (new == hdr_full_cache || new == hdr_full_crypt_cache) {
		arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR);
		/*
		 * arc_access and arc_change_state need to be aware that a
		 * header has just come out of L2ARC, so we set its state to
		 * l2c_only even though it's about to change.
		 */
		nhdr->b_l1hdr.b_state = arc_l2c_only;

		/* Verify previous threads set to NULL before freeing */
		ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));
	} else {
		ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT0(hdr->b_l1hdr.b_bufcnt);
		ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

		/*
		 * If we've reached here, we must have been called from
		 * arc_evict_hdr(), as such we should have already been
		 * removed from any ghost list we were previously on
		 * (which protects us from racing with arc_evict_state),
		 * thus no locking is needed during this check.
		 */
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));

		/*
		 * A buffer must not be moved into the arc_l2c_only
		 * state if it's not finished being written out to the
		 * l2arc device. Otherwise, the b_l1hdr.b_pabd field
		 * might try to be accessed, even though it was removed.
		 */
		VERIFY(!HDR_L2_WRITING(hdr));
		VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL);
		ASSERT(!HDR_HAS_RABD(hdr));

		arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR);
	}
	/*
	 * The header has been reallocated so we need to re-insert it into any
	 * lists it was on.
	 */
	(void) buf_hash_insert(nhdr, NULL);

	ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node));

	mutex_enter(&dev->l2ad_mtx);

	/*
	 * We must place the realloc'ed header back into the list at
	 * the same spot. Otherwise, if it's placed earlier in the list,
	 * l2arc_write_buffers() could find it during the function's
	 * write phase, and try to write it out to the l2arc.
	 */
	list_insert_after(&dev->l2ad_buflist, hdr, nhdr);
	list_remove(&dev->l2ad_buflist, hdr);

	mutex_exit(&dev->l2ad_mtx);

	/*
	 * Since we're using the pointer address as the tag when
	 * incrementing and decrementing the l2ad_alloc refcount, we
	 * must remove the old pointer (that we're about to destroy) and
	 * add the new pointer to the refcount. Otherwise we'd remove
	 * the wrong pointer address when calling arc_hdr_destroy() later.
	 */
	(void) zfs_refcount_remove_many(&dev->l2ad_alloc,
	    arc_hdr_size(hdr), hdr);
	(void) zfs_refcount_add_many(&dev->l2ad_alloc,
	    arc_hdr_size(nhdr), nhdr);

	buf_discard_identity(hdr);
	kmem_cache_free(old, hdr);

	return (nhdr);
}

/*
 * This function allows an L1 header to be reallocated as a crypt
 * header and vice versa. If we are going to a crypt header, the
 * new fields will be zeroed out.
 */
static arc_buf_hdr_t *
arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt)
{
	arc_buf_hdr_t *nhdr;
	arc_buf_t *buf;
	kmem_cache_t *ncache, *ocache;
	unsigned nsize, osize;

	/*
	 * This function requires that hdr is in the arc_anon state.
	 * Therefore it won't have any L2ARC data for us to worry
	 * about copying.
	 */
	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!HDR_HAS_L2HDR(hdr));
	ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt);
	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
	ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node));
	ASSERT3P(hdr->b_hash_next, ==, NULL);

	if (need_crypt) {
		ncache = hdr_full_crypt_cache;
		nsize = sizeof (hdr->b_crypt_hdr);
		ocache = hdr_full_cache;
		osize = HDR_FULL_SIZE;
	} else {
		ncache = hdr_full_cache;
		nsize = HDR_FULL_SIZE;
		ocache = hdr_full_crypt_cache;
		osize = sizeof (hdr->b_crypt_hdr);
	}

	nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE);

	/*
	 * Copy all members that aren't locks or condvars to the new header.
	 * No lists are pointing to us (as we asserted above), so we don't
	 * need to worry about the list nodes.
	 */
	nhdr->b_dva = hdr->b_dva;
	nhdr->b_birth = hdr->b_birth;
	nhdr->b_type = hdr->b_type;
	nhdr->b_flags = hdr->b_flags;
	nhdr->b_psize = hdr->b_psize;
	nhdr->b_lsize = hdr->b_lsize;
	nhdr->b_spa = hdr->b_spa;
	nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum;
	nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt;
	nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap;
	nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state;
	nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access;
	nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits;
	nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits;
	nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits;
	nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits;
	nhdr->b_l1hdr.b_l2_hits = hdr->b_l1hdr.b_l2_hits;
	nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb;
	nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd;

	/*
	 * This zfs_refcount_add() exists only to ensure that the individual
	 * arc buffers always point to a header that is referenced, avoiding
	 * a small race condition that could trigger ASSERTs.
	 */
	(void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG);
	nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf;
	for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) {
		mutex_enter(&buf->b_evict_lock);
		buf->b_hdr = nhdr;
		mutex_exit(&buf->b_evict_lock);
	}

	zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt);
	(void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG);
	ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt));

	if (need_crypt) {
		arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED);
	} else {
		arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED);
	}

	/* unset all members of the original hdr */
	bzero(&hdr->b_dva, sizeof (dva_t));
	hdr->b_birth = 0;
	hdr->b_type = ARC_BUFC_INVALID;
	hdr->b_flags = 0;
	hdr->b_psize = 0;
	hdr->b_lsize = 0;
	hdr->b_spa = 0;
	hdr->b_l1hdr.b_freeze_cksum = NULL;
	hdr->b_l1hdr.b_buf = NULL;
	hdr->b_l1hdr.b_bufcnt = 0;
	hdr->b_l1hdr.b_byteswap = 0;
	hdr->b_l1hdr.b_state = NULL;
	hdr->b_l1hdr.b_arc_access = 0;
	hdr->b_l1hdr.b_mru_hits = 0;
	hdr->b_l1hdr.b_mru_ghost_hits = 0;
	hdr->b_l1hdr.b_mfu_hits = 0;
	hdr->b_l1hdr.b_mfu_ghost_hits = 0;
	hdr->b_l1hdr.b_l2_hits = 0;
	hdr->b_l1hdr.b_acb = NULL;
	hdr->b_l1hdr.b_pabd = NULL;

	if (ocache == hdr_full_crypt_cache) {
		ASSERT(!HDR_HAS_RABD(hdr));
		hdr->b_crypt_hdr.b_ot = DMU_OT_NONE;
		hdr->b_crypt_hdr.b_ebufcnt = 0;
		hdr->b_crypt_hdr.b_dsobj = 0;
		bzero(hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
		bzero(hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
		bzero(hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
	}

	buf_discard_identity(hdr);
	kmem_cache_free(ocache, hdr);

	return (nhdr);
}

/*
 * This function is used by the send / receive code to convert a newly
 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It
 * is also used to allow the root objset block to be updated without altering
 * its embedded MACs. Both block types will always be uncompressed so we do not
 * have to worry about compression type or psize.
 */
void
arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder,
    dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv,
    const uint8_t *mac)
{
    arc_buf_hdr_t *hdr = buf->b_hdr;

    ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET);
    ASSERT(HDR_HAS_L1HDR(hdr));
    ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);

    buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED);
    if (!HDR_PROTECTED(hdr))
        hdr = arc_hdr_realloc_crypt(hdr, B_TRUE);
    hdr->b_crypt_hdr.b_dsobj = dsobj;
    hdr->b_crypt_hdr.b_ot = ot;
    hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
        DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
    if (!arc_hdr_has_uncompressed_buf(hdr))
        arc_cksum_free(hdr);

    if (salt != NULL)
        bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
    if (iv != NULL)
        bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
    if (mac != NULL)
        bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
}
/*
 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller.
 * The buf is returned thawed since we expect the consumer to modify it.
 */
arc_buf_t *
arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size)
{
    arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size,
        B_FALSE, ZIO_COMPRESS_OFF, type, B_FALSE);
    ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

    arc_buf_t *buf = NULL;
    VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE,
        B_FALSE, B_FALSE, &buf));
    arc_buf_thaw(buf);

    return (buf);
}
/*
 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this
 * for bufs containing metadata.
 */
arc_buf_t *
arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
    ASSERT3U(lsize, >, 0);
    ASSERT3U(lsize, >=, psize);
    ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF);
    ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);

    arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
        B_FALSE, compression_type, ARC_BUFC_DATA, B_FALSE);
    ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

    arc_buf_t *buf = NULL;
    VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE,
        B_TRUE, B_FALSE, B_FALSE, &buf));
    arc_buf_thaw(buf);
    ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

    if (!arc_buf_is_shared(buf)) {
        /*
         * To ensure that the hdr has the correct data in it if we call
         * arc_untransform() on this buf before it's been written to
         * disk, it's easiest if we just set up sharing between the
         * buf and the hdr.
         */
        ASSERT(!abd_is_linear(hdr->b_l1hdr.b_pabd));
        arc_hdr_free_abd(hdr, B_FALSE);
        arc_share_buf(hdr, buf);
    }

    return (buf);
}
arc_buf_t *
arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj, boolean_t byteorder,
    const uint8_t *salt, const uint8_t *iv, const uint8_t *mac,
    dmu_object_type_t ot, uint64_t psize, uint64_t lsize,
    enum zio_compress compression_type)
{
    arc_buf_hdr_t *hdr;
    arc_buf_t *buf;
    arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ?
        ARC_BUFC_METADATA : ARC_BUFC_DATA;

    ASSERT3U(lsize, >, 0);
    ASSERT3U(lsize, >=, psize);
    ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF);
    ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS);

    hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE,
        compression_type, type, B_TRUE);
    ASSERT(!MUTEX_HELD(HDR_LOCK(hdr)));

    hdr->b_crypt_hdr.b_dsobj = dsobj;
    hdr->b_crypt_hdr.b_ot = ot;
    hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ?
        DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot);
    bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
    bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
    bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);

    /*
     * This buffer will be considered encrypted even if the ot is not an
     * encrypted type. It will become authenticated instead in
     * arc_write_ready().
     */
    VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE,
        B_FALSE, B_FALSE, &buf));
    arc_buf_thaw(buf);
    ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL);

    return (buf);
}
static void
arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr)
{
    l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr;
    l2arc_dev_t *dev = l2hdr->b_dev;
    uint64_t psize = arc_hdr_size(hdr);

    ASSERT(MUTEX_HELD(&dev->l2ad_mtx));
    ASSERT(HDR_HAS_L2HDR(hdr));

    list_remove(&dev->l2ad_buflist, hdr);

    ARCSTAT_INCR(arcstat_l2_psize, -psize);
    ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr));

    vdev_space_update(dev->l2ad_vdev, -psize, 0, 0);

    (void) zfs_refcount_remove_many(&dev->l2ad_alloc, psize, hdr);
    arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
}
static void
arc_hdr_destroy(arc_buf_hdr_t *hdr)
{
    if (HDR_HAS_L1HDR(hdr)) {
        ASSERT(hdr->b_l1hdr.b_buf == NULL ||
            hdr->b_l1hdr.b_bufcnt > 0);
        ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
        ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
    }
    ASSERT(!HDR_IO_IN_PROGRESS(hdr));
    ASSERT(!HDR_IN_HASH_TABLE(hdr));

    if (!HDR_EMPTY(hdr))
        buf_discard_identity(hdr);

    if (HDR_HAS_L2HDR(hdr)) {
        l2arc_dev_t *dev = hdr->b_l2hdr.b_dev;
        boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx);

        if (!buflist_held)
            mutex_enter(&dev->l2ad_mtx);

        /*
         * Even though we checked this conditional above, we
         * need to check this again now that we have the
         * l2ad_mtx. This is because we could be racing with
         * another thread calling l2arc_evict() which might have
         * destroyed this header's L2 portion as we were waiting
         * to acquire the l2ad_mtx. If that happens, we don't
         * want to re-destroy the header's L2 portion.
         */
        if (HDR_HAS_L2HDR(hdr))
            arc_hdr_l2hdr_destroy(hdr);

        if (!buflist_held)
            mutex_exit(&dev->l2ad_mtx);
    }

    if (HDR_HAS_L1HDR(hdr)) {
        arc_cksum_free(hdr);

        while (hdr->b_l1hdr.b_buf != NULL)
            arc_buf_destroy_impl(hdr->b_l1hdr.b_buf);

        if (hdr->b_l1hdr.b_pabd != NULL) {
            arc_hdr_free_abd(hdr, B_FALSE);
        }

        if (HDR_HAS_RABD(hdr))
            arc_hdr_free_abd(hdr, B_TRUE);
    }

    ASSERT3P(hdr->b_hash_next, ==, NULL);
    if (HDR_HAS_L1HDR(hdr)) {
        ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
        ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);

        if (!HDR_PROTECTED(hdr)) {
            kmem_cache_free(hdr_full_cache, hdr);
        } else {
            kmem_cache_free(hdr_full_crypt_cache, hdr);
        }
    } else {
        kmem_cache_free(hdr_l2only_cache, hdr);
    }
}
void
arc_buf_destroy(arc_buf_t *buf, void* tag)
{
    arc_buf_hdr_t *hdr = buf->b_hdr;
    kmutex_t *hash_lock = HDR_LOCK(hdr);

    if (hdr->b_l1hdr.b_state == arc_anon) {
        ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
        ASSERT(!HDR_IO_IN_PROGRESS(hdr));
        VERIFY0(remove_reference(hdr, NULL, tag));
        arc_hdr_destroy(hdr);
        return;
    }

    mutex_enter(hash_lock);
    ASSERT3P(hdr, ==, buf->b_hdr);
    ASSERT(hdr->b_l1hdr.b_bufcnt > 0);
    ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
    ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon);
    ASSERT3P(buf->b_data, !=, NULL);

    (void) remove_reference(hdr, hash_lock, tag);
    arc_buf_destroy_impl(buf);
    mutex_exit(hash_lock);
}
/*
 * Evict the arc_buf_hdr that is provided as a parameter. The resultant
 * state of the header is dependent on its state prior to entering this
 * function. The following transitions are possible:
 *
 *    - arc_mru -> arc_mru_ghost
 *    - arc_mfu -> arc_mfu_ghost
 *    - arc_mru_ghost -> arc_l2c_only
 *    - arc_mru_ghost -> deleted
 *    - arc_mfu_ghost -> arc_l2c_only
 *    - arc_mfu_ghost -> deleted
 */
static int64_t
arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
    arc_state_t *evicted_state, *state;
    int64_t bytes_evicted = 0;
    int min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ?
        arc_min_prescient_prefetch_ms : arc_min_prefetch_ms;

    ASSERT(MUTEX_HELD(hash_lock));
    ASSERT(HDR_HAS_L1HDR(hdr));

    state = hdr->b_l1hdr.b_state;
    if (GHOST_STATE(state)) {
        ASSERT(!HDR_IO_IN_PROGRESS(hdr));
        ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL);

        /*
         * l2arc_write_buffers() relies on a header's L1 portion
         * (i.e. its b_pabd field) during its write phase.
         * Thus, we cannot push a header onto the arc_l2c_only
         * state (removing its L1 piece) until the header is
         * done being written to the l2arc.
         */
        if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) {
            ARCSTAT_BUMP(arcstat_evict_l2_skip);
            return (bytes_evicted);
        }

        ARCSTAT_BUMP(arcstat_deleted);
        bytes_evicted += HDR_GET_LSIZE(hdr);

        DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr);

        if (HDR_HAS_L2HDR(hdr)) {
            ASSERT(hdr->b_l1hdr.b_pabd == NULL);
            ASSERT(!HDR_HAS_RABD(hdr));
            /*
             * This buffer is cached on the 2nd Level ARC;
             * don't destroy the header.
             */
            arc_change_state(arc_l2c_only, hdr, hash_lock);
            /*
             * dropping from L1+L2 cached to L2-only,
             * realloc to remove the L1 header.
             */
            hdr = arc_hdr_realloc(hdr, hdr_full_cache,
                hdr_l2only_cache);
        } else {
            arc_change_state(arc_anon, hdr, hash_lock);
            arc_hdr_destroy(hdr);
        }
        return (bytes_evicted);
    }

    ASSERT(state == arc_mru || state == arc_mfu);
    evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost;

    /* prefetch buffers have a minimum lifespan */
    if (HDR_IO_IN_PROGRESS(hdr) ||
        ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) &&
        ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access <
        MSEC_TO_TICK(min_lifetime))) {
        ARCSTAT_BUMP(arcstat_evict_skip);
        return (bytes_evicted);
    }

    ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt));
    while (hdr->b_l1hdr.b_buf) {
        arc_buf_t *buf = hdr->b_l1hdr.b_buf;
        if (!mutex_tryenter(&buf->b_evict_lock)) {
            ARCSTAT_BUMP(arcstat_mutex_miss);
            break;
        }
        if (buf->b_data != NULL)
            bytes_evicted += HDR_GET_LSIZE(hdr);
        mutex_exit(&buf->b_evict_lock);
        arc_buf_destroy_impl(buf);
    }

    if (HDR_HAS_L2HDR(hdr)) {
        ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr));
    } else {
        if (l2arc_write_eligible(hdr->b_spa, hdr)) {
            ARCSTAT_INCR(arcstat_evict_l2_eligible,
                HDR_GET_LSIZE(hdr));
        } else {
            ARCSTAT_INCR(arcstat_evict_l2_ineligible,
                HDR_GET_LSIZE(hdr));
        }
    }

    if (hdr->b_l1hdr.b_bufcnt == 0) {
        arc_cksum_free(hdr);

        bytes_evicted += arc_hdr_size(hdr);

        /*
         * If this hdr is being evicted and has a compressed
         * buffer then we discard it here before we change states.
         * This ensures that the accounting is updated correctly
         * in arc_free_data_impl().
         */
        if (hdr->b_l1hdr.b_pabd != NULL)
            arc_hdr_free_abd(hdr, B_FALSE);

        if (HDR_HAS_RABD(hdr))
            arc_hdr_free_abd(hdr, B_TRUE);

        arc_change_state(evicted_state, hdr, hash_lock);
        ASSERT(HDR_IN_HASH_TABLE(hdr));
        arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);
        DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr);
    }

    return (bytes_evicted);
}
static uint64_t
arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker,
    uint64_t spa, int64_t bytes)
{
    multilist_sublist_t *mls;
    uint64_t bytes_evicted = 0;
    arc_buf_hdr_t *hdr;
    kmutex_t *hash_lock;
    int evict_count = 0;

    ASSERT3P(marker, !=, NULL);
    IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

    mls = multilist_sublist_lock(ml, idx);

    for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL;
        hdr = multilist_sublist_prev(mls, marker)) {
        if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) ||
            (evict_count >= zfs_arc_evict_batch_limit))
            break;

        /*
         * To keep our iteration location, move the marker
         * forward. Since we're not holding hdr's hash lock, we
         * must be very careful and not remove 'hdr' from the
         * sublist. Otherwise, other consumers might mistake the
         * 'hdr' as not being on a sublist when they call the
         * multilist_link_active() function (they all rely on
         * the hash lock protecting concurrent insertions and
         * removals). multilist_sublist_move_forward() was
         * specifically implemented to ensure this is the case
         * (only 'marker' will be removed and re-inserted).
         */
        multilist_sublist_move_forward(mls, marker);

        /*
         * The only case where the b_spa field should ever be
         * zero, is the marker headers inserted by
         * arc_evict_state(). It's possible for multiple threads
         * to be calling arc_evict_state() concurrently (e.g.
         * dsl_pool_close() and zio_inject_fault()), so we must
         * skip any markers we see from these other threads.
         */
        if (hdr->b_spa == 0)
            continue;

        /* we're only interested in evicting buffers of a certain spa */
        if (spa != 0 && hdr->b_spa != spa) {
            ARCSTAT_BUMP(arcstat_evict_skip);
            continue;
        }

        hash_lock = HDR_LOCK(hdr);

        /*
         * We aren't calling this function from any code path
         * that would already be holding a hash lock, so we're
         * asserting on this assumption to be defensive in case
         * this ever changes. Without this check, it would be
         * possible to incorrectly increment arcstat_mutex_miss
         * below (e.g. if the code changed such that we called
         * this function with a hash lock held).
         */
        ASSERT(!MUTEX_HELD(hash_lock));

        if (mutex_tryenter(hash_lock)) {
            uint64_t evicted = arc_evict_hdr(hdr, hash_lock);
            mutex_exit(hash_lock);

            bytes_evicted += evicted;

            /*
             * If evicted is zero, arc_evict_hdr() must have
             * decided to skip this header, don't increment
             * evict_count in this case.
             */
            if (evicted != 0)
                evict_count++;

            /*
             * If arc_size isn't overflowing, signal any
             * threads that might happen to be waiting.
             *
             * For each header evicted, we wake up a single
             * thread. If we used cv_broadcast, we could
             * wake up "too many" threads causing arc_size
             * to significantly overflow arc_c; since
             * arc_get_data_impl() doesn't check for overflow
             * when it's woken up (it doesn't because it's
             * possible for the ARC to be overflowing while
             * full of un-evictable buffers, and the
             * function should proceed in this case).
             *
             * If threads are left sleeping, due to not
             * using cv_broadcast, they will be woken up
             * just before arc_reclaim_thread() sleeps.
             */
            mutex_enter(&arc_reclaim_lock);
            if (!arc_is_overflowing())
                cv_signal(&arc_reclaim_waiters_cv);
            mutex_exit(&arc_reclaim_lock);
        } else {
            ARCSTAT_BUMP(arcstat_mutex_miss);
        }
    }

    multilist_sublist_unlock(mls);

    return (bytes_evicted);
}
/*
 * Evict buffers from the given arc state, until we've removed the
 * specified number of bytes. Move the removed buffers to the
 * appropriate evict state.
 *
 * This function makes a "best effort". It skips over any buffers
 * it can't get a hash_lock on, and so, may not catch all candidates.
 * It may also return without evicting as much space as requested.
 *
 * If bytes is specified using the special value ARC_EVICT_ALL, this
 * will evict all available (i.e. unlocked and evictable) buffers from
 * the given arc state; which is used by arc_flush().
 */
static uint64_t
arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
    uint64_t total_evicted = 0;
    multilist_t *ml = state->arcs_list[type];
    int num_sublists;
    arc_buf_hdr_t **markers;

    IMPLY(bytes < 0, bytes == ARC_EVICT_ALL);

    num_sublists = multilist_get_num_sublists(ml);

    /*
     * If we've tried to evict from each sublist, made some
     * progress, but still have not hit the target number of bytes
     * to evict, we want to keep trying. The markers allow us to
     * pick up where we left off for each individual sublist, rather
     * than starting from the tail each time.
     */
    markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP);
    for (int i = 0; i < num_sublists; i++) {
        multilist_sublist_t *mls;

        markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP);

        /*
         * A b_spa of 0 is used to indicate that this header is
         * a marker. This fact is used in arc_adjust_type() and
         * arc_evict_state_impl().
         */
        markers[i]->b_spa = 0;

        mls = multilist_sublist_lock(ml, i);
        multilist_sublist_insert_tail(mls, markers[i]);
        multilist_sublist_unlock(mls);
    }

    /*
     * While we haven't hit our target number of bytes to evict, or
     * we're evicting all available buffers.
     */
    while (total_evicted < bytes || bytes == ARC_EVICT_ALL) {
        int sublist_idx = multilist_get_random_index(ml);
        uint64_t scan_evicted = 0;

        /*
         * Try to reduce pinned dnodes with a floor of arc_dnode_limit.
         * Request that 10% of the LRUs be scanned by the superblock
         * shrinker.
         */
        if (type == ARC_BUFC_DATA && aggsum_compare(&astat_dnode_size,
            arc_dnode_limit) > 0) {
            arc_prune_async((aggsum_upper_bound(&astat_dnode_size) -
                arc_dnode_limit) / sizeof (dnode_t) /
                zfs_arc_dnode_reduce_percent);
        }

        /*
         * Start eviction using a randomly selected sublist,
         * this is to try and evenly balance eviction across all
         * sublists. Always starting at the same sublist
         * (e.g. index 0) would cause evictions to favor certain
         * sublists over others.
         */
        for (int i = 0; i < num_sublists; i++) {
            uint64_t bytes_remaining;
            uint64_t bytes_evicted;

            if (bytes == ARC_EVICT_ALL)
                bytes_remaining = ARC_EVICT_ALL;
            else if (total_evicted < bytes)
                bytes_remaining = bytes - total_evicted;
            else
                break;

            bytes_evicted = arc_evict_state_impl(ml, sublist_idx,
                markers[sublist_idx], spa, bytes_remaining);

            scan_evicted += bytes_evicted;
            total_evicted += bytes_evicted;

            /* we've reached the end, wrap to the beginning */
            if (++sublist_idx >= num_sublists)
                sublist_idx = 0;
        }

        /*
         * If we didn't evict anything during this scan, we have
         * no reason to believe we'll evict more during another
         * scan, so break the loop.
         */
        if (scan_evicted == 0) {
            /* This isn't possible, let's make that obvious */
            ASSERT3S(bytes, !=, 0);

            /*
             * When bytes is ARC_EVICT_ALL, the only way to
             * break the loop is when scan_evicted is zero.
             * In that case, we actually have evicted enough,
             * so we don't want to increment the kstat.
             */
            if (bytes != ARC_EVICT_ALL) {
                ASSERT3S(total_evicted, <, bytes);
                ARCSTAT_BUMP(arcstat_evict_not_enough);
            }

            break;
        }
    }

    for (int i = 0; i < num_sublists; i++) {
        multilist_sublist_t *mls = multilist_sublist_lock(ml, i);
        multilist_sublist_remove(mls, markers[i]);
        multilist_sublist_unlock(mls);

        kmem_cache_free(hdr_full_cache, markers[i]);
    }
    kmem_free(markers, sizeof (*markers) * num_sublists);

    return (total_evicted);
}
/*
 * Flush all "evictable" data of the given type from the arc state
 * specified. This will not evict any "active" buffers (i.e. referenced).
 *
 * When 'retry' is set to B_FALSE, the function will make a single pass
 * over the state and evict any buffers that it can. Since it doesn't
 * continually retry the eviction, it might end up leaving some buffers
 * in the ARC due to lock misses.
 *
 * When 'retry' is set to B_TRUE, the function will continually retry the
 * eviction until *all* evictable buffers have been removed from the
 * state. As a result, if concurrent insertions into the state are
 * allowed (e.g. if the ARC isn't shutting down), this function might
 * wind up in an infinite loop, continually trying to evict buffers.
 */
static uint64_t
arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type,
    boolean_t retry)
{
    uint64_t evicted = 0;

    while (zfs_refcount_count(&state->arcs_esize[type]) != 0) {
        evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type);

        if (!retry)
            break;
    }

    return (evicted);
}
/*
 * Helper function for arc_prune_async(); it is responsible for safely
 * handling the execution of a registered arc_prune_func_t.
 */
static void
arc_prune_task(void *ptr)
{
    arc_prune_t *ap = (arc_prune_t *)ptr;
    arc_prune_func_t *func = ap->p_pfunc;

    if (func != NULL)
        func(ap->p_adjust, ap->p_private);

    zfs_refcount_remove(&ap->p_refcnt, func);
}
/*
 * Notify registered consumers they must drop holds on a portion of the ARC
 * buffers they reference. This provides a mechanism to ensure the ARC can
 * honor the arc_meta_limit and reclaim otherwise pinned ARC buffers. This
 * is analogous to dnlc_reduce_cache() but more generic.
 *
 * This operation is performed asynchronously so it may be safely called
 * in the context of the arc_reclaim_thread(). A reference is taken here
 * for each registered arc_prune_t and the arc_prune_task() is responsible
 * for releasing it once the registered arc_prune_func_t has completed.
 */
static void
arc_prune_async(int64_t adjust)
{
    arc_prune_t *ap;

    mutex_enter(&arc_prune_mtx);
    for (ap = list_head(&arc_prune_list); ap != NULL;
        ap = list_next(&arc_prune_list, ap)) {

        if (zfs_refcount_count(&ap->p_refcnt) >= 2)
            continue;

        zfs_refcount_add(&ap->p_refcnt, ap->p_pfunc);
        ap->p_adjust = adjust;
        if (taskq_dispatch(arc_prune_taskq, arc_prune_task,
            ap, TQ_SLEEP) == TASKQID_INVALID) {
            zfs_refcount_remove(&ap->p_refcnt, ap->p_pfunc);
            continue;
        }
        ARCSTAT_BUMP(arcstat_prune);
    }
    mutex_exit(&arc_prune_mtx);
}
/*
 * Evict the specified number of bytes from the state specified,
 * restricting eviction to the spa and type given. This function
 * prevents us from trying to evict more from a state's list than
 * is "evictable", and to skip evicting altogether when passed a
 * negative value for "bytes". In contrast, arc_evict_state() will
 * evict everything it can, when passed a negative value for "bytes".
 */
static uint64_t
arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes,
    arc_buf_contents_t type)
{
    int64_t delta;

    if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) {
        delta = MIN(zfs_refcount_count(&state->arcs_esize[type]),
            bytes);
        return (arc_evict_state(state, spa, delta, type));
    }

    return (0);
}
/*
 * The goal of this function is to evict enough meta data buffers from the
 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly
 * more complicated than it appears because it is common for data buffers
 * to have holds on meta data buffers. In addition, dnode meta data buffers
 * will be held by the dnodes in the block preventing them from being freed.
 * This means we can't simply traverse the ARC and expect to always find
 * enough unheld meta data buffer to release.
 *
 * Therefore, this function has been updated to make alternating passes
 * over the ARC releasing data buffers and then newly unheld meta data
 * buffers. This ensures forward progress is maintained and meta_used
 * will decrease. Normally this is sufficient, but if required the ARC
 * will call the registered prune callbacks causing dentry and inodes to
 * be dropped from the VFS cache. This will make dnode meta data buffers
 * available for reclaim.
 */
static uint64_t
arc_adjust_meta_balanced(uint64_t meta_used)
{
    int64_t delta, prune = 0, adjustmnt;
    uint64_t total_evicted = 0;
    arc_buf_contents_t type = ARC_BUFC_DATA;
    int restarts = MAX(zfs_arc_meta_adjust_restarts, 0);

restart:
    /*
     * This slightly differs than the way we evict from the mru in
     * arc_adjust because we don't have a "target" value (i.e. no
     * "meta" arc_p). As a result, I think we can completely
     * cannibalize the metadata in the MRU before we evict the
     * metadata from the MFU. I think we probably need to implement a
     * "metadata arc_p" value to do this properly.
     */
    adjustmnt = meta_used - arc_meta_limit;

    if (adjustmnt > 0 &&
        zfs_refcount_count(&arc_mru->arcs_esize[type]) > 0) {
        delta = MIN(zfs_refcount_count(&arc_mru->arcs_esize[type]),
            adjustmnt);
        total_evicted += arc_adjust_impl(arc_mru, 0, delta, type);
        adjustmnt -= delta;
    }

    /*
     * We can't afford to recalculate adjustmnt here. If we do,
     * new metadata buffers can sneak into the MRU or ANON lists,
     * thus penalize the MFU metadata. Although the fudge factor is
     * small, it has been empirically shown to be significant for
     * certain workloads (e.g. creating many empty directories). As
     * such, we use the original calculation for adjustmnt, and
     * simply decrement the amount of data evicted from the MRU.
     */
    if (adjustmnt > 0 &&
        zfs_refcount_count(&arc_mfu->arcs_esize[type]) > 0) {
        delta = MIN(zfs_refcount_count(&arc_mfu->arcs_esize[type]),
            adjustmnt);
        total_evicted += arc_adjust_impl(arc_mfu, 0, delta, type);
    }

    adjustmnt = meta_used - arc_meta_limit;

    if (adjustmnt > 0 &&
        zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) {
        delta = MIN(adjustmnt,
            zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]));
        total_evicted += arc_adjust_impl(arc_mru_ghost, 0, delta, type);
        adjustmnt -= delta;
    }

    if (adjustmnt > 0 &&
        zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) {
        delta = MIN(adjustmnt,
            zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]));
        total_evicted += arc_adjust_impl(arc_mfu_ghost, 0, delta, type);
    }

    /*
     * If after attempting to make the requested adjustment to the ARC
     * the meta limit is still being exceeded then request that the
     * higher layers drop some cached objects which have holds on ARC
     * meta buffers. Requests to the upper layers will be made with
     * increasingly large scan sizes until the ARC is below the limit.
     */
    if (meta_used > arc_meta_limit) {
        if (type == ARC_BUFC_DATA) {
            type = ARC_BUFC_METADATA;
        } else {
            type = ARC_BUFC_DATA;

            if (zfs_arc_meta_prune) {
                prune += zfs_arc_meta_prune;
                arc_prune_async(prune);
            }
        }

        if (restarts > 0) {
            restarts--;
            goto restart;
        }
    }

    return (total_evicted);
}
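
/*
 * Illustrative note on the pass ordering above (numbers chosen for the
 * example only): with arc_meta_limit = 1 GiB and meta_used = 1.25 GiB,
 * the first pass runs with adjustmnt = 256 MiB and type = ARC_BUFC_DATA,
 * releasing data buffers that may be pinning metadata. If meta_used is
 * still over the limit, type flips to ARC_BUFC_METADATA for the next
 * restart (and eventually the prune callbacks are invoked), so each pass
 * frees a different class of buffers and forward progress is maintained.
 */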
/*
 * Evict metadata buffers from the cache, such that arc_meta_used is
 * capped by the arc_meta_limit tunable.
 */
static uint64_t
arc_adjust_meta_only(uint64_t meta_used)
{
    uint64_t total_evicted = 0;
    int64_t target;

    /*
     * If we're over the meta limit, we want to evict enough
     * metadata to get back under the meta limit. We don't want to
     * evict so much that we drop the MRU below arc_p, though. If
     * we're over the meta limit more than we're over arc_p, we
     * evict some from the MRU here, and some from the MFU below.
     */
    target = MIN((int64_t)(meta_used - arc_meta_limit),
        (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) +
        zfs_refcount_count(&arc_mru->arcs_size) - arc_p));

    total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);

    /*
     * Similar to the above, we want to evict enough bytes to get us
     * below the meta limit, but not so much as to drop us below the
     * space allotted to the MFU (which is defined as arc_c - arc_p).
     */
    target = MIN((int64_t)(meta_used - arc_meta_limit),
        (int64_t)(zfs_refcount_count(&arc_mfu->arcs_size) -
        (arc_c - arc_p)));

    total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);

    return (total_evicted);
}

static uint64_t
arc_adjust_meta(uint64_t meta_used)
{
    if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY)
        return (arc_adjust_meta_only(meta_used));
    else
        return (arc_adjust_meta_balanced(meta_used));
}
/*
 * Return the type of the oldest buffer in the given arc state
 *
 * This function will select a random sublist of type ARC_BUFC_DATA and
 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist
 * is compared, and the type which contains the "older" buffer will be
 * returned.
 */
static arc_buf_contents_t
arc_adjust_type(arc_state_t *state)
{
    multilist_t *data_ml = state->arcs_list[ARC_BUFC_DATA];
    multilist_t *meta_ml = state->arcs_list[ARC_BUFC_METADATA];
    int data_idx = multilist_get_random_index(data_ml);
    int meta_idx = multilist_get_random_index(meta_ml);
    multilist_sublist_t *data_mls;
    multilist_sublist_t *meta_mls;
    arc_buf_contents_t type;
    arc_buf_hdr_t *data_hdr;
    arc_buf_hdr_t *meta_hdr;

    /*
     * We keep the sublist lock until we're finished, to prevent
     * the headers from being destroyed via arc_evict_state().
     */
    data_mls = multilist_sublist_lock(data_ml, data_idx);
    meta_mls = multilist_sublist_lock(meta_ml, meta_idx);

    /*
     * These two loops are to ensure we skip any markers that
     * might be at the tail of the lists due to arc_evict_state().
     */
    for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL;
        data_hdr = multilist_sublist_prev(data_mls, data_hdr)) {
        if (data_hdr->b_spa != 0)
            break;
    }

    for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL;
        meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) {
        if (meta_hdr->b_spa != 0)
            break;
    }

    if (data_hdr == NULL && meta_hdr == NULL) {
        type = ARC_BUFC_DATA;
    } else if (data_hdr == NULL) {
        ASSERT3P(meta_hdr, !=, NULL);
        type = ARC_BUFC_METADATA;
    } else if (meta_hdr == NULL) {
        ASSERT3P(data_hdr, !=, NULL);
        type = ARC_BUFC_DATA;
    } else {
        ASSERT3P(data_hdr, !=, NULL);
        ASSERT3P(meta_hdr, !=, NULL);

        /* The headers can't be on the sublist without an L1 header */
        ASSERT(HDR_HAS_L1HDR(data_hdr));
        ASSERT(HDR_HAS_L1HDR(meta_hdr));

        if (data_hdr->b_l1hdr.b_arc_access <
            meta_hdr->b_l1hdr.b_arc_access) {
            type = ARC_BUFC_DATA;
        } else {
            type = ARC_BUFC_METADATA;
        }
    }

    multilist_sublist_unlock(meta_mls);
    multilist_sublist_unlock(data_mls);

    return (type);
}
/*
 * Evict buffers from the cache, such that arc_size is capped by arc_c.
 */
static uint64_t
arc_adjust(void)
{
    uint64_t total_evicted = 0;
    uint64_t bytes;
    int64_t target;
    uint64_t asize = aggsum_value(&arc_size);
    uint64_t ameta = aggsum_value(&arc_meta_used);

    /*
     * If we're over arc_meta_limit, we want to correct that before
     * potentially evicting data buffers below.
     */
    total_evicted += arc_adjust_meta(ameta);

    /*
     * Adjust MRU size
     *
     * If we're over the target cache size, we want to evict enough
     * from the list to get back to our target size. We don't want
     * to evict too much from the MRU, such that it drops below
     * arc_p. So, if we're over our target cache size more than
     * the MRU is over arc_p, we'll evict enough to get back to
     * arc_p here, and then evict more from the MFU below.
     */
    target = MIN((int64_t)(asize - arc_c),
        (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) +
        zfs_refcount_count(&arc_mru->arcs_size) + ameta - arc_p));

    /*
     * If we're below arc_meta_min, always prefer to evict data.
     * Otherwise, try to satisfy the requested number of bytes to
     * evict from the type which contains older buffers; in an
     * effort to keep newer buffers in the cache regardless of their
     * type. If we cannot satisfy the number of bytes from this
     * type, spill over into the next type.
     */
    if (arc_adjust_type(arc_mru) == ARC_BUFC_METADATA &&
        ameta > arc_meta_min) {
        bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
        total_evicted += bytes;

        /*
         * If we couldn't evict our target number of bytes from
         * metadata, we try to get the rest from data.
         */
        target -= bytes;

        total_evicted +=
            arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
    } else {
        bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA);
        total_evicted += bytes;

        /*
         * If we couldn't evict our target number of bytes from
         * data, we try to get the rest from metadata.
         */
        target -= bytes;

        total_evicted +=
            arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA);
    }

    /*
     * Re-sum ARC stats after the first round of evictions.
     */
    asize = aggsum_value(&arc_size);
    ameta = aggsum_value(&arc_meta_used);

    /*
     * Adjust MFU size
     *
     * Now that we've tried to evict enough from the MRU to get its
     * size back to arc_p, if we're still above the target cache
     * size, we evict the rest from the MFU.
     */
    target = asize - arc_c;

    if (arc_adjust_type(arc_mfu) == ARC_BUFC_METADATA &&
        ameta > arc_meta_min) {
        bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
        total_evicted += bytes;

        /*
         * If we couldn't evict our target number of bytes from
         * metadata, we try to get the rest from data.
         */
        target -= bytes;

        total_evicted +=
            arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
    } else {
        bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA);
        total_evicted += bytes;

        /*
         * If we couldn't evict our target number of bytes from
         * data, we try to get the rest from metadata.
         */
        target -= bytes;

        total_evicted +=
            arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA);
    }

    /*
     * Adjust ghost lists
     *
     * In addition to the above, the ARC also defines target values
     * for the ghost lists. The sum of the mru list and mru ghost
     * list should never exceed the target size of the cache, and
     * the sum of the mru list, mfu list, mru ghost list, and mfu
     * ghost list should never exceed twice the target size of the
     * cache. The following logic enforces these limits on the ghost
     * caches, and evicts from them as needed.
     */
    target = zfs_refcount_count(&arc_mru->arcs_size) +
        zfs_refcount_count(&arc_mru_ghost->arcs_size) - arc_c;

    bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA);
    total_evicted += bytes;

    target -= bytes;

    total_evicted +=
        arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA);

    /*
     * We assume the sum of the mru list and mfu list is less than
     * or equal to arc_c (we enforced this above), which means we
     * can use the simpler of the two equations below:
     *
     *	mru + mfu + mru ghost + mfu ghost <= 2 * arc_c
     *		    mru ghost + mfu ghost <= arc_c
     */
    target = zfs_refcount_count(&arc_mru_ghost->arcs_size) +
        zfs_refcount_count(&arc_mfu_ghost->arcs_size) - arc_c;

    bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA);
    total_evicted += bytes;

    target -= bytes;

    total_evicted +=
        arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA);

    return (total_evicted);
}
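
/*
 * Worked example of the ghost-list targets above (example numbers only):
 * with arc_c = 8 GiB, an MRU of 5 GiB and an MRU ghost of 4 GiB, the
 * first ghost target is 5 + 4 - 8 = 1 GiB to trim from the MRU ghost
 * list; the second target likewise trims the combined ghost lists so
 * that mru_ghost + mfu_ghost stays at or below arc_c.
 */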
void
arc_flush(spa_t *spa, boolean_t retry)
{
    uint64_t guid = 0;

    /*
     * If retry is B_TRUE, a spa must not be specified since we have
     * no good way to determine if all of a spa's buffers have been
     * evicted from an arc state.
     */
    ASSERT(!retry || spa == 0);

    if (spa != NULL)
        guid = spa_load_guid(spa);

    (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry);
    (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry);

    (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry);
    (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry);

    (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry);
    (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry);

    (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry);
    (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry);
}
void
arc_shrink(int64_t to_free)
{
    uint64_t asize = aggsum_value(&arc_size);
    uint64_t c = arc_c;

    if (c > to_free && c - to_free > arc_c_min) {
        arc_c = c - to_free;
        atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift));
        if (asize < arc_c)
            arc_c = MAX(asize, arc_c_min);
        if (arc_p > arc_c)
            arc_p = (arc_c >> 1);
        ASSERT(arc_c >= arc_c_min);
        ASSERT((int64_t)arc_p >= 0);
    } else {
        arc_c = arc_c_min;
    }

    if (aggsum_compare(&arc_size, arc_c) > 0)
        (void) arc_adjust();
}
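
/*
 * For example (illustrative values only), a call with to_free = 512 MiB
 * against arc_c = 4 GiB and arc_c_min = 1 GiB lowers arc_c to 3.5 GiB,
 * reduces arc_p by arc_p >> arc_shrink_shift, and then invokes
 * arc_adjust() if arc_size still exceeds the new target.
 */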
/*
 * Return maximum amount of memory that we could possibly use. Reduced
 * to half of all memory in user space which is primarily used for testing.
 */
uint64_t
arc_all_memory(void)
{
#ifdef _KERNEL
#ifdef CONFIG_HIGHMEM
    return (ptob(totalram_pages - totalhigh_pages));
#else
    return (ptob(totalram_pages));
#endif /* CONFIG_HIGHMEM */
#else
    return (ptob(physmem) / 2);
#endif /* _KERNEL */
}
/*
 * Return the amount of memory that is considered free. In user space
 * which is primarily used for testing we pretend that free memory ranges
 * from 0-20% of all memory.
 */
static uint64_t
arc_free_memory(void)
{
#ifdef _KERNEL
#ifdef CONFIG_HIGHMEM
    struct sysinfo si;
    si_meminfo(&si);
    return (ptob(si.freeram - si.freehigh));
#else
    return (ptob(nr_free_pages() +
        nr_inactive_file_pages() +
        nr_inactive_anon_pages() +
        nr_slab_reclaimable_pages()));
#endif /* CONFIG_HIGHMEM */
#else
    return (spa_get_random(arc_all_memory() * 20 / 100));
#endif /* _KERNEL */
}
typedef enum free_memory_reason_t {
    FMR_UNKNOWN,
    FMR_NEEDFREE,
    FMR_LOTSFREE,
    FMR_SWAPFS_MINFREE,
    FMR_PAGES_PP_MAXIMUM,
    FMR_HEAP_ARENA,
    FMR_ZIO_ARENA,
} free_memory_reason_t;

int64_t last_free_memory;
free_memory_reason_t last_free_reason;

#ifdef _KERNEL
/*
 * Additional reserve of pages for pp_reserve.
 */
int64_t arc_pages_pp_reserve = 64;

/*
 * Additional reserve of pages for swapfs.
 */
int64_t arc_swapfs_reserve = 64;
#endif /* _KERNEL */
/*
 * Return the amount of memory that can be consumed before reclaim will be
 * needed. Positive if there is sufficient free memory, negative indicates
 * the amount of memory that needs to be freed up.
 */
static int64_t
arc_available_memory(void)
{
    int64_t lowest = INT64_MAX;
    free_memory_reason_t r = FMR_UNKNOWN;
#ifdef _KERNEL
    int64_t n;
    pgcnt_t needfree = btop(arc_need_free);
    pgcnt_t lotsfree = btop(arc_sys_free);
    pgcnt_t desfree = 0;
    pgcnt_t freemem = btop(arc_free_memory());

    if (needfree > 0) {
        n = PAGESIZE * (-needfree);
        if (n < lowest) {
            lowest = n;
            r = FMR_NEEDFREE;
        }
    }

    /*
     * check that we're out of range of the pageout scanner. It starts to
     * schedule paging if freemem is less than lotsfree and needfree.
     * lotsfree is the high-water mark for pageout, and needfree is the
     * number of needed free pages. We add extra pages here to make sure
     * the scanner doesn't start up while we're freeing memory.
     */
    n = PAGESIZE * (freemem - lotsfree - needfree - desfree);
    if (n < lowest) {
        lowest = n;
        r = FMR_LOTSFREE;
    }

    /*
     * check to make sure that swapfs has enough space so that anon
     * reservations can still succeed. anon_resvmem() checks that the
     * availrmem is greater than swapfs_minfree, and the number of reserved
     * swap pages. We also add a bit of extra here just to prevent
     * circumstances from getting really dire.
     */
    n = PAGESIZE * (availrmem - swapfs_minfree - swapfs_reserve -
        desfree - arc_swapfs_reserve);
    if (n < lowest) {
        lowest = n;
        r = FMR_SWAPFS_MINFREE;
    }

    /*
     * Check that we have enough availrmem that memory locking (e.g., via
     * mlock(3C) or memcntl(2)) can still succeed. (pages_pp_maximum
     * stores the number of pages that cannot be locked; when availrmem
     * drops below pages_pp_maximum, page locking mechanisms such as
     * page_pp_lock() will fail.)
     */
    n = PAGESIZE * (availrmem - pages_pp_maximum -
        arc_pages_pp_reserve);
    if (n < lowest) {
        lowest = n;
        r = FMR_PAGES_PP_MAXIMUM;
    }

    /*
     * If we're on a 32-bit platform, it's possible that we'll exhaust the
     * kernel heap space before we ever run out of available physical
     * memory. Most checks of the size of the heap_area compare against
     * tune.t_minarmem, which is the minimum available real memory that we
     * can have in the system. However, this is generally fixed at 25 pages
     * which is so low that it's useless. In this comparison, we seek to
     * calculate the total heap-size, and reclaim if more than 3/4ths of the
     * heap is allocated. (Or, in the calculation, if less than 1/4th is
     * free)
     */
    n = vmem_size(heap_arena, VMEM_FREE) -
        (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2);
    if (n < lowest) {
        lowest = n;
        r = FMR_HEAP_ARENA;
    }

    /*
     * If zio data pages are being allocated out of a separate heap segment,
     * then enforce that the size of available vmem for this arena remains
     * above about 1/4th (1/(2^arc_zio_arena_free_shift)) free.
     *
     * Note that reducing the arc_zio_arena_free_shift keeps more virtual
     * memory (in the zio_arena) free, which can avoid memory
     * fragmentation issues.
     */
    if (zio_arena != NULL) {
        n = (int64_t)vmem_size(zio_arena, VMEM_FREE) -
            (vmem_size(zio_arena, VMEM_ALLOC) >>
            arc_zio_arena_free_shift);
        if (n < lowest) {
            lowest = n;
            r = FMR_ZIO_ARENA;
        }
    }
#else /* _KERNEL */
    /* Every 100 calls, free a small amount */
    if (spa_get_random(100) == 0)
        lowest = -1024;
#endif /* _KERNEL */

    last_free_memory = lowest;
    last_free_reason = r;

    return (lowest);
}
/*
 * Determine if the system is under memory pressure and is asking
 * to reclaim memory. A return value of B_TRUE indicates that the system
 * is under memory pressure and that the arc should adjust accordingly.
 */
static boolean_t
arc_reclaim_needed(void)
{
    return (arc_available_memory() < 0);
}
static void
arc_kmem_reap_now(void)
{
    size_t i;
    kmem_cache_t *prev_cache = NULL;
    kmem_cache_t *prev_data_cache = NULL;
    extern kmem_cache_t *zio_buf_cache[];
    extern kmem_cache_t *zio_data_buf_cache[];
    extern kmem_cache_t *range_seg_cache;

    if ((aggsum_compare(&arc_meta_used, arc_meta_limit) >= 0) &&
        zfs_arc_meta_prune) {
        /*
         * We are exceeding our meta-data cache limit.
         * Prune some entries to release holds on meta-data.
         */
        arc_prune_async(zfs_arc_meta_prune);
    }

    /*
     * Reclaim unused memory from all kmem caches.
     */
    for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) {
        /* reach upper limit of cache size on 32-bit */
        if (zio_buf_cache[i] == NULL)
            break;
        if (zio_buf_cache[i] != prev_cache) {
            prev_cache = zio_buf_cache[i];
            kmem_cache_reap_now(zio_buf_cache[i]);
        }
        if (zio_data_buf_cache[i] != prev_data_cache) {
            prev_data_cache = zio_data_buf_cache[i];
            kmem_cache_reap_now(zio_data_buf_cache[i]);
        }
    }
    kmem_cache_reap_now(buf_cache);
    kmem_cache_reap_now(hdr_full_cache);
    kmem_cache_reap_now(hdr_l2only_cache);
    kmem_cache_reap_now(range_seg_cache);

    if (zio_arena != NULL) {
        /*
         * Ask the vmem arena to reclaim unused memory from its
         * quantum caches.
         */
        vmem_qcache_reap(zio_arena);
    }
}
/*
 * Threads can block in arc_get_data_impl() waiting for this thread to evict
 * enough data and signal them to proceed. When this happens, the threads in
 * arc_get_data_impl() are sleeping while holding the hash lock for their
 * particular arc header. Thus, we must be careful to never sleep on a
 * hash lock in this thread. This is to prevent the following deadlock:
 *
 *  - Thread A sleeps on CV in arc_get_data_impl() holding hash lock "L",
 *    waiting for the reclaim thread to signal it.
 *
 *  - arc_reclaim_thread() tries to acquire hash lock "L" using mutex_enter,
 *    fails, and goes to sleep forever.
 *
 * This possible deadlock is avoided by always acquiring a hash lock
 * using mutex_tryenter() from arc_reclaim_thread().
 */
static void
arc_reclaim_thread(void *unused)
{
    fstrans_cookie_t cookie = spl_fstrans_mark();
    hrtime_t growtime = 0;
    callb_cpr_t cpr;

    CALLB_CPR_INIT(&cpr, &arc_reclaim_lock, callb_generic_cpr, FTAG);

    mutex_enter(&arc_reclaim_lock);
    while (!arc_reclaim_thread_exit) {
        uint64_t evicted = 0;
        uint64_t need_free = arc_need_free;
        arc_tuning_update();

        /*
         * This is necessary in order for the mdb ::arc dcmd to
         * show up to date information. Since the ::arc command
         * does not call the kstat's update function, without
         * this call, the command may show stale stats for the
         * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even
         * with this change, the data might be up to 1 second
         * out of date; but that should suffice. The arc_state_t
         * structures can be queried directly if more accurate
         * information is needed.
         */
        if (arc_ksp != NULL)
            arc_ksp->ks_update(arc_ksp, KSTAT_READ);

        mutex_exit(&arc_reclaim_lock);

        /*
         * We call arc_adjust() before (possibly) calling
         * arc_kmem_reap_now(), so that we can wake up
         * arc_get_data_buf() sooner.
         */
        evicted = arc_adjust();

        int64_t free_memory = arc_available_memory();
        if (free_memory < 0) {

            arc_no_grow = B_TRUE;

            /*
             * Wait at least zfs_grow_retry (default 5) seconds
             * before considering growing.
             */
            growtime = gethrtime() + SEC2NSEC(arc_grow_retry);

            arc_kmem_reap_now();

            /*
             * If we are still low on memory, shrink the ARC
             * so that we have arc_shrink_min free space.
             */
            free_memory = arc_available_memory();

            int64_t to_free =
                (arc_c >> arc_shrink_shift) - free_memory;
            if (to_free > 0) {
                to_free = MAX(to_free, need_free);
                arc_shrink(to_free);
            }
        } else if (free_memory < arc_c >> arc_no_grow_shift) {
            arc_no_grow = B_TRUE;
        } else if (gethrtime() >= growtime) {
            arc_no_grow = B_FALSE;
        }

        mutex_enter(&arc_reclaim_lock);

        /*
         * If evicted is zero, we couldn't evict anything via
         * arc_adjust(). This could be due to hash lock
         * collisions, but more likely due to the majority of
         * arc buffers being unevictable. Therefore, even if
         * arc_size is above arc_c, another pass is unlikely to
         * be helpful and could potentially cause us to enter an
         * infinite loop.
         */
        if (aggsum_compare(&arc_size, arc_c) <= 0 || evicted == 0) {
            /*
             * We're either no longer overflowing, or we
             * can't evict anything more, so we should wake
             * up any threads before we go to sleep and remove
             * the bytes we were working on from arc_need_free
             * since nothing more will be done here.
             */
            cv_broadcast(&arc_reclaim_waiters_cv);
            ARCSTAT_INCR(arcstat_need_free, -need_free);

            /*
             * Block until signaled, or after one second (we
             * might need to perform arc_kmem_reap_now()
             * even if we aren't being signalled)
             */
            CALLB_CPR_SAFE_BEGIN(&cpr);
            (void) cv_timedwait_sig_hires(&arc_reclaim_thread_cv,
                &arc_reclaim_lock, SEC2NSEC(1), MSEC2NSEC(1), 0);
            CALLB_CPR_SAFE_END(&cpr, &arc_reclaim_lock);
        }
    }

    arc_reclaim_thread_exit = B_FALSE;
    cv_broadcast(&arc_reclaim_thread_cv);
    CALLB_CPR_EXIT(&cpr);	/* drops arc_reclaim_lock */
    spl_fstrans_unmark(cookie);
    thread_exit();
}
/*
 * Determine the amount of memory eligible for eviction contained in the
 * ARC. All clean data reported by the ghost lists can always be safely
 * evicted. Due to arc_c_min, the same does not hold for all clean data
 * contained by the regular mru and mfu lists.
 *
 * In the case of the regular mru and mfu lists, we need to report as
 * much clean data as possible, such that evicting that same reported
 * data will not bring arc_size below arc_c_min. Thus, in certain
 * circumstances, the total amount of clean data in the mru and mfu
 * lists might not actually be evictable.
 *
 * The following two distinct cases are accounted for:
 *
 * 1. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is greater than or equal to arc_c_min.
 *    (i.e. amount of dirty data >= arc_c_min)
 *
 *    This is the easy case; all clean data contained by the mru and mfu
 *    lists is evictable. Evicting all clean data can only drop arc_size
 *    to the amount of dirty data, which is greater than arc_c_min.
 *
 * 2. The sum of the amount of dirty data contained by both the mru and
 *    mfu lists, plus the ARC's other accounting (e.g. the anon list),
 *    is less than arc_c_min.
 *    (i.e. arc_c_min > amount of dirty data)
 *
 * 2.1. arc_size is greater than or equal arc_c_min.
 *      (i.e. arc_size >= arc_c_min > amount of dirty data)
 *
 *      In this case, not all clean data from the regular mru and mfu
 *      lists is actually evictable; we must leave enough clean data
 *      to keep arc_size above arc_c_min. Thus, the maximum amount of
 *      evictable data from the two lists combined, is exactly the
 *      difference between arc_size and arc_c_min.
 *
 * 2.2. arc_size is less than arc_c_min
 *      (i.e. arc_c_min > arc_size > amount of dirty data)
 *
 *      In this case, none of the data contained in the mru and mfu
 *      lists is evictable, even if it's clean. Since arc_size is
 *      already below arc_c_min, evicting any more would only
 *      increase this negative difference.
 */
static uint64_t
arc_evictable_memory(void)
{
    int64_t asize = aggsum_value(&arc_size);
    uint64_t arc_clean =
        zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_DATA]) +
        zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) +
        zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_DATA]) +
        zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]);
    uint64_t arc_dirty = MAX((int64_t)asize - (int64_t)arc_clean, 0);

    /*
     * Scale reported evictable memory in proportion to page cache, cap
     * at specified min/max.
     */
    uint64_t min = (ptob(nr_file_pages()) / 100) * zfs_arc_pc_percent;
    min = MAX(arc_c_min, MIN(arc_c_max, min));

    if (arc_dirty >= min)
        return (arc_clean);

    return (MAX((int64_t)asize - (int64_t)min, 0));
}
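
/*
 * Worked example for case 2.1 above (example values; assumes the
 * page-cache-scaled floor resolves to arc_c_min): with arc_size = 3 GiB,
 * 2.5 GiB of clean data and arc_c_min = 1 GiB, the dirty amount is
 * 0.5 GiB, below the floor, so only arc_size - min = 2 GiB is reported
 * as evictable rather than the full 2.5 GiB of clean data.
 */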
/*
 * If sc->nr_to_scan is zero, the caller is requesting a query of the
 * number of objects which can potentially be freed. If it is nonzero,
 * the request is to free that many objects.
 *
 * Linux kernels >= 3.12 have the count_objects and scan_objects callbacks
 * in struct shrinker and also require the shrinker to return the number
 * of objects freed.
 *
 * Older kernels require the shrinker to return the number of freeable
 * objects following the freeing of nr_to_free.
 */
static spl_shrinker_t
__arc_shrinker_func(struct shrinker *shrink, struct shrink_control *sc)
{
    int64_t pages;

    /* The arc is considered warm once reclaim has occurred */
    if (unlikely(arc_warm == B_FALSE))
        arc_warm = B_TRUE;

    /* Return the potential number of reclaimable pages */
    pages = btop((int64_t)arc_evictable_memory());
    if (sc->nr_to_scan == 0)
        return (pages);

    /* Not allowed to perform filesystem reclaim */
    if (!(sc->gfp_mask & __GFP_FS))
        return (SHRINK_STOP);

    /* Reclaim in progress */
    if (mutex_tryenter(&arc_reclaim_lock) == 0) {
        ARCSTAT_INCR(arcstat_need_free, ptob(sc->nr_to_scan));
        return (0);
    }

    mutex_exit(&arc_reclaim_lock);

    /*
     * Evict the requested number of pages by shrinking arc_c the
     * requested amount.
     */
    if (pages > 0) {
        arc_shrink(ptob(sc->nr_to_scan));
        if (current_is_kswapd())
            arc_kmem_reap_now();
#ifdef HAVE_SPLIT_SHRINKER_CALLBACK
        pages = MAX((int64_t)pages -
            (int64_t)btop(arc_evictable_memory()), 0);
#else
        pages = btop(arc_evictable_memory());
#endif
        /*
         * We've shrunk what we can, wake up threads.
         */
        cv_broadcast(&arc_reclaim_waiters_cv);
    } else
        pages = SHRINK_STOP;

    /*
     * When direct reclaim is observed it usually indicates a rapid
     * increase in memory pressure. This occurs because the kswapd
     * threads were unable to asynchronously keep enough free memory
     * available. In this case set arc_no_grow to briefly pause arc
     * growth to avoid compounding the memory pressure.
     */
    if (current_is_kswapd()) {
        ARCSTAT_BUMP(arcstat_memory_indirect_count);
    } else {
        arc_no_grow = B_TRUE;
        arc_kmem_reap_now();
        ARCSTAT_BUMP(arcstat_memory_direct_count);
    }

    return (pages);
}

SPL_SHRINKER_CALLBACK_WRAPPER(arc_shrinker_func);

SPL_SHRINKER_DECLARE(arc_shrinker, arc_shrinker_func, DEFAULT_SEEKS);
#endif /* _KERNEL */
/*
 * Adapt arc info given the number of bytes we are trying to add and
 * the state that we are coming from. This function is only called
 * when we are adding new content to the cache.
 */
static void
arc_adapt(int bytes, arc_state_t *state)
{
    int mult;
    uint64_t arc_p_min = (arc_c >> arc_p_min_shift);
    int64_t mrug_size = zfs_refcount_count(&arc_mru_ghost->arcs_size);
    int64_t mfug_size = zfs_refcount_count(&arc_mfu_ghost->arcs_size);

    if (state == arc_l2c_only)
        return;

    /*
     * Adapt the target size of the MRU list:
     *	- if we just hit in the MRU ghost list, then increase
     *	  the target size of the MRU list.
     *	- if we just hit in the MFU ghost list, then increase
     *	  the target size of the MFU list by decreasing the
     *	  target size of the MRU list.
     */
    if (state == arc_mru_ghost) {
        mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size);
        if (!zfs_arc_p_dampener_disable)
            mult = MIN(mult, 10); /* avoid wild arc_p adjustment */

        arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult);
    } else if (state == arc_mfu_ghost) {
        uint64_t delta;

        mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size);
        if (!zfs_arc_p_dampener_disable)
            mult = MIN(mult, 10);

        delta = MIN(bytes * mult, arc_p);
        arc_p = MAX(arc_p_min, arc_p - delta);
    }
    ASSERT((int64_t)arc_p >= 0);

    if (arc_reclaim_needed()) {
        cv_signal(&arc_reclaim_thread_cv);
        return;
    }

    if (arc_c >= arc_c_max)
        return;

    /*
     * If we're within (2 * maxblocksize) bytes of the target
     * cache size, increment the target cache size
     */
    ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT);
    if (aggsum_compare(&arc_size, arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) >=
        0) {
        atomic_add_64(&arc_c, (int64_t)bytes);
        if (arc_c > arc_c_max)
            arc_c = arc_c_max;
        else if (state == arc_anon)
            atomic_add_64(&arc_p, (int64_t)bytes);
    }
    ASSERT((int64_t)arc_p >= 0);
}
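
/*
 * For example (illustrative sizes only), a hit in the MRU ghost list with
 * mrug_size = 1 GiB and mfug_size = 4 GiB yields mult = 4 (capped at 10
 * unless the dampener is disabled), so arc_p grows by four times the
 * block size, shifting the target split toward recently-used data.
 */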
/*
 * Check if arc_size has grown past our upper threshold, determined by
 * zfs_arc_overflow_shift.
 */
static boolean_t
arc_is_overflowing(void)
{
    /* Always allow at least one block of overflow */
    uint64_t overflow = MAX(SPA_MAXBLOCKSIZE,
        arc_c >> zfs_arc_overflow_shift);

    /*
     * We just compare the lower bound here for performance reasons. Our
     * primary goals are to make sure that the arc never grows without
     * bound, and that it can reach its maximum size. This check
     * accomplishes both goals. The maximum amount we could run over by is
     * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block
     * in the ARC. In practice, that's in the tens of MB, which is low
     * enough to be safe.
     */
    return (aggsum_lower_bound(&arc_size) >= arc_c + overflow);
}
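
/*
 * For example, with arc_c = 4 GiB and zfs_arc_overflow_shift = 8, the
 * allowed overflow is MAX(SPA_MAXBLOCKSIZE, 4 GiB >> 8) = 16 MiB, so the
 * ARC is considered overflowing once its lower-bound size reaches
 * arc_c + 16 MiB.
 */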
static abd_t *
arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
    arc_buf_contents_t type = arc_buf_type(hdr);

    arc_get_data_impl(hdr, size, tag);
    if (type == ARC_BUFC_METADATA) {
        return (abd_alloc(size, B_TRUE));
    } else {
        ASSERT(type == ARC_BUFC_DATA);
        return (abd_alloc(size, B_FALSE));
    }
}

static void *
arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
    arc_buf_contents_t type = arc_buf_type(hdr);

    arc_get_data_impl(hdr, size, tag);
    if (type == ARC_BUFC_METADATA) {
        return (zio_buf_alloc(size));
    } else {
        ASSERT(type == ARC_BUFC_DATA);
        return (zio_data_buf_alloc(size));
    }
}
/*
 * Allocate a block and return it to the caller. If we are hitting the
 * hard limit for the cache size, we must sleep, waiting for the eviction
 * thread to catch up. If we're past the target size but below the hard
 * limit, we'll only signal the reclaim thread and continue on.
 */
static void
arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_adapt(size, state);

	/*
	 * If arc_size is currently overflowing, and has grown past our
	 * upper limit, we must be adding data faster than the evict
	 * thread can evict. Thus, to ensure we don't compound the
	 * problem by adding more data and forcing arc_size to grow even
	 * further past its target size, we halt and wait for the
	 * eviction thread to catch up.
	 *
	 * It's also possible that the reclaim thread is unable to evict
	 * enough buffers to get arc_size below the overflow limit (e.g.
	 * due to buffers being un-evictable, or hash lock collisions).
	 * In this case, we want to proceed regardless if we're
	 * overflowing; thus we don't use a while loop here.
	 */
	if (arc_is_overflowing()) {
		mutex_enter(&arc_reclaim_lock);

		/*
		 * Now that we've acquired the lock, we may no longer be
		 * over the overflow limit, let's check.
		 *
		 * We're ignoring the case of spurious wake ups. If that
		 * were to happen, it'd let this thread consume an ARC
		 * buffer before it should have (i.e. before we're under
		 * the overflow limit and were signalled by the reclaim
		 * thread). As long as that is a rare occurrence, it
		 * shouldn't cause any harm.
		 */
		if (arc_is_overflowing()) {
			cv_signal(&arc_reclaim_thread_cv);
			cv_wait(&arc_reclaim_waiters_cv, &arc_reclaim_lock);
		}

		mutex_exit(&arc_reclaim_lock);
	}

	VERIFY3U(hdr->b_type, ==, type);
	if (type == ARC_BUFC_METADATA) {
		arc_space_consume(size, ARC_SPACE_META);
	} else {
		arc_space_consume(size, ARC_SPACE_DATA);
	}

	/*
	 * Update the state size. Note that ghost states have a
	 * "ghost size" and so don't need to be updated.
	 */
	if (!GHOST_STATE(state)) {

		(void) zfs_refcount_add_many(&state->arcs_size, size, tag);

		/*
		 * If this is reached via arc_read, the link is
		 * protected by the hash lock. If reached via
		 * arc_buf_alloc, the header should not be accessed by
		 * any other thread. And, if reached via arc_read_done,
		 * the hash lock will protect it if it's found in the
		 * hash table; otherwise no other thread should be
		 * trying to [add|remove]_reference it.
		 */
		if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
			ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
			(void) zfs_refcount_add_many(&state->arcs_esize[type],
			    size, tag);
		}

		/*
		 * If we are growing the cache, and we are adding anonymous
		 * data, and we have outgrown arc_p, update arc_p
		 */
		if (aggsum_compare(&arc_size, arc_c) < 0 &&
		    hdr->b_l1hdr.b_state == arc_anon &&
		    (zfs_refcount_count(&arc_anon->arcs_size) +
		    zfs_refcount_count(&arc_mru->arcs_size) > arc_p))
			arc_p = MIN(arc_c, arc_p + size);
	}
}
static void
arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag)
{
	arc_free_data_impl(hdr, size, tag);
	abd_free(abd);
}
static void
arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag)
{
	arc_buf_contents_t type = arc_buf_type(hdr);

	arc_free_data_impl(hdr, size, tag);
	if (type == ARC_BUFC_METADATA) {
		zio_buf_free(buf, size);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		zio_data_buf_free(buf, size);
	}
}
/*
 * Free the arc data buffer.
 */
static void
arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag)
{
	arc_state_t *state = hdr->b_l1hdr.b_state;
	arc_buf_contents_t type = arc_buf_type(hdr);

	/* protected by hash lock, if in the hash table */
	if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) {
		ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
		ASSERT(state != arc_anon && state != arc_l2c_only);

		(void) zfs_refcount_remove_many(&state->arcs_esize[type],
		    size, tag);
	}
	(void) zfs_refcount_remove_many(&state->arcs_size, size, tag);

	VERIFY3U(hdr->b_type, ==, type);
	if (type == ARC_BUFC_METADATA) {
		arc_space_return(size, ARC_SPACE_META);
	} else {
		ASSERT(type == ARC_BUFC_DATA);
		arc_space_return(size, ARC_SPACE_DATA);
	}
}
/*
 * This routine is called whenever a buffer is accessed.
 * NOTE: the hash lock is dropped in this function.
 */
static void
arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock)
{
	clock_t now;

	ASSERT(MUTEX_HELD(hash_lock));
	ASSERT(HDR_HAS_L1HDR(hdr));

	if (hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * This buffer is not in the cache, and does not
		 * appear in our "ghost" list. Add the new buffer
		 * to the MRU state.
		 */

		ASSERT0(hdr->b_l1hdr.b_arc_access);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mru, hdr, hash_lock);

	} else if (hdr->b_l1hdr.b_state == arc_mru) {
		now = ddi_get_lbolt();

		/*
		 * If this buffer is here because of a prefetch, then either:
		 * - clear the flag if this is a "referencing" read
		 *   (any subsequent access will bump this into the MFU state).
		 * or
		 * - move the buffer to the head of the list if this is
		 *   another prefetch (to make it less likely to be evicted).
		 */
		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) {
				/* link protected by hash lock */
				ASSERT(multilist_link_active(
				    &hdr->b_l1hdr.b_arc_node));
			} else {
				arc_hdr_clear_flags(hdr,
				    ARC_FLAG_PREFETCH |
				    ARC_FLAG_PRESCIENT_PREFETCH);
				atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
				ARCSTAT_BUMP(arcstat_mru_hits);
			}
			hdr->b_l1hdr.b_arc_access = now;
			return;
		}

		/*
		 * This buffer has been "accessed" only once so far,
		 * but it is still in the cache. Move it to the MFU
		 * state.
		 */
		if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access +
		    ARC_MINTIME)) {
			/*
			 * More than 125ms have passed since we
			 * instantiated this buffer. Move it to the
			 * most frequently used state.
			 */
			hdr->b_l1hdr.b_arc_access = now;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
			arc_change_state(arc_mfu, hdr, hash_lock);
		}
		atomic_inc_32(&hdr->b_l1hdr.b_mru_hits);
		ARCSTAT_BUMP(arcstat_mru_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mru_ghost) {
		arc_state_t *new_state;
		/*
		 * This buffer has been "accessed" recently, but
		 * was evicted from the cache. Move it to the
		 * MFU state.
		 */

		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			new_state = arc_mru;
			if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) > 0) {
				arc_hdr_clear_flags(hdr,
				    ARC_FLAG_PREFETCH |
				    ARC_FLAG_PRESCIENT_PREFETCH);
			}
			DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr);
		} else {
			new_state = arc_mfu;
			DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits);
		ARCSTAT_BUMP(arcstat_mru_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_mfu) {
		/*
		 * This buffer has been accessed more than once and is
		 * still in the cache. Keep it in the MFU state.
		 *
		 * NOTE: an add_reference() that occurred when we did
		 * the arc_read() will have kicked this off the list.
		 * If it was a prefetch, we will explicitly move it to
		 * the head of the list now.
		 */

		atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits);
		ARCSTAT_BUMP(arcstat_mfu_hits);
		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
	} else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) {
		arc_state_t *new_state = arc_mfu;
		/*
		 * This buffer has been accessed more than once but has
		 * been evicted from the cache. Move it back to the
		 * MFU state.
		 */

		if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) {
			/*
			 * This is a prefetch access...
			 * move this block back to the MRU state.
			 */
			new_state = arc_mru;
		}

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(new_state, hdr, hash_lock);

		atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits);
		ARCSTAT_BUMP(arcstat_mfu_ghost_hits);
	} else if (hdr->b_l1hdr.b_state == arc_l2c_only) {
		/*
		 * This buffer is on the 2nd Level ARC.
		 */

		hdr->b_l1hdr.b_arc_access = ddi_get_lbolt();
		DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr);
		arc_change_state(arc_mfu, hdr, hash_lock);
	} else {
		cmn_err(CE_PANIC, "invalid arc state 0x%p",
		    hdr->b_l1hdr.b_state);
	}
}
/*
 * This routine is called by dbuf_hold() to update the arc_access() state
 * which otherwise would be skipped for entries in the dbuf cache.
 */
void
arc_buf_access(arc_buf_t *buf)
{
	mutex_enter(&buf->b_evict_lock);
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * Avoid taking the hash_lock when possible as an optimization.
	 * The header must be checked again under the hash_lock in order
	 * to handle the case where it is concurrently being released.
	 */
	if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
		mutex_exit(&buf->b_evict_lock);
		return;
	}

	kmutex_t *hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);

	if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) {
		mutex_exit(hash_lock);
		mutex_exit(&buf->b_evict_lock);
		ARCSTAT_BUMP(arcstat_access_skip);
		return;
	}

	mutex_exit(&buf->b_evict_lock);

	ASSERT(hdr->b_l1hdr.b_state == arc_mru ||
	    hdr->b_l1hdr.b_state == arc_mfu);

	DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr);
	arc_access(hdr, hash_lock);
	mutex_exit(hash_lock);

	ARCSTAT_BUMP(arcstat_hits);
	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr) && !HDR_PRESCIENT_PREFETCH(hdr),
	    demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
}
/* a generic arc_read_done_func_t which you can use */
void
arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
    arc_buf_t *buf, void *arg)
{
	if (buf == NULL)
		return;

	bcopy(buf->b_data, arg, arc_buf_size(buf));
	arc_buf_destroy(buf, arg);
}
/* a generic arc_read_done_func_t */
void
arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp,
    arc_buf_t *buf, void *arg)
{
	arc_buf_t **bufp = arg;

	if (buf == NULL) {
		ASSERT(zio == NULL || zio->io_error != 0);
		*bufp = NULL;
	} else {
		ASSERT(zio == NULL || zio->io_error == 0);
		*bufp = buf;
		ASSERT(buf->b_data != NULL);
	}
}
static void
arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp)
{
	if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
		ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0);
		ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF);
	} else {
		if (HDR_COMPRESSION_ENABLED(hdr)) {
			ASSERT3U(arc_hdr_get_compress(hdr), ==,
			    BP_GET_COMPRESS(bp));
		}
		ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
		ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp));
		ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp));
	}
}
static void
arc_read_done(zio_t *zio)
{
	blkptr_t *bp = zio->io_bp;
	arc_buf_hdr_t *hdr = zio->io_private;
	kmutex_t *hash_lock = NULL;
	arc_callback_t *callback_list;
	arc_callback_t *acb;
	boolean_t freeable = B_FALSE;

	/*
	 * The hdr was inserted into hash-table and removed from lists
	 * prior to starting I/O. We should find this header, since
	 * it's in the hash table, and it should be legit since it's
	 * not possible to evict it during the I/O. The only possible
	 * reason for it not to be found is if we were freed during the
	 * read.
	 */
	if (HDR_IN_HASH_TABLE(hdr)) {
		arc_buf_hdr_t *found;

		ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp));
		ASSERT3U(hdr->b_dva.dva_word[0], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[0]);
		ASSERT3U(hdr->b_dva.dva_word[1], ==,
		    BP_IDENTITY(zio->io_bp)->dva_word[1]);

		found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock);

		ASSERT((found == hdr &&
		    DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) ||
		    (found == hdr && HDR_L2_READING(hdr)));
		ASSERT3P(hash_lock, !=, NULL);
	}

	if (BP_IS_PROTECTED(bp)) {
		hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
		hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
		zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
		    hdr->b_crypt_hdr.b_iv);

		if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) {
			void *tmpbuf;

			tmpbuf = abd_borrow_buf_copy(zio->io_abd,
			    sizeof (zil_chain_t));
			zio_crypt_decode_mac_zil(tmpbuf,
			    hdr->b_crypt_hdr.b_mac);
			abd_return_buf(zio->io_abd, tmpbuf,
			    sizeof (zil_chain_t));
		} else {
			zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac);
		}
	}

	if (zio->io_error == 0) {
		/* byteswap if necessary */
		if (BP_SHOULD_BYTESWAP(zio->io_bp)) {
			if (BP_GET_LEVEL(zio->io_bp) > 0) {
				hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
			} else {
				hdr->b_l1hdr.b_byteswap =
				    DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp));
			}
		} else {
			hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
		}
	}

	arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED);
	if (l2arc_noprefetch && HDR_PREFETCH(hdr))
		arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE);

	callback_list = hdr->b_l1hdr.b_acb;
	ASSERT3P(callback_list, !=, NULL);

	if (hash_lock && zio->io_error == 0 &&
	    hdr->b_l1hdr.b_state == arc_anon) {
		/*
		 * Only call arc_access on anonymous buffers. This is because
		 * if we've issued an I/O for an evicted buffer, we've already
		 * called arc_access (to prevent any simultaneous readers from
		 * getting confused).
		 */
		arc_access(hdr, hash_lock);
	}

	/*
	 * If a read request has a callback (i.e. acb_done is not NULL), then we
	 * make a buf containing the data according to the parameters which were
	 * passed in. The implementation of arc_buf_alloc_impl() ensures that we
	 * aren't needlessly decompressing the data multiple times.
	 */
	int callback_cnt = 0;
	for (acb = callback_list; acb != NULL; acb = acb->acb_next) {
		if (!acb->acb_done)
			continue;

		callback_cnt++;

		if (zio->io_error != 0)
			continue;

		int error = arc_buf_alloc_impl(hdr, zio->io_spa,
		    &acb->acb_zb, acb->acb_private, acb->acb_encrypted,
		    acb->acb_compressed, acb->acb_noauth, B_TRUE,
		    &acb->acb_buf);

		/*
		 * Assert non-speculative zios didn't fail because an
		 * encryption key wasn't loaded
		 */
		ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) ||
		    error != EACCES);

		/*
		 * If we failed to decrypt, report an error now (as the zio
		 * layer would have done if it had done the transforms).
		 */
		if (error == ECKSUM) {
			ASSERT(BP_IS_PROTECTED(bp));
			error = SET_ERROR(EIO);
			if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) {
				spa_log_error(zio->io_spa, &acb->acb_zb);
				zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION,
				    zio->io_spa, NULL, &acb->acb_zb, zio, 0, 0);
			}
		}

		if (error != 0) {
			/*
			 * Decompression or decryption failed. Set
			 * io_error so that when we call acb_done
			 * (below), we will indicate that the read
			 * failed. Note that in the unusual case
			 * where one callback is compressed and another
			 * uncompressed, we will mark all of them
			 * as failed, even though the uncompressed
			 * one can't actually fail. In this case,
			 * the hdr will not be anonymous, because
			 * if there are multiple callbacks, it's
			 * because multiple threads found the same
			 * arc buf in the hash table.
			 */
			zio->io_error = error;
		}
	}

	/*
	 * If there are multiple callbacks, we must have the hash lock,
	 * because the only way for multiple threads to find this hdr is
	 * in the hash table. This ensures that if there are multiple
	 * callbacks, the hdr is not anonymous. If it were anonymous,
	 * we couldn't use arc_buf_destroy() in the error case below.
	 */
	ASSERT(callback_cnt < 2 || hash_lock != NULL);

	hdr->b_l1hdr.b_acb = NULL;
	arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
	if (callback_cnt == 0)
		ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));

	ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt) ||
	    callback_list != NULL);

	if (zio->io_error == 0) {
		arc_hdr_verify(hdr, zio->io_bp);
	} else {
		arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
		if (hdr->b_l1hdr.b_state != arc_anon)
			arc_change_state(arc_anon, hdr, hash_lock);
		if (HDR_IN_HASH_TABLE(hdr))
			buf_hash_remove(hdr);
		freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/*
	 * Broadcast before we drop the hash_lock to avoid the possibility
	 * that the hdr (and hence the cv) might be freed before we get to
	 * the cv_broadcast().
	 */
	cv_broadcast(&hdr->b_l1hdr.b_cv);

	if (hash_lock != NULL) {
		mutex_exit(hash_lock);
	} else {
		/*
		 * This block was freed while we waited for the read to
		 * complete. It has been removed from the hash table and
		 * moved to the anonymous state (so that it won't show up
		 * in the cache).
		 */
		ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
		freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt);
	}

	/* execute each callback and free its structure */
	while ((acb = callback_list) != NULL) {
		if (acb->acb_done != NULL) {
			if (zio->io_error != 0 && acb->acb_buf != NULL) {
				/*
				 * If arc_buf_alloc_impl() fails during
				 * decompression, the buf will still be
				 * allocated, and needs to be freed here.
				 */
				arc_buf_destroy(acb->acb_buf,
				    acb->acb_private);
				acb->acb_buf = NULL;
			}
			acb->acb_done(zio, &zio->io_bookmark, zio->io_bp,
			    acb->acb_buf, acb->acb_private);
		}

		if (acb->acb_zio_dummy != NULL) {
			acb->acb_zio_dummy->io_error = zio->io_error;
			zio_nowait(acb->acb_zio_dummy);
		}

		callback_list = acb->acb_next;
		kmem_free(acb, sizeof (arc_callback_t));
	}

	if (freeable)
		arc_hdr_destroy(hdr);
}
6057 * "Read" the block at the specified DVA (in bp) via the
6058 * cache. If the block is found in the cache, invoke the provided
6059 * callback immediately and return. Note that the `zio' parameter
6060 * in the callback will be NULL in this case, since no IO was
6061 * required. If the block is not in the cache pass the read request
6062 * on to the spa with a substitute callback function, so that the
6063 * requested block will be added to the cache.
6065 * If a read request arrives for a block that has a read in-progress,
6066 * either wait for the in-progress read to complete (and return the
6067 * results); or, if this is a read with a "done" func, add a record
6068 * to the read to invoke the "done" func when the read completes,
6069 * and return; or just return.
6071 * arc_read_done() will invoke all the requested "done" functions
6072 * for readers of this block.
6075 arc_read(zio_t
*pio
, spa_t
*spa
, const blkptr_t
*bp
,
6076 arc_read_done_func_t
*done
, void *private, zio_priority_t priority
,
6077 int zio_flags
, arc_flags_t
*arc_flags
, const zbookmark_phys_t
*zb
)
6079 arc_buf_hdr_t
*hdr
= NULL
;
6080 kmutex_t
*hash_lock
= NULL
;
6082 uint64_t guid
= spa_load_guid(spa
);
6083 boolean_t compressed_read
= (zio_flags
& ZIO_FLAG_RAW_COMPRESS
) != 0;
6084 boolean_t encrypted_read
= BP_IS_ENCRYPTED(bp
) &&
6085 (zio_flags
& ZIO_FLAG_RAW_ENCRYPT
) != 0;
6086 boolean_t noauth_read
= BP_IS_AUTHENTICATED(bp
) &&
6087 (zio_flags
& ZIO_FLAG_RAW_ENCRYPT
) != 0;
6090 ASSERT(!BP_IS_EMBEDDED(bp
) ||
6091 BPE_GET_ETYPE(bp
) == BP_EMBEDDED_TYPE_DATA
);
6094 if (!BP_IS_EMBEDDED(bp
)) {
6096 * Embedded BP's have no DVA and require no I/O to "read".
6097 * Create an anonymous arc buf to back it.
6099 hdr
= buf_hash_find(guid
, bp
, &hash_lock
);
6103 * Determine if we have an L1 cache hit or a cache miss. For simplicity
6104 * we maintain encrypted data seperately from compressed / uncompressed
6105 * data. If the user is requesting raw encrypted data and we don't have
6106 * that in the header we will read from disk to guarantee that we can
6107 * get it even if the encryption keys aren't loaded.
6109 if (hdr
!= NULL
&& HDR_HAS_L1HDR(hdr
) && (HDR_HAS_RABD(hdr
) ||
6110 (hdr
->b_l1hdr
.b_pabd
!= NULL
&& !encrypted_read
))) {
6111 arc_buf_t
*buf
= NULL
;
6112 *arc_flags
|= ARC_FLAG_CACHED
;
6114 if (HDR_IO_IN_PROGRESS(hdr
)) {
6115 zio_t
*head_zio
= hdr
->b_l1hdr
.b_acb
->acb_zio_head
;
6117 ASSERT3P(head_zio
, !=, NULL
);
6118 if ((hdr
->b_flags
& ARC_FLAG_PRIO_ASYNC_READ
) &&
6119 priority
== ZIO_PRIORITY_SYNC_READ
) {
6121 * This is a sync read that needs to wait for
6122 * an in-flight async read. Request that the
6123 * zio have its priority upgraded.
6125 zio_change_priority(head_zio
, priority
);
6126 DTRACE_PROBE1(arc__async__upgrade__sync
,
6127 arc_buf_hdr_t
*, hdr
);
6128 ARCSTAT_BUMP(arcstat_async_upgrade_sync
);
6130 if (hdr
->b_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
) {
6131 arc_hdr_clear_flags(hdr
,
6132 ARC_FLAG_PREDICTIVE_PREFETCH
);
6135 if (*arc_flags
& ARC_FLAG_WAIT
) {
6136 cv_wait(&hdr
->b_l1hdr
.b_cv
, hash_lock
);
6137 mutex_exit(hash_lock
);
6140 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
6143 arc_callback_t
*acb
= NULL
;
6145 acb
= kmem_zalloc(sizeof (arc_callback_t
),
6147 acb
->acb_done
= done
;
6148 acb
->acb_private
= private;
6149 acb
->acb_compressed
= compressed_read
;
6150 acb
->acb_encrypted
= encrypted_read
;
6151 acb
->acb_noauth
= noauth_read
;
6154 acb
->acb_zio_dummy
= zio_null(pio
,
6155 spa
, NULL
, NULL
, NULL
, zio_flags
);
6157 ASSERT3P(acb
->acb_done
, !=, NULL
);
6158 acb
->acb_zio_head
= head_zio
;
6159 acb
->acb_next
= hdr
->b_l1hdr
.b_acb
;
6160 hdr
->b_l1hdr
.b_acb
= acb
;
6161 mutex_exit(hash_lock
);
6164 mutex_exit(hash_lock
);
6168 ASSERT(hdr
->b_l1hdr
.b_state
== arc_mru
||
6169 hdr
->b_l1hdr
.b_state
== arc_mfu
);
6172 if (hdr
->b_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
) {
6174 * This is a demand read which does not have to
6175 * wait for i/o because we did a predictive
6176 * prefetch i/o for it, which has completed.
6179 arc__demand__hit__predictive__prefetch
,
6180 arc_buf_hdr_t
*, hdr
);
6182 arcstat_demand_hit_predictive_prefetch
);
6183 arc_hdr_clear_flags(hdr
,
6184 ARC_FLAG_PREDICTIVE_PREFETCH
);
6187 if (hdr
->b_flags
& ARC_FLAG_PRESCIENT_PREFETCH
) {
6189 arcstat_demand_hit_prescient_prefetch
);
6190 arc_hdr_clear_flags(hdr
,
6191 ARC_FLAG_PRESCIENT_PREFETCH
);
6194 ASSERT(!BP_IS_EMBEDDED(bp
) || !BP_IS_HOLE(bp
));
6196 /* Get a buf with the desired data in it. */
6197 rc
= arc_buf_alloc_impl(hdr
, spa
, zb
, private,
6198 encrypted_read
, compressed_read
, noauth_read
,
6202 * Convert authentication and decryption errors
6203 * to EIO (and generate an ereport if needed)
6204 * before leaving the ARC.
6206 rc
= SET_ERROR(EIO
);
6207 if ((zio_flags
& ZIO_FLAG_SPECULATIVE
) == 0) {
6208 spa_log_error(spa
, zb
);
6210 FM_EREPORT_ZFS_AUTHENTICATION
,
6211 spa
, NULL
, zb
, NULL
, 0, 0);
6215 (void) remove_reference(hdr
, hash_lock
,
6217 arc_buf_destroy_impl(buf
);
6221 /* assert any errors weren't due to unloaded keys */
6222 ASSERT((zio_flags
& ZIO_FLAG_SPECULATIVE
) ||
6224 } else if (*arc_flags
& ARC_FLAG_PREFETCH
&&
6225 zfs_refcount_count(&hdr
->b_l1hdr
.b_refcnt
) == 0) {
6226 arc_hdr_set_flags(hdr
, ARC_FLAG_PREFETCH
);
6228 DTRACE_PROBE1(arc__hit
, arc_buf_hdr_t
*, hdr
);
6229 arc_access(hdr
, hash_lock
);
6230 if (*arc_flags
& ARC_FLAG_PRESCIENT_PREFETCH
)
6231 arc_hdr_set_flags(hdr
, ARC_FLAG_PRESCIENT_PREFETCH
);
6232 if (*arc_flags
& ARC_FLAG_L2CACHE
)
6233 arc_hdr_set_flags(hdr
, ARC_FLAG_L2CACHE
);
6234 mutex_exit(hash_lock
);
6235 ARCSTAT_BUMP(arcstat_hits
);
6236 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
6237 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
6238 data
, metadata
, hits
);
6241 done(NULL
, zb
, bp
, buf
, private);
6243 uint64_t lsize
= BP_GET_LSIZE(bp
);
6244 uint64_t psize
= BP_GET_PSIZE(bp
);
6245 arc_callback_t
*acb
;
6248 boolean_t devw
= B_FALSE
;
6253 * Gracefully handle a damaged logical block size as a
6256 if (lsize
> spa_maxblocksize(spa
)) {
6257 rc
= SET_ERROR(ECKSUM
);
6262 /* this block is not in the cache */
6263 arc_buf_hdr_t
*exists
= NULL
;
6264 arc_buf_contents_t type
= BP_GET_BUFC_TYPE(bp
);
6265 hdr
= arc_hdr_alloc(spa_load_guid(spa
), psize
, lsize
,
6266 BP_IS_PROTECTED(bp
), BP_GET_COMPRESS(bp
), type
,
6269 if (!BP_IS_EMBEDDED(bp
)) {
6270 hdr
->b_dva
= *BP_IDENTITY(bp
);
6271 hdr
->b_birth
= BP_PHYSICAL_BIRTH(bp
);
6272 exists
= buf_hash_insert(hdr
, &hash_lock
);
6274 if (exists
!= NULL
) {
6275 /* somebody beat us to the hash insert */
6276 mutex_exit(hash_lock
);
6277 buf_discard_identity(hdr
);
6278 arc_hdr_destroy(hdr
);
6279 goto top
; /* restart the IO request */
6283 * This block is in the ghost cache or encrypted data
6284 * was requested and we didn't have it. If it was
6285 * L2-only (and thus didn't have an L1 hdr),
6286 * we realloc the header to add an L1 hdr.
6288 if (!HDR_HAS_L1HDR(hdr
)) {
6289 hdr
= arc_hdr_realloc(hdr
, hdr_l2only_cache
,
6293 if (GHOST_STATE(hdr
->b_l1hdr
.b_state
)) {
6294 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, ==, NULL
);
6295 ASSERT(!HDR_HAS_RABD(hdr
));
6296 ASSERT(!HDR_IO_IN_PROGRESS(hdr
));
6297 ASSERT0(zfs_refcount_count(
6298 &hdr
->b_l1hdr
.b_refcnt
));
6299 ASSERT3P(hdr
->b_l1hdr
.b_buf
, ==, NULL
);
6300 ASSERT3P(hdr
->b_l1hdr
.b_freeze_cksum
, ==, NULL
);
6301 } else if (HDR_IO_IN_PROGRESS(hdr
)) {
6303 * If this header already had an IO in progress
6304 * and we are performing another IO to fetch
6305 * encrypted data we must wait until the first
6306 * IO completes so as not to confuse
6307 * arc_read_done(). This should be very rare
6308 * and so the performance impact shouldn't
6311 cv_wait(&hdr
->b_l1hdr
.b_cv
, hash_lock
);
6312 mutex_exit(hash_lock
);
6317 * This is a delicate dance that we play here.
6318 * This hdr might be in the ghost list so we access
6319 * it to move it out of the ghost list before we
6320 * initiate the read. If it's a prefetch then
6321 * it won't have a callback so we'll remove the
6322 * reference that arc_buf_alloc_impl() created. We
6323 * do this after we've called arc_access() to
6324 * avoid hitting an assert in remove_reference().
6326 arc_access(hdr
, hash_lock
);
6327 arc_hdr_alloc_abd(hdr
, encrypted_read
);
6330 if (encrypted_read
) {
6331 ASSERT(HDR_HAS_RABD(hdr
));
6332 size
= HDR_GET_PSIZE(hdr
);
6333 hdr_abd
= hdr
->b_crypt_hdr
.b_rabd
;
6334 zio_flags
|= ZIO_FLAG_RAW
;
6336 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
6337 size
= arc_hdr_size(hdr
);
6338 hdr_abd
= hdr
->b_l1hdr
.b_pabd
;
6340 if (arc_hdr_get_compress(hdr
) != ZIO_COMPRESS_OFF
) {
6341 zio_flags
|= ZIO_FLAG_RAW_COMPRESS
;
6345 * For authenticated bp's, we do not ask the ZIO layer
6346 * to authenticate them since this will cause the entire
6347 * IO to fail if the key isn't loaded. Instead, we
6348 * defer authentication until arc_buf_fill(), which will
6349 * verify the data when the key is available.
6351 if (BP_IS_AUTHENTICATED(bp
))
6352 zio_flags
|= ZIO_FLAG_RAW_ENCRYPT
;
6355 if (*arc_flags
& ARC_FLAG_PREFETCH
&&
6356 zfs_refcount_is_zero(&hdr
->b_l1hdr
.b_refcnt
))
6357 arc_hdr_set_flags(hdr
, ARC_FLAG_PREFETCH
);
6358 if (*arc_flags
& ARC_FLAG_PRESCIENT_PREFETCH
)
6359 arc_hdr_set_flags(hdr
, ARC_FLAG_PRESCIENT_PREFETCH
);
6360 if (*arc_flags
& ARC_FLAG_L2CACHE
)
6361 arc_hdr_set_flags(hdr
, ARC_FLAG_L2CACHE
);
6362 if (BP_IS_AUTHENTICATED(bp
))
6363 arc_hdr_set_flags(hdr
, ARC_FLAG_NOAUTH
);
6364 if (BP_GET_LEVEL(bp
) > 0)
6365 arc_hdr_set_flags(hdr
, ARC_FLAG_INDIRECT
);
6366 if (*arc_flags
& ARC_FLAG_PREDICTIVE_PREFETCH
)
6367 arc_hdr_set_flags(hdr
, ARC_FLAG_PREDICTIVE_PREFETCH
);
6368 ASSERT(!GHOST_STATE(hdr
->b_l1hdr
.b_state
));
6370 acb
= kmem_zalloc(sizeof (arc_callback_t
), KM_SLEEP
);
6371 acb
->acb_done
= done
;
6372 acb
->acb_private
= private;
6373 acb
->acb_compressed
= compressed_read
;
6374 acb
->acb_encrypted
= encrypted_read
;
6375 acb
->acb_noauth
= noauth_read
;
6378 ASSERT3P(hdr
->b_l1hdr
.b_acb
, ==, NULL
);
6379 hdr
->b_l1hdr
.b_acb
= acb
;
6380 arc_hdr_set_flags(hdr
, ARC_FLAG_IO_IN_PROGRESS
);
6382 if (HDR_HAS_L2HDR(hdr
) &&
6383 (vd
= hdr
->b_l2hdr
.b_dev
->l2ad_vdev
) != NULL
) {
6384 devw
= hdr
->b_l2hdr
.b_dev
->l2ad_writing
;
6385 addr
= hdr
->b_l2hdr
.b_daddr
;
6387 * Lock out L2ARC device removal.
6389 if (vdev_is_dead(vd
) ||
6390 !spa_config_tryenter(spa
, SCL_L2ARC
, vd
, RW_READER
))
6395 * We count both async reads and scrub IOs as asynchronous so
6396 * that both can be upgraded in the event of a cache hit while
6397 * the read IO is still in-flight.
6399 if (priority
== ZIO_PRIORITY_ASYNC_READ
||
6400 priority
== ZIO_PRIORITY_SCRUB
)
6401 arc_hdr_set_flags(hdr
, ARC_FLAG_PRIO_ASYNC_READ
);
6403 arc_hdr_clear_flags(hdr
, ARC_FLAG_PRIO_ASYNC_READ
);
6406 * At this point, we have a level 1 cache miss. Try again in
6407 * L2ARC if possible.
6409 ASSERT3U(HDR_GET_LSIZE(hdr
), ==, lsize
);
6411 DTRACE_PROBE4(arc__miss
, arc_buf_hdr_t
*, hdr
, blkptr_t
*, bp
,
6412 uint64_t, lsize
, zbookmark_phys_t
*, zb
);
6413 ARCSTAT_BUMP(arcstat_misses
);
6414 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr
),
6415 demand
, prefetch
, !HDR_ISTYPE_METADATA(hdr
),
6416 data
, metadata
, misses
);
6418 if (vd
!= NULL
&& l2arc_ndev
!= 0 && !(l2arc_norw
&& devw
)) {
6420 * Read from the L2ARC if the following are true:
6421 * 1. The L2ARC vdev was previously cached.
6422 * 2. This buffer still has L2ARC metadata.
6423 * 3. This buffer isn't currently writing to the L2ARC.
6424 * 4. The L2ARC entry wasn't evicted, which may
6425 * also have invalidated the vdev.
6426 * 5. This isn't prefetch and l2arc_noprefetch is set.
6428 if (HDR_HAS_L2HDR(hdr
) &&
6429 !HDR_L2_WRITING(hdr
) && !HDR_L2_EVICTED(hdr
) &&
6430 !(l2arc_noprefetch
&& HDR_PREFETCH(hdr
))) {
6431 l2arc_read_callback_t
*cb
;
6435 DTRACE_PROBE1(l2arc__hit
, arc_buf_hdr_t
*, hdr
);
6436 ARCSTAT_BUMP(arcstat_l2_hits
);
6437 atomic_inc_32(&hdr
->b_l2hdr
.b_hits
);
6439 cb
= kmem_zalloc(sizeof (l2arc_read_callback_t
),
6441 cb
->l2rcb_hdr
= hdr
;
6444 cb
->l2rcb_flags
= zio_flags
;
6446 asize
= vdev_psize_to_asize(vd
, size
);
6447 if (asize
!= size
) {
6448 abd
= abd_alloc_for_io(asize
,
6449 HDR_ISTYPE_METADATA(hdr
));
6450 cb
->l2rcb_abd
= abd
;
6455 ASSERT(addr
>= VDEV_LABEL_START_SIZE
&&
6456 addr
+ asize
<= vd
->vdev_psize
-
6457 VDEV_LABEL_END_SIZE
);
6460 * l2arc read. The SCL_L2ARC lock will be
6461 * released by l2arc_read_done().
6462 * Issue a null zio if the underlying buffer
6463 * was squashed to zero size by compression.
6465 ASSERT3U(arc_hdr_get_compress(hdr
), !=,
6466 ZIO_COMPRESS_EMPTY
);
6467 rzio
= zio_read_phys(pio
, vd
, addr
,
6470 l2arc_read_done
, cb
, priority
,
6471 zio_flags
| ZIO_FLAG_DONT_CACHE
|
6473 ZIO_FLAG_DONT_PROPAGATE
|
6474 ZIO_FLAG_DONT_RETRY
, B_FALSE
);
6475 acb
->acb_zio_head
= rzio
;
6477 if (hash_lock
!= NULL
)
6478 mutex_exit(hash_lock
);
6480 DTRACE_PROBE2(l2arc__read
, vdev_t
*, vd
,
6482 ARCSTAT_INCR(arcstat_l2_read_bytes
,
6483 HDR_GET_PSIZE(hdr
));
6485 if (*arc_flags
& ARC_FLAG_NOWAIT
) {
6490 ASSERT(*arc_flags
& ARC_FLAG_WAIT
);
6491 if (zio_wait(rzio
) == 0)
6494 /* l2arc read error; goto zio_read() */
6495 if (hash_lock
!= NULL
)
6496 mutex_enter(hash_lock
);
6498 DTRACE_PROBE1(l2arc__miss
,
6499 arc_buf_hdr_t
*, hdr
);
6500 ARCSTAT_BUMP(arcstat_l2_misses
);
6501 if (HDR_L2_WRITING(hdr
))
6502 ARCSTAT_BUMP(arcstat_l2_rw_clash
);
6503 spa_config_exit(spa
, SCL_L2ARC
, vd
);
6507 spa_config_exit(spa
, SCL_L2ARC
, vd
);
6508 if (l2arc_ndev
!= 0) {
6509 DTRACE_PROBE1(l2arc__miss
,
6510 arc_buf_hdr_t
*, hdr
);
6511 ARCSTAT_BUMP(arcstat_l2_misses
);
6515 rzio
= zio_read(pio
, spa
, bp
, hdr_abd
, size
,
6516 arc_read_done
, hdr
, priority
, zio_flags
, zb
);
6517 acb
->acb_zio_head
= rzio
;
6519 if (hash_lock
!= NULL
)
6520 mutex_exit(hash_lock
);
6522 if (*arc_flags
& ARC_FLAG_WAIT
) {
6523 rc
= zio_wait(rzio
);
6527 ASSERT(*arc_flags
& ARC_FLAG_NOWAIT
);
6532 /* embedded bps don't actually go to disk */
6533 if (!BP_IS_EMBEDDED(bp
))
6534 spa_read_history_add(spa
, zb
, *arc_flags
);
arc_prune_t *
arc_add_prune_callback(arc_prune_func_t *func, void *private)
{
	arc_prune_t *p;

	p = kmem_alloc(sizeof (*p), KM_SLEEP);
	p->p_pfunc = func;
	p->p_private = private;
	list_link_init(&p->p_node);
	zfs_refcount_create(&p->p_refcnt);

	mutex_enter(&arc_prune_mtx);
	zfs_refcount_add(&p->p_refcnt, &arc_prune_list);
	list_insert_head(&arc_prune_list, p);
	mutex_exit(&arc_prune_mtx);

	return (p);
}
void
arc_remove_prune_callback(arc_prune_t *p)
{
	boolean_t wait = B_FALSE;
	mutex_enter(&arc_prune_mtx);
	list_remove(&arc_prune_list, p);
	if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0)
		wait = B_TRUE;
	mutex_exit(&arc_prune_mtx);

	/* wait for arc_prune_task to finish */
	if (wait)
		taskq_wait_outstanding(arc_prune_taskq, 0);
	ASSERT0(zfs_refcount_count(&p->p_refcnt));
	zfs_refcount_destroy(&p->p_refcnt);
	kmem_free(p, sizeof (*p));
}
/*
 * Notify the arc that a block was freed, and thus will never be used again.
 */
void
arc_freed(spa_t *spa, const blkptr_t *bp)
{
	arc_buf_hdr_t *hdr;
	kmutex_t *hash_lock;
	uint64_t guid = spa_load_guid(spa);

	ASSERT(!BP_IS_EMBEDDED(bp));

	hdr = buf_hash_find(guid, bp, &hash_lock);
	if (hdr == NULL)
		return;

	/*
	 * We might be trying to free a block that is still doing I/O
	 * (i.e. prefetch) or has a reference (i.e. a dedup-ed,
	 * dmu_sync-ed block). If this block is being prefetched, then it
	 * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr
	 * until the I/O completes. A block may also have a reference if it is
	 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would
	 * have written the new block to its final resting place on disk but
	 * without the dedup flag set. This would have left the hdr in the MRU
	 * state and discoverable. When the txg finally syncs it detects that
	 * the block was overridden in open context and issues an override I/O.
	 * Since this is a dedup block, the override I/O will determine if the
	 * block is already in the DDT. If so, then it will replace the io_bp
	 * with the bp from the DDT and allow the I/O to finish. When the I/O
	 * reaches the done callback, dbuf_write_override_done, it will
	 * check to see if the io_bp and io_bp_override are identical.
	 * If they are not, then it indicates that the bp was replaced with
	 * the bp in the DDT and the override bp is freed. This allows
	 * us to arrive here with a reference on a block that is being
	 * freed. So if we have an I/O in progress, or a reference to
	 * this hdr, then we don't destroy the hdr.
	 */
	if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) &&
	    zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) {
		arc_change_state(arc_anon, hdr, hash_lock);
		arc_hdr_destroy(hdr);
		mutex_exit(hash_lock);
	} else {
		mutex_exit(hash_lock);
	}
}
/*
 * Release this buffer from the cache, making it an anonymous buffer. This
 * must be done after a read and prior to modifying the buffer contents.
 * If the buffer has more than one reference, we must make
 * a new hdr for the buffer.
 */
void
arc_release(arc_buf_t *buf, void *tag)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	/*
	 * It would be nice to assert that if it's DMU metadata (level >
	 * 0 || it's the dnode file), then it must be syncing context.
	 * But we don't know that information at this level.
	 */

	mutex_enter(&buf->b_evict_lock);

	ASSERT(HDR_HAS_L1HDR(hdr));

	/*
	 * We don't grab the hash lock prior to this check, because if
	 * the buffer's header is in the arc_anon state, it won't be
	 * linked into the hash table.
	 */
	if (hdr->b_l1hdr.b_state == arc_anon) {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		ASSERT(!HDR_IN_HASH_TABLE(hdr));
		ASSERT(!HDR_HAS_L2HDR(hdr));
		ASSERT(HDR_EMPTY(hdr));

		ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);
		ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1);
		ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node));

		hdr->b_l1hdr.b_arc_access = 0;

		/*
		 * If the buf is being overridden then it may already
		 * have a hdr that is not empty.
		 */
		buf_discard_identity(hdr);
		arc_buf_thaw(buf);

		return;
	}

	kmutex_t *hash_lock = HDR_LOCK(hdr);
	mutex_enter(hash_lock);

	/*
	 * This assignment is only valid as long as the hash_lock is
	 * held, we must be careful not to reference state or the
	 * b_state field after dropping the lock.
	 */
	arc_state_t *state = hdr->b_l1hdr.b_state;
	ASSERT3P(hash_lock, ==, HDR_LOCK(hdr));
	ASSERT3P(state, !=, arc_anon);

	/* this buffer is not on any list */
	ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0);

	if (HDR_HAS_L2HDR(hdr)) {
		mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx);

		/*
		 * We have to recheck this conditional again now that
		 * we're holding the l2ad_mtx to prevent a race with
		 * another thread which might be concurrently calling
		 * l2arc_evict(). In that case, l2arc_evict() might have
		 * destroyed the header's L2 portion as we were waiting
		 * to acquire the l2ad_mtx.
		 */
		if (HDR_HAS_L2HDR(hdr))
			arc_hdr_l2hdr_destroy(hdr);

		mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx);
	}

	/*
	 * Do we have more than one buf?
	 */
	if (hdr->b_l1hdr.b_bufcnt > 1) {
		arc_buf_hdr_t *nhdr;
		uint64_t spa = hdr->b_spa;
		uint64_t psize = HDR_GET_PSIZE(hdr);
		uint64_t lsize = HDR_GET_LSIZE(hdr);
		boolean_t protected = HDR_PROTECTED(hdr);
		enum zio_compress compress = arc_hdr_get_compress(hdr);
		arc_buf_contents_t type = arc_buf_type(hdr);
		VERIFY3U(hdr->b_type, ==, type);

		ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL);
		(void) remove_reference(hdr, hash_lock, tag);

		if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) {
			ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
			ASSERT(ARC_BUF_LAST(buf));
		}

		/*
		 * Pull the data off of this hdr and attach it to
		 * a new anonymous hdr. Also find the last buffer
		 * in the hdr's buffer list.
		 */
		arc_buf_t *lastbuf = arc_buf_remove(hdr, buf);
		ASSERT3P(lastbuf, !=, NULL);

		/*
		 * If the current arc_buf_t and the hdr are sharing their data
		 * buffer, then we must stop sharing that block.
		 */
		if (arc_buf_is_shared(buf)) {
			ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf);
			VERIFY(!arc_buf_is_shared(lastbuf));

			/*
			 * First, sever the block sharing relationship between
			 * buf and the arc_buf_hdr_t.
			 */
			arc_unshare_buf(hdr, buf);

			/*
			 * Now we need to recreate the hdr's b_pabd. Since we
			 * have lastbuf handy, we try to share with it, but if
			 * we can't then we allocate a new b_pabd and copy the
			 * data from buf into it.
			 */
			if (arc_can_share(hdr, lastbuf)) {
				arc_share_buf(hdr, lastbuf);
			} else {
				arc_hdr_alloc_abd(hdr, B_FALSE);
				abd_copy_from_buf(hdr->b_l1hdr.b_pabd,
				    buf->b_data, psize);
			}
			VERIFY3P(lastbuf->b_data, !=, NULL);
		} else if (HDR_SHARED_DATA(hdr)) {
			/*
			 * Uncompressed shared buffers are always at the end
			 * of the list. Compressed buffers don't have the
			 * same requirements. This makes it hard to
			 * simply assert that the lastbuf is shared so
			 * we rely on the hdr's compression flags to determine
			 * if we have a compressed, shared buffer.
			 */
			ASSERT(arc_buf_is_shared(lastbuf) ||
			    arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF);
			ASSERT(!ARC_BUF_SHARED(buf));
		}

		ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr));
		ASSERT3P(state, !=, arc_l2c_only);

		(void) zfs_refcount_remove_many(&state->arcs_size,
		    arc_buf_size(buf), buf);

		if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) {
			ASSERT3P(state, !=, arc_l2c_only);
			(void) zfs_refcount_remove_many(
			    &state->arcs_esize[type],
			    arc_buf_size(buf), buf);
		}

		hdr->b_l1hdr.b_bufcnt -= 1;
		if (ARC_BUF_ENCRYPTED(buf))
			hdr->b_crypt_hdr.b_ebufcnt -= 1;

		arc_cksum_verify(buf);
		arc_buf_unwatch(buf);

		/* if this is the last uncompressed buf free the checksum */
		if (!arc_hdr_has_uncompressed_buf(hdr))
			arc_cksum_free(hdr);

		mutex_exit(hash_lock);

		/*
		 * Allocate a new hdr. The new hdr will contain a b_pabd
		 * buffer which will be freed in arc_write().
		 */
		nhdr = arc_hdr_alloc(spa, psize, lsize, protected,
		    compress, type, HDR_HAS_RABD(hdr));
		ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL);
		ASSERT0(nhdr->b_l1hdr.b_bufcnt);
		ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt));
		VERIFY3U(nhdr->b_type, ==, type);
		ASSERT(!HDR_SHARED_DATA(nhdr));

		nhdr->b_l1hdr.b_buf = buf;
		nhdr->b_l1hdr.b_bufcnt = 1;
		if (ARC_BUF_ENCRYPTED(buf))
			nhdr->b_crypt_hdr.b_ebufcnt = 1;
		nhdr->b_l1hdr.b_mru_hits = 0;
		nhdr->b_l1hdr.b_mru_ghost_hits = 0;
		nhdr->b_l1hdr.b_mfu_hits = 0;
		nhdr->b_l1hdr.b_mfu_ghost_hits = 0;
		nhdr->b_l1hdr.b_l2_hits = 0;
		(void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag);
		buf->b_hdr = nhdr;

		mutex_exit(&buf->b_evict_lock);
		(void) zfs_refcount_add_many(&arc_anon->arcs_size,
		    arc_buf_size(buf), buf);
	} else {
		mutex_exit(&buf->b_evict_lock);
		ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1);
		/* protected by hash lock, or hdr is on arc_anon */
		ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
		ASSERT(!HDR_IO_IN_PROGRESS(hdr));
		hdr->b_l1hdr.b_mru_hits = 0;
		hdr->b_l1hdr.b_mru_ghost_hits = 0;
		hdr->b_l1hdr.b_mfu_hits = 0;
		hdr->b_l1hdr.b_mfu_ghost_hits = 0;
		hdr->b_l1hdr.b_l2_hits = 0;
		arc_change_state(arc_anon, hdr, hash_lock);
		hdr->b_l1hdr.b_arc_access = 0;

		mutex_exit(hash_lock);
		buf_discard_identity(hdr);
		arc_buf_thaw(buf);
	}
}
int
arc_released(arc_buf_t *buf)
{
	int released;

	mutex_enter(&buf->b_evict_lock);
	released = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_state == arc_anon);
	mutex_exit(&buf->b_evict_lock);
	return (released);
}
int
arc_referenced(arc_buf_t *buf)
{
	int referenced;

	mutex_enter(&buf->b_evict_lock);
	referenced = (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt));
	mutex_exit(&buf->b_evict_lock);
	return (referenced);
}
static void
arc_write_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;
	blkptr_t *bp = zio->io_bp;
	uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp);
	fstrans_cookie_t cookie = spl_fstrans_mark();

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt));
	ASSERT(hdr->b_l1hdr.b_bufcnt > 0);

	/*
	 * If we're reexecuting this zio because the pool suspended, then
	 * cleanup any state that was previously set the first time the
	 * callback was invoked.
	 */
	if (zio->io_flags & ZIO_FLAG_REEXECUTED) {
		arc_cksum_free(hdr);
		arc_buf_unwatch(buf);
		if (hdr->b_l1hdr.b_pabd != NULL) {
			if (arc_buf_is_shared(buf)) {
				arc_unshare_buf(hdr, buf);
			} else {
				arc_hdr_free_abd(hdr, B_FALSE);
			}
		}

		if (HDR_HAS_RABD(hdr))
			arc_hdr_free_abd(hdr, B_TRUE);
	}
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
	ASSERT(!HDR_HAS_RABD(hdr));
	ASSERT(!HDR_SHARED_DATA(hdr));
	ASSERT(!arc_buf_is_shared(buf));

	callback->awcb_ready(zio, buf, callback->awcb_private);

	if (HDR_IO_IN_PROGRESS(hdr))
		ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED);

	arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);

	if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr))
		hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp));

	if (BP_IS_PROTECTED(bp)) {
		/* ZIL blocks are written through zio_rewrite */
		ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG);
		ASSERT(HDR_PROTECTED(hdr));

		if (BP_SHOULD_BYTESWAP(bp)) {
			if (BP_GET_LEVEL(bp) > 0) {
				hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64;
			} else {
				hdr->b_l1hdr.b_byteswap =
				    DMU_OT_BYTESWAP(BP_GET_TYPE(bp));
			}
		} else {
			hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
		}

		hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp);
		hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset;
		zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt,
		    hdr->b_crypt_hdr.b_iv);
		zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac);
	}

	/*
	 * If this block was written for raw encryption but the zio layer
	 * ended up only authenticating it, adjust the buffer flags now.
	 */
	if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) {
		arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH);
		buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
		if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF)
			buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	} else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) {
		buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED;
		buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED;
	}

	/* this must be done after the buffer flags are adjusted */
	arc_cksum_compute(buf);

	enum zio_compress compress;
	if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) {
		compress = ZIO_COMPRESS_OFF;
	} else {
		ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp));
		compress = BP_GET_COMPRESS(bp);
	}
	HDR_SET_PSIZE(hdr, psize);
	arc_hdr_set_compress(hdr, compress);

	if (zio->io_error != 0 || psize == 0)
		goto out;

	/*
	 * Fill the hdr with data. If the buffer is encrypted we have no choice
	 * but to copy the data into b_rabd. If the hdr is compressed, the data
	 * we want is available from the zio, otherwise we can take it from
	 * the buf.
	 *
	 * We might be able to share the buf's data with the hdr here. However,
	 * doing so would cause the ARC to be full of linear ABDs if we write a
	 * lot of shareable data. As a compromise, we check whether scattered
	 * ABDs are allowed, and assume that if they are then the user wants
	 * the ARC to be primarily filled with them regardless of the data being
	 * written. Therefore, if they're allowed then we allocate one and copy
	 * the data into it; otherwise, we share the data directly if we can.
	 */
	if (ARC_BUF_ENCRYPTED(buf)) {
		ASSERT3U(psize, >, 0);
		ASSERT(ARC_BUF_COMPRESSED(buf));
		arc_hdr_alloc_abd(hdr, B_TRUE);
		abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
	} else if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) {
		/*
		 * Ideally, we would always copy the io_abd into b_pabd, but the
		 * user may have disabled compressed ARC, thus we must check the
		 * hdr's compression setting rather than the io_bp's.
		 */
		if (BP_IS_ENCRYPTED(bp)) {
			ASSERT3U(psize, >, 0);
			arc_hdr_alloc_abd(hdr, B_TRUE);
			abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize);
		} else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF &&
		    !ARC_BUF_COMPRESSED(buf)) {
			ASSERT3U(psize, >, 0);
			arc_hdr_alloc_abd(hdr, B_FALSE);
			abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize);
		} else {
			ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr));
			arc_hdr_alloc_abd(hdr, B_FALSE);
			abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data,
			    arc_buf_size(buf));
		}
	} else {
		ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd));
		ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf));
		ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1);

		arc_share_buf(hdr, buf);
	}

out:
	arc_hdr_verify(hdr, bp);
	spl_fstrans_unmark(cookie);
}
static void
arc_write_children_ready(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;

	callback->awcb_children_ready(zio, buf, callback->awcb_private);
}
/*
 * The SPA calls this callback for each physical write that happens on behalf
 * of a logical write. See the comment in dbuf_write_physdone() for details.
 */
static void
arc_write_physdone(zio_t *zio)
{
	arc_write_callback_t *cb = zio->io_private;
	if (cb->awcb_physdone != NULL)
		cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private);
}
static void
arc_write_done(zio_t *zio)
{
	arc_write_callback_t *callback = zio->io_private;
	arc_buf_t *buf = callback->awcb_buf;
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);

	if (zio->io_error == 0) {
		arc_hdr_verify(hdr, zio->io_bp);

		if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) {
			buf_discard_identity(hdr);
		} else {
			hdr->b_dva = *BP_IDENTITY(zio->io_bp);
			hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp);
		}
	} else {
		ASSERT(HDR_EMPTY(hdr));
	}

	/*
	 * If the block to be written was all-zero or compressed enough to be
	 * embedded in the BP, no write was performed so there will be no
	 * dva/birth/checksum. The buffer must therefore remain anonymous
	 * (and uncached).
	 */
	if (!HDR_EMPTY(hdr)) {
		arc_buf_hdr_t *exists;
		kmutex_t *hash_lock;

		ASSERT3U(zio->io_error, ==, 0);

		arc_cksum_verify(buf);

		exists = buf_hash_insert(hdr, &hash_lock);
		if (exists != NULL) {
			/*
			 * This can only happen if we overwrite for
			 * sync-to-convergence, because we remove
			 * buffers from the hash table when we arc_free().
			 */
			if (zio->io_flags & ZIO_FLAG_IO_REWRITE) {
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad overwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
				ASSERT(zfs_refcount_is_zero(
				    &exists->b_l1hdr.b_refcnt));
				arc_change_state(arc_anon, exists, hash_lock);
				mutex_exit(hash_lock);
				arc_hdr_destroy(exists);
				exists = buf_hash_insert(hdr, &hash_lock);
				ASSERT3P(exists, ==, NULL);
			} else if (zio->io_flags & ZIO_FLAG_NOPWRITE) {
				/* nopwrite */
				ASSERT(zio->io_prop.zp_nopwrite);
				if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp))
					panic("bad nopwrite, hdr=%p exists=%p",
					    (void *)hdr, (void *)exists);
			} else {
				/* dedup */
				ASSERT(hdr->b_l1hdr.b_bufcnt == 1);
				ASSERT(hdr->b_l1hdr.b_state == arc_anon);
				ASSERT(BP_GET_DEDUP(zio->io_bp));
				ASSERT(BP_GET_LEVEL(zio->io_bp) == 0);
			}
		}
		arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
		/* if it's not anon, we are doing a scrub */
		if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon)
			arc_access(hdr, hash_lock);
		mutex_exit(hash_lock);
	} else {
		arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS);
	}

	ASSERT(!zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt));
	callback->awcb_done(zio, buf, callback->awcb_private);

	abd_put(zio->io_abd);
	kmem_free(callback, sizeof (arc_write_callback_t));
}
zio_t *
arc_write(zio_t *pio, spa_t *spa, uint64_t txg,
    blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc,
    const zio_prop_t *zp, arc_write_done_func_t *ready,
    arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone,
    arc_write_done_func_t *done, void *private, zio_priority_t priority,
    int zio_flags, const zbookmark_phys_t *zb)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	arc_write_callback_t *callback;
	zio_t *zio;
	zio_prop_t localprop = *zp;

	ASSERT3P(ready, !=, NULL);
	ASSERT3P(done, !=, NULL);
	ASSERT(!HDR_IO_ERROR(hdr));
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));
	ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL);
	ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0);
	if (l2arc)
		arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE);

	if (ARC_BUF_ENCRYPTED(buf)) {
		ASSERT(ARC_BUF_COMPRESSED(buf));
		localprop.zp_encrypt = B_TRUE;
		localprop.zp_compress = HDR_GET_COMPRESS(hdr);
		localprop.zp_byteorder =
		    (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
		    ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
		bcopy(hdr->b_crypt_hdr.b_salt, localprop.zp_salt,
		    ZIO_DATA_SALT_LEN);
		bcopy(hdr->b_crypt_hdr.b_iv, localprop.zp_iv,
		    ZIO_DATA_IV_LEN);
		bcopy(hdr->b_crypt_hdr.b_mac, localprop.zp_mac,
		    ZIO_DATA_MAC_LEN);
		if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) {
			localprop.zp_nopwrite = B_FALSE;
			localprop.zp_copies =
			    MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1);
		}
		zio_flags |= ZIO_FLAG_RAW;
	} else if (ARC_BUF_COMPRESSED(buf)) {
		ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf));
		localprop.zp_compress = HDR_GET_COMPRESS(hdr);
		zio_flags |= ZIO_FLAG_RAW_COMPRESS;
	}
	callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
	callback->awcb_ready = ready;
	callback->awcb_children_ready = children_ready;
	callback->awcb_physdone = physdone;
	callback->awcb_done = done;
	callback->awcb_private = private;
	callback->awcb_buf = buf;

	/*
	 * The hdr's b_pabd is now stale, free it now. A new data block
	 * will be allocated when the zio pipeline calls arc_write_ready().
	 */
	if (hdr->b_l1hdr.b_pabd != NULL) {
		/*
		 * If the buf is currently sharing the data block with
		 * the hdr then we need to break that relationship here.
		 * The hdr will remain with a NULL data pointer and the
		 * buf will take sole ownership of the block.
		 */
		if (arc_buf_is_shared(buf)) {
			arc_unshare_buf(hdr, buf);
		} else {
			arc_hdr_free_abd(hdr, B_FALSE);
		}
		VERIFY3P(buf->b_data, !=, NULL);
	}

	if (HDR_HAS_RABD(hdr))
		arc_hdr_free_abd(hdr, B_TRUE);

	if (!(zio_flags & ZIO_FLAG_RAW))
		arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF);

	ASSERT(!arc_buf_is_shared(buf));
	ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);

	zio = zio_write(pio, spa, txg, bp,
	    abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)),
	    HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready,
	    (children_ready != NULL) ? arc_write_children_ready : NULL,
	    arc_write_physdone, arc_write_done, callback,
	    priority, zio_flags, zb);

	return (zio);
}
static int
arc_memory_throttle(spa_t *spa, uint64_t reserve, uint64_t txg)
{
#ifdef _KERNEL
	uint64_t available_memory = arc_free_memory();

#if defined(_ILP32)
	available_memory =
	    MIN(available_memory, vmem_size(heap_arena, VMEM_FREE));
#endif

	if (available_memory > arc_all_memory() * arc_lotsfree_percent / 100)
		return (0);

	if (txg > spa->spa_lowmem_last_txg) {
		spa->spa_lowmem_last_txg = txg;
		spa->spa_lowmem_page_load = 0;
	}
	/*
	 * If we are in pageout, we know that memory is already tight,
	 * the arc is already going to be evicting, so we just want to
	 * continue to let page writes occur as quickly as possible.
	 */
	if (current_is_kswapd()) {
		if (spa->spa_lowmem_page_load >
		    MAX(arc_sys_free / 4, available_memory) / 4) {
			DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
			return (SET_ERROR(ERESTART));
		}
		/* Note: reserve is inflated, so we deflate */
		atomic_add_64(&spa->spa_lowmem_page_load, reserve / 8);
		return (0);
	} else if (spa->spa_lowmem_page_load > 0 && arc_reclaim_needed()) {
		/* memory is low, delay before restarting */
		ARCSTAT_INCR(arcstat_memory_throttle_count, 1);
		DMU_TX_STAT_BUMP(dmu_tx_memory_reclaim);
		return (SET_ERROR(EAGAIN));
	}
	spa->spa_lowmem_page_load = 0;
#endif /* _KERNEL */
	return (0);
}
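
/*
 * Illustrative example (not part of the original source): inside kswapd,
 * with arc_sys_free = 512 MiB and available_memory = 100 MiB, the page
 * load limit is MAX(512 MiB / 4, 100 MiB) / 4 = 32 MiB; once
 * spa_lowmem_page_load exceeds that, new reservations return ERESTART
 * until the txg advances.
 */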
void
arc_tempreserve_clear(uint64_t reserve)
{
	atomic_add_64(&arc_tempreserve, -reserve);
	ASSERT((int64_t)arc_tempreserve >= 0);
}
int
arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg)
{
	int error;
	uint64_t anon_size;

	if (!arc_no_grow &&
	    reserve > arc_c/4 &&
	    reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
		arc_c = MIN(arc_c_max, reserve * 4);

	/*
	 * Throttle when the calculated memory footprint for the TXG
	 * exceeds the target ARC size.
	 */
	if (reserve > arc_c) {
		DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
		return (SET_ERROR(ERESTART));
	}

	/*
	 * Don't count loaned bufs as in flight dirty data to prevent long
	 * network delays from blocking transactions that are ready to be
	 * assigned to a txg.
	 */

	/* assert that it has not wrapped around */
	ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);

	anon_size = MAX((int64_t)(zfs_refcount_count(&arc_anon->arcs_size) -
	    arc_loaned_bytes), 0);

	/*
	 * Writes will, almost always, require additional memory allocations
	 * in order to compress/encrypt/etc the data. We therefore need to
	 * make sure that there is sufficient available memory for this.
	 */
	error = arc_memory_throttle(spa, reserve, txg);
	if (error != 0)
		return (error);

	/*
	 * Throttle writes when the amount of dirty data in the cache
	 * gets too large. We try to keep the cache less than half full
	 * of dirty blocks so that our sync times don't grow too large.
	 *
	 * In the case of one pool being built on another pool, we want
	 * to make sure we don't end up throttling the lower (backing)
	 * pool when the upper pool is the majority contributor to dirty
	 * data. To ensure we make forward progress during throttling, we
	 * also check the current pool's net dirty data and only throttle
	 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty
	 * data in the cache.
	 *
	 * Note: if two requests come in concurrently, we might let them
	 * both succeed, when one of them should fail. Not a huge deal.
	 */
	uint64_t total_dirty = reserve + arc_tempreserve + anon_size;
	uint64_t spa_dirty_anon = spa_dirty_data(spa);

	if (total_dirty > arc_c * zfs_arc_dirty_limit_percent / 100 &&
	    anon_size > arc_c * zfs_arc_anon_limit_percent / 100 &&
	    spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) {
		uint64_t meta_esize = zfs_refcount_count(
		    &arc_anon->arcs_esize[ARC_BUFC_METADATA]);
		uint64_t data_esize =
		    zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
		dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
		    "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n",
		    arc_tempreserve >> 10, meta_esize >> 10,
		    data_esize >> 10, reserve >> 10, arc_c >> 10);
		DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
		return (SET_ERROR(ERESTART));
	}
	atomic_add_64(&arc_tempreserve, reserve);
	return (0);
}
static void
arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
    kstat_named_t *evict_data, kstat_named_t *evict_metadata)
{
	size->value.ui64 = zfs_refcount_count(&state->arcs_size);
	evict_data->value.ui64 =
	    zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
	evict_metadata->value.ui64 =
	    zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
}
7367 arc_kstat_update(kstat_t
*ksp
, int rw
)
7369 arc_stats_t
*as
= ksp
->ks_data
;
7371 if (rw
== KSTAT_WRITE
) {
7372 return (SET_ERROR(EACCES
));
7374 arc_kstat_update_state(arc_anon
,
7375 &as
->arcstat_anon_size
,
7376 &as
->arcstat_anon_evictable_data
,
7377 &as
->arcstat_anon_evictable_metadata
);
7378 arc_kstat_update_state(arc_mru
,
7379 &as
->arcstat_mru_size
,
7380 &as
->arcstat_mru_evictable_data
,
7381 &as
->arcstat_mru_evictable_metadata
);
7382 arc_kstat_update_state(arc_mru_ghost
,
7383 &as
->arcstat_mru_ghost_size
,
7384 &as
->arcstat_mru_ghost_evictable_data
,
7385 &as
->arcstat_mru_ghost_evictable_metadata
);
7386 arc_kstat_update_state(arc_mfu
,
7387 &as
->arcstat_mfu_size
,
7388 &as
->arcstat_mfu_evictable_data
,
7389 &as
->arcstat_mfu_evictable_metadata
);
7390 arc_kstat_update_state(arc_mfu_ghost
,
7391 &as
->arcstat_mfu_ghost_size
,
7392 &as
->arcstat_mfu_ghost_evictable_data
,
7393 &as
->arcstat_mfu_ghost_evictable_metadata
);
7395 ARCSTAT(arcstat_size
) = aggsum_value(&arc_size
);
7396 ARCSTAT(arcstat_meta_used
) = aggsum_value(&arc_meta_used
);
7397 ARCSTAT(arcstat_data_size
) = aggsum_value(&astat_data_size
);
7398 ARCSTAT(arcstat_metadata_size
) =
7399 aggsum_value(&astat_metadata_size
);
7400 ARCSTAT(arcstat_hdr_size
) = aggsum_value(&astat_hdr_size
);
7401 ARCSTAT(arcstat_l2_hdr_size
) = aggsum_value(&astat_l2_hdr_size
);
7402 ARCSTAT(arcstat_dbuf_size
) = aggsum_value(&astat_dbuf_size
);
7403 ARCSTAT(arcstat_dnode_size
) = aggsum_value(&astat_dnode_size
);
7404 ARCSTAT(arcstat_bonus_size
) = aggsum_value(&astat_bonus_size
);
7406 as
->arcstat_memory_all_bytes
.value
.ui64
=
7408 as
->arcstat_memory_free_bytes
.value
.ui64
=
7410 as
->arcstat_memory_available_bytes
.value
.i64
=
7411 arc_available_memory();
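/*
 * Illustrative sketch (not part of the ARC): the statistics filled in by
 * arc_kstat_update() are exported through the kstat framework; on Linux
 * builds they are typically visible as /proc/spl/kstat/zfs/arcstats. A
 * minimal userland reader, assuming that path and the usual two kstat
 * header lines, might look like this.
 */
#if 0	/* example only, builds in userland */
#include <stdio.h>

int
main(void)
{
	FILE *fp = fopen("/proc/spl/kstat/zfs/arcstats", "r");
	char name[64], type[8];
	unsigned long long val;

	if (fp == NULL)
		return (1);
	/* skip the header lines emitted by the kstat layer */
	(void) fscanf(fp, "%*[^\n]\n%*[^\n]\n");
	while (fscanf(fp, "%63s %7s %llu\n", name, type, &val) == 3)
		printf("%s = %llu\n", name, val);
	(void) fclose(fp);
	return (0);
}
#endif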
/*
 * This function *must* return indices evenly distributed between all
 * sublists of the multilist. This is needed due to how the ARC eviction
 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
 * distributed between all sublists and uses this assumption when
 * deciding which sublist to evict from and how much to evict from it.
 */
7425 arc_state_multilist_index_func(multilist_t
*ml
, void *obj
)
7427 arc_buf_hdr_t
*hdr
= obj
;
	/*
	 * We rely on b_dva to generate evenly distributed index
	 * numbers using buf_hash below. So, as an added precaution,
	 * let's make sure we never add empty buffers to the arc lists.
	 */
	ASSERT(!HDR_EMPTY(hdr));

	/*
	 * The assumption here is that the hash value for a given
	 * arc_buf_hdr_t will remain constant throughout its lifetime
	 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
	 * Thus, we don't need to store the header's sublist index
	 * on insertion, as this index can be recalculated on removal.
	 *
	 * Also, the low order bits of the hash value are thought to be
	 * distributed evenly. Otherwise, in the case that the multilist
	 * has a power of two number of sublists, each sublist's usage
	 * would not be evenly distributed.
	 */
	return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
	    multilist_get_num_sublists(ml));
}
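/*
 * Illustrative sketch (not part of the ARC): because b_spa, b_dva and
 * b_birth never change for a live header, the sublist index computed at
 * insertion time can simply be recomputed at removal time. A hypothetical
 * debug check expressing that invariant:
 */
#if 0	/* example only */
static void
example_check_index_stable(multilist_t *ml, arc_buf_hdr_t *hdr,
    unsigned int idx_at_insert)
{
	/* must match what arc_state_multilist_index_func() returned */
	ASSERT3U(idx_at_insert, ==,
	    buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
	    multilist_get_num_sublists(ml));
}
#endif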
/*
 * Called during module initialization and periodically thereafter to
 * apply reasonable changes to the exposed performance tunings. Non-zero
 * zfs_* values which differ from the currently set values will be applied.
 */
7458 arc_tuning_update(void)
7460 uint64_t allmem
= arc_all_memory();
7461 unsigned long limit
;
7463 /* Valid range: 64M - <all physical memory> */
7464 if ((zfs_arc_max
) && (zfs_arc_max
!= arc_c_max
) &&
7465 (zfs_arc_max
>= 64 << 20) && (zfs_arc_max
< allmem
) &&
7466 (zfs_arc_max
> arc_c_min
)) {
7467 arc_c_max
= zfs_arc_max
;
7469 arc_p
= (arc_c
>> 1);
7470 if (arc_meta_limit
> arc_c_max
)
7471 arc_meta_limit
= arc_c_max
;
7472 if (arc_dnode_limit
> arc_meta_limit
)
7473 arc_dnode_limit
= arc_meta_limit
;
7476 /* Valid range: 32M - <arc_c_max> */
7477 if ((zfs_arc_min
) && (zfs_arc_min
!= arc_c_min
) &&
7478 (zfs_arc_min
>= 2ULL << SPA_MAXBLOCKSHIFT
) &&
7479 (zfs_arc_min
<= arc_c_max
)) {
7480 arc_c_min
= zfs_arc_min
;
7481 arc_c
= MAX(arc_c
, arc_c_min
);
7484 /* Valid range: 16M - <arc_c_max> */
7485 if ((zfs_arc_meta_min
) && (zfs_arc_meta_min
!= arc_meta_min
) &&
7486 (zfs_arc_meta_min
>= 1ULL << SPA_MAXBLOCKSHIFT
) &&
7487 (zfs_arc_meta_min
<= arc_c_max
)) {
7488 arc_meta_min
= zfs_arc_meta_min
;
7489 if (arc_meta_limit
< arc_meta_min
)
7490 arc_meta_limit
= arc_meta_min
;
7491 if (arc_dnode_limit
< arc_meta_min
)
7492 arc_dnode_limit
= arc_meta_min
;
7495 /* Valid range: <arc_meta_min> - <arc_c_max> */
7496 limit
= zfs_arc_meta_limit
? zfs_arc_meta_limit
:
7497 MIN(zfs_arc_meta_limit_percent
, 100) * arc_c_max
/ 100;
7498 if ((limit
!= arc_meta_limit
) &&
7499 (limit
>= arc_meta_min
) &&
7500 (limit
<= arc_c_max
))
7501 arc_meta_limit
= limit
;
7503 /* Valid range: <arc_meta_min> - <arc_meta_limit> */
7504 limit
= zfs_arc_dnode_limit
? zfs_arc_dnode_limit
:
7505 MIN(zfs_arc_dnode_limit_percent
, 100) * arc_meta_limit
/ 100;
7506 if ((limit
!= arc_dnode_limit
) &&
7507 (limit
>= arc_meta_min
) &&
7508 (limit
<= arc_meta_limit
))
7509 arc_dnode_limit
= limit
;
7511 /* Valid range: 1 - N */
7512 if (zfs_arc_grow_retry
)
7513 arc_grow_retry
= zfs_arc_grow_retry
;
7515 /* Valid range: 1 - N */
7516 if (zfs_arc_shrink_shift
) {
7517 arc_shrink_shift
= zfs_arc_shrink_shift
;
7518 arc_no_grow_shift
= MIN(arc_no_grow_shift
, arc_shrink_shift
-1);
7521 /* Valid range: 1 - N */
7522 if (zfs_arc_p_min_shift
)
7523 arc_p_min_shift
= zfs_arc_p_min_shift
;
7525 /* Valid range: 1 - N ms */
7526 if (zfs_arc_min_prefetch_ms
)
7527 arc_min_prefetch_ms
= zfs_arc_min_prefetch_ms
;
7529 /* Valid range: 1 - N ms */
7530 if (zfs_arc_min_prescient_prefetch_ms
) {
7531 arc_min_prescient_prefetch_ms
=
7532 zfs_arc_min_prescient_prefetch_ms
;
7535 /* Valid range: 0 - 100 */
7536 if ((zfs_arc_lotsfree_percent
>= 0) &&
7537 (zfs_arc_lotsfree_percent
<= 100))
7538 arc_lotsfree_percent
= zfs_arc_lotsfree_percent
;
7540 /* Valid range: 0 - <all physical memory> */
7541 if ((zfs_arc_sys_free
) && (zfs_arc_sys_free
!= arc_sys_free
))
7542 arc_sys_free
= MIN(MAX(zfs_arc_sys_free
, 0), allmem
);
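/*
 * Illustrative sketch (not part of the ARC): every branch in
 * arc_tuning_update() follows the same shape - ignore a zero (unset)
 * module parameter, and apply a non-zero one only when it falls inside
 * its documented valid range. A hypothetical helper capturing that
 * pattern:
 */
#if 0	/* example only */
static void
example_apply_tunable(uint64_t zfs_value, uint64_t lo, uint64_t hi,
    uint64_t *target)
{
	/* zero means "leave the current value alone" */
	if (zfs_value == 0 || zfs_value == *target)
		return;
	if (zfs_value >= lo && zfs_value <= hi)
		*target = zfs_value;
}
#endif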
7547 arc_state_init(void)
7549 arc_anon
= &ARC_anon
;
7551 arc_mru_ghost
= &ARC_mru_ghost
;
7553 arc_mfu_ghost
= &ARC_mfu_ghost
;
7554 arc_l2c_only
= &ARC_l2c_only
;
7556 arc_mru
->arcs_list
[ARC_BUFC_METADATA
] =
7557 multilist_create(sizeof (arc_buf_hdr_t
),
7558 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7559 arc_state_multilist_index_func
);
7560 arc_mru
->arcs_list
[ARC_BUFC_DATA
] =
7561 multilist_create(sizeof (arc_buf_hdr_t
),
7562 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7563 arc_state_multilist_index_func
);
7564 arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
] =
7565 multilist_create(sizeof (arc_buf_hdr_t
),
7566 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7567 arc_state_multilist_index_func
);
7568 arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
] =
7569 multilist_create(sizeof (arc_buf_hdr_t
),
7570 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7571 arc_state_multilist_index_func
);
7572 arc_mfu
->arcs_list
[ARC_BUFC_METADATA
] =
7573 multilist_create(sizeof (arc_buf_hdr_t
),
7574 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7575 arc_state_multilist_index_func
);
7576 arc_mfu
->arcs_list
[ARC_BUFC_DATA
] =
7577 multilist_create(sizeof (arc_buf_hdr_t
),
7578 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7579 arc_state_multilist_index_func
);
7580 arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
] =
7581 multilist_create(sizeof (arc_buf_hdr_t
),
7582 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7583 arc_state_multilist_index_func
);
7584 arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
] =
7585 multilist_create(sizeof (arc_buf_hdr_t
),
7586 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7587 arc_state_multilist_index_func
);
7588 arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
] =
7589 multilist_create(sizeof (arc_buf_hdr_t
),
7590 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7591 arc_state_multilist_index_func
);
7592 arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
] =
7593 multilist_create(sizeof (arc_buf_hdr_t
),
7594 offsetof(arc_buf_hdr_t
, b_l1hdr
.b_arc_node
),
7595 arc_state_multilist_index_func
);
7597 zfs_refcount_create(&arc_anon
->arcs_esize
[ARC_BUFC_METADATA
]);
7598 zfs_refcount_create(&arc_anon
->arcs_esize
[ARC_BUFC_DATA
]);
7599 zfs_refcount_create(&arc_mru
->arcs_esize
[ARC_BUFC_METADATA
]);
7600 zfs_refcount_create(&arc_mru
->arcs_esize
[ARC_BUFC_DATA
]);
7601 zfs_refcount_create(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7602 zfs_refcount_create(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7603 zfs_refcount_create(&arc_mfu
->arcs_esize
[ARC_BUFC_METADATA
]);
7604 zfs_refcount_create(&arc_mfu
->arcs_esize
[ARC_BUFC_DATA
]);
7605 zfs_refcount_create(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7606 zfs_refcount_create(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7607 zfs_refcount_create(&arc_l2c_only
->arcs_esize
[ARC_BUFC_METADATA
]);
7608 zfs_refcount_create(&arc_l2c_only
->arcs_esize
[ARC_BUFC_DATA
]);
7610 zfs_refcount_create(&arc_anon
->arcs_size
);
7611 zfs_refcount_create(&arc_mru
->arcs_size
);
7612 zfs_refcount_create(&arc_mru_ghost
->arcs_size
);
7613 zfs_refcount_create(&arc_mfu
->arcs_size
);
7614 zfs_refcount_create(&arc_mfu_ghost
->arcs_size
);
7615 zfs_refcount_create(&arc_l2c_only
->arcs_size
);
7617 aggsum_init(&arc_meta_used
, 0);
7618 aggsum_init(&arc_size
, 0);
7619 aggsum_init(&astat_data_size
, 0);
7620 aggsum_init(&astat_metadata_size
, 0);
7621 aggsum_init(&astat_hdr_size
, 0);
7622 aggsum_init(&astat_l2_hdr_size
, 0);
7623 aggsum_init(&astat_bonus_size
, 0);
7624 aggsum_init(&astat_dnode_size
, 0);
7625 aggsum_init(&astat_dbuf_size
, 0);
7627 arc_anon
->arcs_state
= ARC_STATE_ANON
;
7628 arc_mru
->arcs_state
= ARC_STATE_MRU
;
7629 arc_mru_ghost
->arcs_state
= ARC_STATE_MRU_GHOST
;
7630 arc_mfu
->arcs_state
= ARC_STATE_MFU
;
7631 arc_mfu_ghost
->arcs_state
= ARC_STATE_MFU_GHOST
;
7632 arc_l2c_only
->arcs_state
= ARC_STATE_L2C_ONLY
;
7636 arc_state_fini(void)
7638 zfs_refcount_destroy(&arc_anon
->arcs_esize
[ARC_BUFC_METADATA
]);
7639 zfs_refcount_destroy(&arc_anon
->arcs_esize
[ARC_BUFC_DATA
]);
7640 zfs_refcount_destroy(&arc_mru
->arcs_esize
[ARC_BUFC_METADATA
]);
7641 zfs_refcount_destroy(&arc_mru
->arcs_esize
[ARC_BUFC_DATA
]);
7642 zfs_refcount_destroy(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7643 zfs_refcount_destroy(&arc_mru_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7644 zfs_refcount_destroy(&arc_mfu
->arcs_esize
[ARC_BUFC_METADATA
]);
7645 zfs_refcount_destroy(&arc_mfu
->arcs_esize
[ARC_BUFC_DATA
]);
7646 zfs_refcount_destroy(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_METADATA
]);
7647 zfs_refcount_destroy(&arc_mfu_ghost
->arcs_esize
[ARC_BUFC_DATA
]);
7648 zfs_refcount_destroy(&arc_l2c_only
->arcs_esize
[ARC_BUFC_METADATA
]);
7649 zfs_refcount_destroy(&arc_l2c_only
->arcs_esize
[ARC_BUFC_DATA
]);
7651 zfs_refcount_destroy(&arc_anon
->arcs_size
);
7652 zfs_refcount_destroy(&arc_mru
->arcs_size
);
7653 zfs_refcount_destroy(&arc_mru_ghost
->arcs_size
);
7654 zfs_refcount_destroy(&arc_mfu
->arcs_size
);
7655 zfs_refcount_destroy(&arc_mfu_ghost
->arcs_size
);
7656 zfs_refcount_destroy(&arc_l2c_only
->arcs_size
);
7658 multilist_destroy(arc_mru
->arcs_list
[ARC_BUFC_METADATA
]);
7659 multilist_destroy(arc_mru_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
7660 multilist_destroy(arc_mfu
->arcs_list
[ARC_BUFC_METADATA
]);
7661 multilist_destroy(arc_mfu_ghost
->arcs_list
[ARC_BUFC_METADATA
]);
7662 multilist_destroy(arc_mru
->arcs_list
[ARC_BUFC_DATA
]);
7663 multilist_destroy(arc_mru_ghost
->arcs_list
[ARC_BUFC_DATA
]);
7664 multilist_destroy(arc_mfu
->arcs_list
[ARC_BUFC_DATA
]);
7665 multilist_destroy(arc_mfu_ghost
->arcs_list
[ARC_BUFC_DATA
]);
7666 multilist_destroy(arc_l2c_only
->arcs_list
[ARC_BUFC_METADATA
]);
7667 multilist_destroy(arc_l2c_only
->arcs_list
[ARC_BUFC_DATA
]);
7669 aggsum_fini(&arc_meta_used
);
7670 aggsum_fini(&arc_size
);
7671 aggsum_fini(&astat_data_size
);
7672 aggsum_fini(&astat_metadata_size
);
7673 aggsum_fini(&astat_hdr_size
);
7674 aggsum_fini(&astat_l2_hdr_size
);
7675 aggsum_fini(&astat_bonus_size
);
7676 aggsum_fini(&astat_dnode_size
);
7677 aggsum_fini(&astat_dbuf_size
);
7681 arc_target_bytes(void)
7689 uint64_t percent
, allmem
= arc_all_memory();
7691 mutex_init(&arc_reclaim_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
7692 cv_init(&arc_reclaim_thread_cv
, NULL
, CV_DEFAULT
, NULL
);
7693 cv_init(&arc_reclaim_waiters_cv
, NULL
, CV_DEFAULT
, NULL
);
7695 arc_min_prefetch_ms
= 1000;
7696 arc_min_prescient_prefetch_ms
= 6000;
7700 * Register a shrinker to support synchronous (direct) memory
7701 * reclaim from the arc. This is done to prevent kswapd from
7702 * swapping out pages when it is preferable to shrink the arc.
7704 spl_register_shrinker(&arc_shrinker
);
7706 /* Set to 1/64 of all memory or a minimum of 512K */
7707 arc_sys_free
= MAX(allmem
/ 64, (512 * 1024));
7711 /* Set max to 1/2 of all memory */
7712 arc_c_max
= allmem
/ 2;
7715 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more */
7716 arc_c_min
= MAX(allmem
/ 32, 2ULL << SPA_MAXBLOCKSHIFT
);
	/*
	 * In userland, there's only the memory pressure that we artificially
	 * create (see arc_available_memory()). Don't let arc_c get too
	 * small, because it can cause transactions to be larger than
	 * arc_c, causing arc_tempreserve_space() to fail.
	 */
7724 arc_c_min
= MAX(arc_c_max
/ 2, 2ULL << SPA_MAXBLOCKSHIFT
);
7728 arc_p
= (arc_c
>> 1);
7730 /* Set min to 1/2 of arc_c_min */
7731 arc_meta_min
= 1ULL << SPA_MAXBLOCKSHIFT
;
7732 /* Initialize maximum observed usage to zero */
7735 * Set arc_meta_limit to a percent of arc_c_max with a floor of
7736 * arc_meta_min, and a ceiling of arc_c_max.
7738 percent
= MIN(zfs_arc_meta_limit_percent
, 100);
7739 arc_meta_limit
= MAX(arc_meta_min
, (percent
* arc_c_max
) / 100);
7740 percent
= MIN(zfs_arc_dnode_limit_percent
, 100);
7741 arc_dnode_limit
= (percent
* arc_meta_limit
) / 100;
7743 /* Apply user specified tunings */
7744 arc_tuning_update();
7746 /* if kmem_flags are set, lets try to use less memory */
7747 if (kmem_debugging())
7749 if (arc_c
< arc_c_min
)
7755 list_create(&arc_prune_list
, sizeof (arc_prune_t
),
7756 offsetof(arc_prune_t
, p_node
));
7757 mutex_init(&arc_prune_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
7759 arc_prune_taskq
= taskq_create("arc_prune", max_ncpus
, defclsyspri
,
7760 max_ncpus
, INT_MAX
, TASKQ_PREPOPULATE
| TASKQ_DYNAMIC
);
7762 arc_reclaim_thread_exit
= B_FALSE
;
7764 arc_ksp
= kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED
,
7765 sizeof (arc_stats
) / sizeof (kstat_named_t
), KSTAT_FLAG_VIRTUAL
);
7767 if (arc_ksp
!= NULL
) {
7768 arc_ksp
->ks_data
= &arc_stats
;
7769 arc_ksp
->ks_update
= arc_kstat_update
;
7770 kstat_install(arc_ksp
);
7773 (void) thread_create(NULL
, 0, arc_reclaim_thread
, NULL
, 0, &p0
,
7774 TS_RUN
, defclsyspri
);
	/*
	 * Calculate maximum amount of dirty data per pool.
	 *
	 * If it has been set by a module parameter, take that.
	 * Otherwise, use a percentage of physical memory defined by
	 * zfs_dirty_data_max_percent (default 10%) with a cap at
	 * zfs_dirty_data_max_max (default 4G or 25% of physical memory).
	 */
7787 if (zfs_dirty_data_max_max
== 0)
7788 zfs_dirty_data_max_max
= MIN(4ULL * 1024 * 1024 * 1024,
7789 allmem
* zfs_dirty_data_max_max_percent
/ 100);
7791 if (zfs_dirty_data_max
== 0) {
7792 zfs_dirty_data_max
= allmem
*
7793 zfs_dirty_data_max_percent
/ 100;
7794 zfs_dirty_data_max
= MIN(zfs_dirty_data_max
,
7795 zfs_dirty_data_max_max
);
7805 spl_unregister_shrinker(&arc_shrinker
);
7806 #endif /* _KERNEL */
7808 mutex_enter(&arc_reclaim_lock
);
7809 arc_reclaim_thread_exit
= B_TRUE
;
7811 * The reclaim thread will set arc_reclaim_thread_exit back to
7812 * B_FALSE when it is finished exiting; we're waiting for that.
7814 while (arc_reclaim_thread_exit
) {
7815 cv_signal(&arc_reclaim_thread_cv
);
7816 cv_wait(&arc_reclaim_thread_cv
, &arc_reclaim_lock
);
7818 mutex_exit(&arc_reclaim_lock
);
7820 /* Use B_TRUE to ensure *all* buffers are evicted */
7821 arc_flush(NULL
, B_TRUE
);
7825 if (arc_ksp
!= NULL
) {
7826 kstat_delete(arc_ksp
);
7830 taskq_wait(arc_prune_taskq
);
7831 taskq_destroy(arc_prune_taskq
);
7833 mutex_enter(&arc_prune_mtx
);
7834 while ((p
= list_head(&arc_prune_list
)) != NULL
) {
7835 list_remove(&arc_prune_list
, p
);
7836 zfs_refcount_remove(&p
->p_refcnt
, &arc_prune_list
);
7837 zfs_refcount_destroy(&p
->p_refcnt
);
7838 kmem_free(p
, sizeof (*p
));
7840 mutex_exit(&arc_prune_mtx
);
7842 list_destroy(&arc_prune_list
);
7843 mutex_destroy(&arc_prune_mtx
);
7844 mutex_destroy(&arc_reclaim_lock
);
7845 cv_destroy(&arc_reclaim_thread_cv
);
7846 cv_destroy(&arc_reclaim_waiters_cv
);
7851 ASSERT0(arc_loaned_bytes
);
/*
 * Level 2 ARC
 *
 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
 * It uses dedicated storage devices to hold cached data, which are populated
 * using large infrequent writes. The main role of this cache is to boost
 * the performance of random read workloads. The intended L2ARC devices
 * include short-stroked disks, solid state disks, and other media with
 * substantially faster read latency than disk.
 *
 * [diagram: the ARC sits above the L2ARC; l2arc_feed_thread() copies
 *  buffers from the ARC down to the L2ARC devices, while arc_read()
 *  services reads from the ARC, the vdev caches, the L2ARC devices,
 *  and finally the disks]
 *
 * Read requests are satisfied from the following sources, in order:
 *
 *	1) ARC
 *	2) vdev cache of L2ARC devices
 *	3) L2ARC devices
 *	4) vdev cache of disks
 *	5) disks
 *
 * Some L2ARC device types exhibit extremely slow write performance.
 * To accommodate for this there are some significant differences between
 * the L2ARC and traditional cache design:
 *
 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from
 * the ARC behave as usual, freeing buffers and placing headers on ghost
 * lists. The ARC does not send buffers to the L2ARC during eviction as
 * this would add inflated write latencies for all ARC memory pressure.
 *
 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
 * It does this by periodically scanning buffers from the eviction-end of
 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
 * not already there. It scans until a headroom of buffers is satisfied,
 * which itself is a buffer for ARC eviction. If a compressible buffer is
 * found during scanning and selected for writing to an L2ARC device, we
 * temporarily boost scanning headroom during the next scan cycle to make
 * sure we adapt to compression effects (which might significantly reduce
 * the data volume we write to L2ARC). The thread that does this is
 * l2arc_feed_thread(), illustrated below; example sizes are included to
 * provide a better sense of ratio than this diagram:
 *
 * [diagram: l2arc_feed_thread() scans the eviction ends of the ARC_mfu
 *  and ARC_mru lists (roughly 32 Mbytes of headroom against ~15.9 Gbytes
 *  of list) for buffers that are L2ARC eligible or already on the L2ARC,
 *  and advances the L2ARC device write hand as it copies them out]
 *
 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
 * evicted, then the L2ARC has cached a buffer much sooner than it probably
 * needed to, potentially wasting L2ARC device bandwidth and storage. It is
 * safe to say that this is an uncommon case, since buffers at the end of
 * the ARC lists have moved there due to inactivity.
 *
 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
 * then the L2ARC simply misses copying some buffers. This serves as a
 * pressure valve to prevent heavy read workloads from both stalling the ARC
 * with waits and clogging the L2ARC with writes. This also helps prevent
 * the potential for the L2ARC to churn if it attempts to cache content too
 * quickly, such as during backups of the entire pool.
 *
 * 5. After system boot and before the ARC has filled main memory, there are
 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
 * lists can remain mostly static. Instead of searching from the tail of
 * these lists as pictured, the l2arc_feed_thread() will search from the
 * list heads for eligible buffers, greatly increasing its chance of finding
 * them.
 *
 * The L2ARC device write speed is also boosted during this time so that
 * the L2ARC warms up faster. Since there have been no ARC evictions yet,
 * there are no L2ARC reads, and no fear of degrading read performance
 * through increased writes.
 *
 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
 * the vdev queue can aggregate them into larger and fewer writes. Each
 * device is written to in a rotor fashion, sweeping writes through
 * available space then repeating.
 *
 * 7. The L2ARC does not store dirty content. It never needs to flush
 * write buffers back to disk-based storage.
 *
 * 8. If an ARC buffer is written (and dirtied) which also exists in the
 * L2ARC, the now stale L2ARC buffer is immediately dropped.
 *
 * The performance of the L2ARC can be tweaked by a number of tunables, which
 * may be necessary for different workloads:
 *
 *	l2arc_write_max		max write bytes per interval
 *	l2arc_write_boost	extra write bytes during device warmup
 *	l2arc_noprefetch	skip caching prefetched buffers
 *	l2arc_headroom		number of max device writes to precache
 *	l2arc_headroom_boost	when we find compressed buffers during ARC
 *				scanning, we multiply headroom by this
 *				percentage factor for the next scan cycle,
 *				since more compressed buffers are likely to
 *				be present
 *	l2arc_feed_secs		seconds between L2ARC writing
 *
 * Tunables may be removed or added as future performance improvements are
 * integrated, and also may become zpool properties.
 *
 * There are three key functions that control how the L2ARC warms up:
 *
 *	l2arc_write_eligible()	check if a buffer is eligible to cache
 *	l2arc_write_size()	calculate how much to write
 *	l2arc_write_interval()	calculate sleep delay between writes
 *
 * These three functions determine what to write, how much, and how quickly
 * to send writes.
 */
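/*
 * Illustrative sketch (not part of the ARC): one pass of the feed cycle,
 * reduced to the three functions named above plus l2arc_evict() and
 * l2arc_write_buffers(). The real loop, with device selection, locking
 * and low-memory handling, is l2arc_feed_thread() further below.
 */
#if 0	/* example only */
static clock_t
example_l2arc_feed_once(spa_t *spa, l2arc_dev_t *dev, clock_t began)
{
	uint64_t size, wrote;

	size = l2arc_write_size();		/* how much to try to write */
	l2arc_evict(dev, size, B_FALSE);	/* clear room ahead of the hand */
	wrote = l2arc_write_buffers(spa, dev, size); /* copy eligible bufs */

	/* busy lists write again soon; stale lists idle back */
	return (l2arc_write_interval(began, size, wrote));
}
#endif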
static boolean_t
l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
{
	/*
	 * A buffer is *not* eligible for the L2ARC if it:
	 * 1. belongs to a different spa.
	 * 2. is already cached on the L2ARC.
	 * 3. has an I/O in progress (it may be an incomplete read).
	 * 4. is flagged not eligible (zfs property).
	 */
	if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
	    HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
		return (B_FALSE);

	return (B_TRUE);
}
static uint64_t
l2arc_write_size(void)
{
	uint64_t size;

	/*
	 * Make sure our globals have meaningful values in case the user
	 * altered them.
	 */
	size = l2arc_write_max;
	if (size == 0) {
		cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
		    "be greater than zero, resetting it to the default (%d)",
		    L2ARC_WRITE_SIZE);
		size = l2arc_write_max = L2ARC_WRITE_SIZE;
	}

	if (arc_warm == B_FALSE)
		size += l2arc_write_boost;

	return (size);
}
static clock_t
l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
{
	clock_t interval, next, now;

	/*
	 * If the ARC lists are busy, increase our write rate; if the
	 * lists are stale, idle back. This is achieved by checking
	 * how much we previously wrote - if it was more than half of
	 * what we wanted, schedule the next write much sooner.
	 */
	if (l2arc_feed_again && wrote > (wanted / 2))
		interval = (hz * l2arc_feed_min_ms) / 1000;
	else
		interval = hz * l2arc_feed_secs;

	now = ddi_get_lbolt();
	next = MAX(now, MIN(now + interval, began + interval));

	return (next);
}
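/*
 * Illustrative worked example (assumed default tunables l2arc_feed_secs=1,
 * l2arc_feed_min_ms=200, l2arc_feed_again enabled, and an assumed hz=1000):
 * if the previous pass wrote more than half of what it wanted, the next
 * wakeup is ~200 ticks away; otherwise ~1000 ticks. The MAX/MIN clamp
 * keeps the wakeup no earlier than "now" and no later than one interval
 * past the start of the previous pass.
 */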
8063 * Cycle through L2ARC devices. This is how L2ARC load balances.
8064 * If a device is returned, this also returns holding the spa config lock.
8066 static l2arc_dev_t
*
8067 l2arc_dev_get_next(void)
8069 l2arc_dev_t
*first
, *next
= NULL
;
8072 * Lock out the removal of spas (spa_namespace_lock), then removal
8073 * of cache devices (l2arc_dev_mtx). Once a device has been selected,
8074 * both locks will be dropped and a spa config lock held instead.
8076 mutex_enter(&spa_namespace_lock
);
8077 mutex_enter(&l2arc_dev_mtx
);
8079 /* if there are no vdevs, there is nothing to do */
8080 if (l2arc_ndev
== 0)
8084 next
= l2arc_dev_last
;
8086 /* loop around the list looking for a non-faulted vdev */
8088 next
= list_head(l2arc_dev_list
);
8090 next
= list_next(l2arc_dev_list
, next
);
8092 next
= list_head(l2arc_dev_list
);
8095 /* if we have come back to the start, bail out */
8098 else if (next
== first
)
8101 } while (vdev_is_dead(next
->l2ad_vdev
));
8103 /* if we were unable to find any usable vdevs, return NULL */
8104 if (vdev_is_dead(next
->l2ad_vdev
))
8107 l2arc_dev_last
= next
;
8110 mutex_exit(&l2arc_dev_mtx
);
8113 * Grab the config lock to prevent the 'next' device from being
8114 * removed while we are writing to it.
8117 spa_config_enter(next
->l2ad_spa
, SCL_L2ARC
, next
, RW_READER
);
8118 mutex_exit(&spa_namespace_lock
);
8124 * Free buffers that were tagged for destruction.
8127 l2arc_do_free_on_write(void)
8130 l2arc_data_free_t
*df
, *df_prev
;
8132 mutex_enter(&l2arc_free_on_write_mtx
);
8133 buflist
= l2arc_free_on_write
;
8135 for (df
= list_tail(buflist
); df
; df
= df_prev
) {
8136 df_prev
= list_prev(buflist
, df
);
8137 ASSERT3P(df
->l2df_abd
, !=, NULL
);
8138 abd_free(df
->l2df_abd
);
8139 list_remove(buflist
, df
);
8140 kmem_free(df
, sizeof (l2arc_data_free_t
));
8143 mutex_exit(&l2arc_free_on_write_mtx
);
8147 * A write to a cache device has completed. Update all headers to allow
8148 * reads from these buffers to begin.
8151 l2arc_write_done(zio_t
*zio
)
8153 l2arc_write_callback_t
*cb
;
8156 arc_buf_hdr_t
*head
, *hdr
, *hdr_prev
;
8157 kmutex_t
*hash_lock
;
8158 int64_t bytes_dropped
= 0;
8160 cb
= zio
->io_private
;
8161 ASSERT3P(cb
, !=, NULL
);
8162 dev
= cb
->l2wcb_dev
;
8163 ASSERT3P(dev
, !=, NULL
);
8164 head
= cb
->l2wcb_head
;
8165 ASSERT3P(head
, !=, NULL
);
8166 buflist
= &dev
->l2ad_buflist
;
8167 ASSERT3P(buflist
, !=, NULL
);
8168 DTRACE_PROBE2(l2arc__iodone
, zio_t
*, zio
,
8169 l2arc_write_callback_t
*, cb
);
8171 if (zio
->io_error
!= 0)
8172 ARCSTAT_BUMP(arcstat_l2_writes_error
);
8175 * All writes completed, or an error was hit.
8178 mutex_enter(&dev
->l2ad_mtx
);
8179 for (hdr
= list_prev(buflist
, head
); hdr
; hdr
= hdr_prev
) {
8180 hdr_prev
= list_prev(buflist
, hdr
);
8182 hash_lock
= HDR_LOCK(hdr
);
8185 * We cannot use mutex_enter or else we can deadlock
8186 * with l2arc_write_buffers (due to swapping the order
8187 * the hash lock and l2ad_mtx are taken).
8189 if (!mutex_tryenter(hash_lock
)) {
8191 * Missed the hash lock. We must retry so we
8192 * don't leave the ARC_FLAG_L2_WRITING bit set.
8194 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry
);
8197 * We don't want to rescan the headers we've
8198 * already marked as having been written out, so
8199 * we reinsert the head node so we can pick up
8200 * where we left off.
8202 list_remove(buflist
, head
);
8203 list_insert_after(buflist
, hdr
, head
);
8205 mutex_exit(&dev
->l2ad_mtx
);
8208 * We wait for the hash lock to become available
8209 * to try and prevent busy waiting, and increase
8210 * the chance we'll be able to acquire the lock
8211 * the next time around.
8213 mutex_enter(hash_lock
);
8214 mutex_exit(hash_lock
);
8219 * We could not have been moved into the arc_l2c_only
8220 * state while in-flight due to our ARC_FLAG_L2_WRITING
8221 * bit being set. Let's just ensure that's being enforced.
8223 ASSERT(HDR_HAS_L1HDR(hdr
));
		 * Skipped - drop L2ARC entry and mark the header as no
		 * longer L2 eligible.
8229 if (zio
->io_error
!= 0) {
8231 * Error - drop L2ARC entry.
8233 list_remove(buflist
, hdr
);
8234 arc_hdr_clear_flags(hdr
, ARC_FLAG_HAS_L2HDR
);
8236 ARCSTAT_INCR(arcstat_l2_psize
, -arc_hdr_size(hdr
));
8237 ARCSTAT_INCR(arcstat_l2_lsize
, -HDR_GET_LSIZE(hdr
));
8239 bytes_dropped
+= arc_hdr_size(hdr
);
8240 (void) zfs_refcount_remove_many(&dev
->l2ad_alloc
,
8241 arc_hdr_size(hdr
), hdr
);
8245 * Allow ARC to begin reads and ghost list evictions to
8248 arc_hdr_clear_flags(hdr
, ARC_FLAG_L2_WRITING
);
8250 mutex_exit(hash_lock
);
8253 atomic_inc_64(&l2arc_writes_done
);
8254 list_remove(buflist
, head
);
8255 ASSERT(!HDR_HAS_L1HDR(head
));
8256 kmem_cache_free(hdr_l2only_cache
, head
);
8257 mutex_exit(&dev
->l2ad_mtx
);
8259 vdev_space_update(dev
->l2ad_vdev
, -bytes_dropped
, 0, 0);
8261 l2arc_do_free_on_write();
8263 kmem_free(cb
, sizeof (l2arc_write_callback_t
));
8267 l2arc_untransform(zio_t
*zio
, l2arc_read_callback_t
*cb
)
8270 spa_t
*spa
= zio
->io_spa
;
8271 arc_buf_hdr_t
*hdr
= cb
->l2rcb_hdr
;
8272 blkptr_t
*bp
= zio
->io_bp
;
8273 uint8_t salt
[ZIO_DATA_SALT_LEN
];
8274 uint8_t iv
[ZIO_DATA_IV_LEN
];
8275 uint8_t mac
[ZIO_DATA_MAC_LEN
];
8276 boolean_t no_crypt
= B_FALSE
;
	 * ZIL data is never written to the L2ARC, so we don't need
	 * special handling for its unique MAC storage.
8282 ASSERT3U(BP_GET_TYPE(bp
), !=, DMU_OT_INTENT_LOG
);
8283 ASSERT(MUTEX_HELD(HDR_LOCK(hdr
)));
8284 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
8287 * If the data was encrypted, decrypt it now. Note that
8288 * we must check the bp here and not the hdr, since the
8289 * hdr does not have its encryption parameters updated
8290 * until arc_read_done().
8292 if (BP_IS_ENCRYPTED(bp
)) {
8293 abd_t
*eabd
= arc_get_data_abd(hdr
, arc_hdr_size(hdr
), hdr
);
8295 zio_crypt_decode_params_bp(bp
, salt
, iv
);
8296 zio_crypt_decode_mac_bp(bp
, mac
);
8298 ret
= spa_do_crypt_abd(B_FALSE
, spa
, &cb
->l2rcb_zb
,
8299 BP_GET_TYPE(bp
), BP_GET_DEDUP(bp
), BP_SHOULD_BYTESWAP(bp
),
8300 salt
, iv
, mac
, HDR_GET_PSIZE(hdr
), eabd
,
8301 hdr
->b_l1hdr
.b_pabd
, &no_crypt
);
8303 arc_free_data_abd(hdr
, eabd
, arc_hdr_size(hdr
), hdr
);
8308 * If we actually performed decryption, replace b_pabd
8309 * with the decrypted data. Otherwise we can just throw
8310 * our decryption buffer away.
8313 arc_free_data_abd(hdr
, hdr
->b_l1hdr
.b_pabd
,
8314 arc_hdr_size(hdr
), hdr
);
8315 hdr
->b_l1hdr
.b_pabd
= eabd
;
8318 arc_free_data_abd(hdr
, eabd
, arc_hdr_size(hdr
), hdr
);
8323 * If the L2ARC block was compressed, but ARC compression
8324 * is disabled we decompress the data into a new buffer and
8325 * replace the existing data.
8327 if (HDR_GET_COMPRESS(hdr
) != ZIO_COMPRESS_OFF
&&
8328 !HDR_COMPRESSION_ENABLED(hdr
)) {
8329 abd_t
*cabd
= arc_get_data_abd(hdr
, arc_hdr_size(hdr
), hdr
);
8330 void *tmp
= abd_borrow_buf(cabd
, arc_hdr_size(hdr
));
8332 ret
= zio_decompress_data(HDR_GET_COMPRESS(hdr
),
8333 hdr
->b_l1hdr
.b_pabd
, tmp
, HDR_GET_PSIZE(hdr
),
8334 HDR_GET_LSIZE(hdr
));
8336 abd_return_buf_copy(cabd
, tmp
, arc_hdr_size(hdr
));
8337 arc_free_data_abd(hdr
, cabd
, arc_hdr_size(hdr
), hdr
);
8341 abd_return_buf_copy(cabd
, tmp
, arc_hdr_size(hdr
));
8342 arc_free_data_abd(hdr
, hdr
->b_l1hdr
.b_pabd
,
8343 arc_hdr_size(hdr
), hdr
);
8344 hdr
->b_l1hdr
.b_pabd
= cabd
;
8346 zio
->io_size
= HDR_GET_LSIZE(hdr
);
8357 * A read to a cache device completed. Validate buffer contents before
8358 * handing over to the regular ARC routines.
8361 l2arc_read_done(zio_t
*zio
)
8364 l2arc_read_callback_t
*cb
= zio
->io_private
;
8366 kmutex_t
*hash_lock
;
8367 boolean_t valid_cksum
;
8368 boolean_t using_rdata
= (BP_IS_ENCRYPTED(&cb
->l2rcb_bp
) &&
8369 (cb
->l2rcb_flags
& ZIO_FLAG_RAW_ENCRYPT
));
8371 ASSERT3P(zio
->io_vd
, !=, NULL
);
8372 ASSERT(zio
->io_flags
& ZIO_FLAG_DONT_PROPAGATE
);
8374 spa_config_exit(zio
->io_spa
, SCL_L2ARC
, zio
->io_vd
);
8376 ASSERT3P(cb
, !=, NULL
);
8377 hdr
= cb
->l2rcb_hdr
;
8378 ASSERT3P(hdr
, !=, NULL
);
8380 hash_lock
= HDR_LOCK(hdr
);
8381 mutex_enter(hash_lock
);
8382 ASSERT3P(hash_lock
, ==, HDR_LOCK(hdr
));
8385 * If the data was read into a temporary buffer,
8386 * move it and free the buffer.
8388 if (cb
->l2rcb_abd
!= NULL
) {
8389 ASSERT3U(arc_hdr_size(hdr
), <, zio
->io_size
);
8390 if (zio
->io_error
== 0) {
8392 abd_copy(hdr
->b_crypt_hdr
.b_rabd
,
8393 cb
->l2rcb_abd
, arc_hdr_size(hdr
));
8395 abd_copy(hdr
->b_l1hdr
.b_pabd
,
8396 cb
->l2rcb_abd
, arc_hdr_size(hdr
));
8401 * The following must be done regardless of whether
8402 * there was an error:
8403 * - free the temporary buffer
8404 * - point zio to the real ARC buffer
8405 * - set zio size accordingly
8406 * These are required because zio is either re-used for
8407 * an I/O of the block in the case of the error
8408 * or the zio is passed to arc_read_done() and it
8411 abd_free(cb
->l2rcb_abd
);
8412 zio
->io_size
= zio
->io_orig_size
= arc_hdr_size(hdr
);
8415 ASSERT(HDR_HAS_RABD(hdr
));
8416 zio
->io_abd
= zio
->io_orig_abd
=
8417 hdr
->b_crypt_hdr
.b_rabd
;
8419 ASSERT3P(hdr
->b_l1hdr
.b_pabd
, !=, NULL
);
8420 zio
->io_abd
= zio
->io_orig_abd
= hdr
->b_l1hdr
.b_pabd
;
8424 ASSERT3P(zio
->io_abd
, !=, NULL
);
8427 * Check this survived the L2ARC journey.
8429 ASSERT(zio
->io_abd
== hdr
->b_l1hdr
.b_pabd
||
8430 (HDR_HAS_RABD(hdr
) && zio
->io_abd
== hdr
->b_crypt_hdr
.b_rabd
));
8431 zio
->io_bp_copy
= cb
->l2rcb_bp
; /* XXX fix in L2ARC 2.0 */
8432 zio
->io_bp
= &zio
->io_bp_copy
; /* XXX fix in L2ARC 2.0 */
8434 valid_cksum
= arc_cksum_is_equal(hdr
, zio
);
8437 * b_rabd will always match the data as it exists on disk if it is
8438 * being used. Therefore if we are reading into b_rabd we do not
8439 * attempt to untransform the data.
8441 if (valid_cksum
&& !using_rdata
)
8442 tfm_error
= l2arc_untransform(zio
, cb
);
8444 if (valid_cksum
&& tfm_error
== 0 && zio
->io_error
== 0 &&
8445 !HDR_L2_EVICTED(hdr
)) {
8446 mutex_exit(hash_lock
);
8447 zio
->io_private
= hdr
;
8450 mutex_exit(hash_lock
);
8452 * Buffer didn't survive caching. Increment stats and
8453 * reissue to the original storage device.
8455 if (zio
->io_error
!= 0) {
8456 ARCSTAT_BUMP(arcstat_l2_io_error
);
8458 zio
->io_error
= SET_ERROR(EIO
);
8460 if (!valid_cksum
|| tfm_error
!= 0)
8461 ARCSTAT_BUMP(arcstat_l2_cksum_bad
);
8464 * If there's no waiter, issue an async i/o to the primary
8465 * storage now. If there *is* a waiter, the caller must
8466 * issue the i/o in a context where it's OK to block.
8468 if (zio
->io_waiter
== NULL
) {
8469 zio_t
*pio
= zio_unique_parent(zio
);
8470 void *abd
= (using_rdata
) ?
8471 hdr
->b_crypt_hdr
.b_rabd
: hdr
->b_l1hdr
.b_pabd
;
8473 ASSERT(!pio
|| pio
->io_child_type
== ZIO_CHILD_LOGICAL
);
8475 zio_nowait(zio_read(pio
, zio
->io_spa
, zio
->io_bp
,
8476 abd
, zio
->io_size
, arc_read_done
,
8477 hdr
, zio
->io_priority
, cb
->l2rcb_flags
,
8482 kmem_free(cb
, sizeof (l2arc_read_callback_t
));
8486 * This is the list priority from which the L2ARC will search for pages to
8487 * cache. This is used within loops (0..3) to cycle through lists in the
8488 * desired order. This order can have a significant effect on cache
8491 * Currently the metadata lists are hit first, MFU then MRU, followed by
8492 * the data lists. This function returns a locked list, and also returns
8495 static multilist_sublist_t
*
8496 l2arc_sublist_lock(int list_num
)
8498 multilist_t
*ml
= NULL
;
8501 ASSERT(list_num
>= 0 && list_num
< L2ARC_FEED_TYPES
);
8505 ml
= arc_mfu
->arcs_list
[ARC_BUFC_METADATA
];
8508 ml
= arc_mru
->arcs_list
[ARC_BUFC_METADATA
];
8511 ml
= arc_mfu
->arcs_list
[ARC_BUFC_DATA
];
8514 ml
= arc_mru
->arcs_list
[ARC_BUFC_DATA
];
8521 * Return a randomly-selected sublist. This is acceptable
8522 * because the caller feeds only a little bit of data for each
8523 * call (8MB). Subsequent calls will result in different
8524 * sublists being selected.
8526 idx
= multilist_get_random_index(ml
);
8527 return (multilist_sublist_lock(ml
, idx
));
8531 * Evict buffers from the device write hand to the distance specified in
8532 * bytes. This distance may span populated buffers, it may span nothing.
8533 * This is clearing a region on the L2ARC device ready for writing.
8534 * If the 'all' boolean is set, every buffer is evicted.
8537 l2arc_evict(l2arc_dev_t
*dev
, uint64_t distance
, boolean_t all
)
8540 arc_buf_hdr_t
*hdr
, *hdr_prev
;
8541 kmutex_t
*hash_lock
;
8544 buflist
= &dev
->l2ad_buflist
;
8546 if (!all
&& dev
->l2ad_first
) {
8548 * This is the first sweep through the device. There is
8554 if (dev
->l2ad_hand
>= (dev
->l2ad_end
- (2 * distance
))) {
8556 * When nearing the end of the device, evict to the end
8557 * before the device write hand jumps to the start.
8559 taddr
= dev
->l2ad_end
;
8561 taddr
= dev
->l2ad_hand
+ distance
;
8563 DTRACE_PROBE4(l2arc__evict
, l2arc_dev_t
*, dev
, list_t
*, buflist
,
8564 uint64_t, taddr
, boolean_t
, all
);
8567 mutex_enter(&dev
->l2ad_mtx
);
8568 for (hdr
= list_tail(buflist
); hdr
; hdr
= hdr_prev
) {
8569 hdr_prev
= list_prev(buflist
, hdr
);
8571 hash_lock
= HDR_LOCK(hdr
);
8574 * We cannot use mutex_enter or else we can deadlock
8575 * with l2arc_write_buffers (due to swapping the order
8576 * the hash lock and l2ad_mtx are taken).
8578 if (!mutex_tryenter(hash_lock
)) {
8580 * Missed the hash lock. Retry.
8582 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry
);
8583 mutex_exit(&dev
->l2ad_mtx
);
8584 mutex_enter(hash_lock
);
8585 mutex_exit(hash_lock
);
8590 * A header can't be on this list if it doesn't have L2 header.
8592 ASSERT(HDR_HAS_L2HDR(hdr
));
8594 /* Ensure this header has finished being written. */
8595 ASSERT(!HDR_L2_WRITING(hdr
));
8596 ASSERT(!HDR_L2_WRITE_HEAD(hdr
));
8598 if (!all
&& (hdr
->b_l2hdr
.b_daddr
>= taddr
||
8599 hdr
->b_l2hdr
.b_daddr
< dev
->l2ad_hand
)) {
8601 * We've evicted to the target address,
8602 * or the end of the device.
8604 mutex_exit(hash_lock
);
8608 if (!HDR_HAS_L1HDR(hdr
)) {
8609 ASSERT(!HDR_L2_READING(hdr
));
8611 * This doesn't exist in the ARC. Destroy.
8612 * arc_hdr_destroy() will call list_remove()
8613 * and decrement arcstat_l2_lsize.
8615 arc_change_state(arc_anon
, hdr
, hash_lock
);
8616 arc_hdr_destroy(hdr
);
8618 ASSERT(hdr
->b_l1hdr
.b_state
!= arc_l2c_only
);
8619 ARCSTAT_BUMP(arcstat_l2_evict_l1cached
);
8621 * Invalidate issued or about to be issued
8622 * reads, since we may be about to write
8623 * over this location.
8625 if (HDR_L2_READING(hdr
)) {
8626 ARCSTAT_BUMP(arcstat_l2_evict_reading
);
8627 arc_hdr_set_flags(hdr
, ARC_FLAG_L2_EVICTED
);
8630 arc_hdr_l2hdr_destroy(hdr
);
8632 mutex_exit(hash_lock
);
8634 mutex_exit(&dev
->l2ad_mtx
);
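/*
 * Illustrative worked example for the taddr computation above
 * (hypothetical numbers): with l2ad_start=4M, l2ad_end=1028M,
 * l2ad_hand=1000M and distance=32M, the hand is within 2*distance of the
 * end, so taddr is set to l2ad_end (1028M) and everything from the hand
 * to the end of the device is evicted before the write hand wraps back to
 * l2ad_start. Otherwise taddr would simply be l2ad_hand + distance.
 */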
8638 * Handle any abd transforms that might be required for writing to the L2ARC.
8639 * If successful, this function will always return an abd with the data
8640 * transformed as it is on disk in a new abd of asize bytes.
8643 l2arc_apply_transforms(spa_t
*spa
, arc_buf_hdr_t
*hdr
, uint64_t asize
,
8648 abd_t
*cabd
= NULL
, *eabd
= NULL
, *to_write
= hdr
->b_l1hdr
.b_pabd
;
8649 enum zio_compress compress
= HDR_GET_COMPRESS(hdr
);
8650 uint64_t psize
= HDR_GET_PSIZE(hdr
);
8651 uint64_t size
= arc_hdr_size(hdr
);
8652 boolean_t ismd
= HDR_ISTYPE_METADATA(hdr
);
8653 boolean_t bswap
= (hdr
->b_l1hdr
.b_byteswap
!= DMU_BSWAP_NUMFUNCS
);
8654 dsl_crypto_key_t
*dck
= NULL
;
8655 uint8_t mac
[ZIO_DATA_MAC_LEN
] = { 0 };
8656 boolean_t no_crypt
= B_FALSE
;
8658 ASSERT((HDR_GET_COMPRESS(hdr
) != ZIO_COMPRESS_OFF
&&
8659 !HDR_COMPRESSION_ENABLED(hdr
)) ||
8660 HDR_ENCRYPTED(hdr
) || HDR_SHARED_DATA(hdr
) || psize
!= asize
);
8661 ASSERT3U(psize
, <=, asize
);
	 * If this data simply needs its own buffer, we allocate it and
	 * copy the data. This may be done to eliminate a dependency on a
	 * shared buffer or to reallocate the buffer to match asize.
8668 if (HDR_HAS_RABD(hdr
) && asize
!= psize
) {
8669 ASSERT3U(asize
, >=, psize
);
8670 to_write
= abd_alloc_for_io(asize
, ismd
);
8671 abd_copy(to_write
, hdr
->b_crypt_hdr
.b_rabd
, psize
);
8673 abd_zero_off(to_write
, psize
, asize
- psize
);
8677 if ((compress
== ZIO_COMPRESS_OFF
|| HDR_COMPRESSION_ENABLED(hdr
)) &&
8678 !HDR_ENCRYPTED(hdr
)) {
8679 ASSERT3U(size
, ==, psize
);
8680 to_write
= abd_alloc_for_io(asize
, ismd
);
8681 abd_copy(to_write
, hdr
->b_l1hdr
.b_pabd
, size
);
8683 abd_zero_off(to_write
, size
, asize
- size
);
8687 if (compress
!= ZIO_COMPRESS_OFF
&& !HDR_COMPRESSION_ENABLED(hdr
)) {
8688 cabd
= abd_alloc_for_io(asize
, ismd
);
8689 tmp
= abd_borrow_buf(cabd
, asize
);
8691 psize
= zio_compress_data(compress
, to_write
, tmp
, size
);
8692 ASSERT3U(psize
, <=, HDR_GET_PSIZE(hdr
));
8694 bzero((char *)tmp
+ psize
, asize
- psize
);
8695 psize
= HDR_GET_PSIZE(hdr
);
8696 abd_return_buf_copy(cabd
, tmp
, asize
);
8700 if (HDR_ENCRYPTED(hdr
)) {
8701 eabd
= abd_alloc_for_io(asize
, ismd
);
8704 * If the dataset was disowned before the buffer
8705 * made it to this point, the key to re-encrypt
8706 * it won't be available. In this case we simply
8707 * won't write the buffer to the L2ARC.
8709 ret
= spa_keystore_lookup_key(spa
, hdr
->b_crypt_hdr
.b_dsobj
,
8714 ret
= zio_do_crypt_abd(B_TRUE
, &dck
->dck_key
,
8715 hdr
->b_crypt_hdr
.b_ot
, bswap
, hdr
->b_crypt_hdr
.b_salt
,
8716 hdr
->b_crypt_hdr
.b_iv
, mac
, psize
, to_write
, eabd
,
8722 abd_copy(eabd
, to_write
, psize
);
8725 abd_zero_off(eabd
, psize
, asize
- psize
);
8727 /* assert that the MAC we got here matches the one we saved */
8728 ASSERT0(bcmp(mac
, hdr
->b_crypt_hdr
.b_mac
, ZIO_DATA_MAC_LEN
));
8729 spa_keystore_dsl_key_rele(spa
, dck
, FTAG
);
8731 if (to_write
== cabd
)
8738 ASSERT3P(to_write
, !=, hdr
->b_l1hdr
.b_pabd
);
8739 *abd_out
= to_write
;
8744 spa_keystore_dsl_key_rele(spa
, dck
, FTAG
);
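/*
 * Illustrative sketch (not part of the ARC): the caller contract for
 * l2arc_apply_transforms() as used by l2arc_write_buffers() below - on
 * success *abd_out holds a freshly allocated abd of exactly asize bytes,
 * already compressed/encrypted/zero-padded as it must appear on the L2ARC
 * device, and the caller queues it for free-on-write once the zio is
 * issued. The helper name below is hypothetical.
 */
#if 0	/* example only */
static int
example_prepare_l2arc_payload(spa_t *spa, arc_buf_hdr_t *hdr,
    uint64_t asize, abd_t **to_write)
{
	int err;

	err = l2arc_apply_transforms(spa, hdr, asize, to_write);
	if (err != 0)
		return (err);	/* e.g. missing encryption key */

	/* ensure the transformed copy is reclaimed after the write */
	l2arc_free_abd_on_write(*to_write, asize, arc_buf_type(hdr));
	return (0);
}
#endif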
8755 * Find and write ARC buffers to the L2ARC device.
8757 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
8758 * for reading until they have completed writing.
8759 * The headroom_boost is an in-out parameter used to maintain headroom boost
8760 * state between calls to this function.
8762 * Returns the number of bytes actually written (which may be smaller than
8763 * the delta by which the device hand has changed due to alignment).
8766 l2arc_write_buffers(spa_t
*spa
, l2arc_dev_t
*dev
, uint64_t target_sz
)
8768 arc_buf_hdr_t
*hdr
, *hdr_prev
, *head
;
8769 uint64_t write_asize
, write_psize
, write_lsize
, headroom
;
8771 l2arc_write_callback_t
*cb
;
8773 uint64_t guid
= spa_load_guid(spa
);
8775 ASSERT3P(dev
->l2ad_vdev
, !=, NULL
);
8778 write_lsize
= write_asize
= write_psize
= 0;
8780 head
= kmem_cache_alloc(hdr_l2only_cache
, KM_PUSHPAGE
);
8781 arc_hdr_set_flags(head
, ARC_FLAG_L2_WRITE_HEAD
| ARC_FLAG_HAS_L2HDR
);
8784 * Copy buffers for L2ARC writing.
8786 for (int try = 0; try < L2ARC_FEED_TYPES
; try++) {
8787 multilist_sublist_t
*mls
= l2arc_sublist_lock(try);
8788 uint64_t passed_sz
= 0;
8790 VERIFY3P(mls
, !=, NULL
);
8793 * L2ARC fast warmup.
8795 * Until the ARC is warm and starts to evict, read from the
8796 * head of the ARC lists rather than the tail.
8798 if (arc_warm
== B_FALSE
)
8799 hdr
= multilist_sublist_head(mls
);
8801 hdr
= multilist_sublist_tail(mls
);
8803 headroom
= target_sz
* l2arc_headroom
;
8804 if (zfs_compressed_arc_enabled
)
8805 headroom
= (headroom
* l2arc_headroom_boost
) / 100;
8807 for (; hdr
; hdr
= hdr_prev
) {
8808 kmutex_t
*hash_lock
;
8809 abd_t
*to_write
= NULL
;
8811 if (arc_warm
== B_FALSE
)
8812 hdr_prev
= multilist_sublist_next(mls
, hdr
);
8814 hdr_prev
= multilist_sublist_prev(mls
, hdr
);
8816 hash_lock
= HDR_LOCK(hdr
);
8817 if (!mutex_tryenter(hash_lock
)) {
8819 * Skip this buffer rather than waiting.
8824 passed_sz
+= HDR_GET_LSIZE(hdr
);
8825 if (passed_sz
> headroom
) {
8829 mutex_exit(hash_lock
);
8833 if (!l2arc_write_eligible(guid
, hdr
)) {
8834 mutex_exit(hash_lock
);
8839 * We rely on the L1 portion of the header below, so
8840 * it's invalid for this header to have been evicted out
8841 * of the ghost cache, prior to being written out. The
8842 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
8844 ASSERT(HDR_HAS_L1HDR(hdr
));
8846 ASSERT3U(HDR_GET_PSIZE(hdr
), >, 0);
8847 ASSERT3U(arc_hdr_size(hdr
), >, 0);
8848 ASSERT(hdr
->b_l1hdr
.b_pabd
!= NULL
||
8850 uint64_t psize
= HDR_GET_PSIZE(hdr
);
8851 uint64_t asize
= vdev_psize_to_asize(dev
->l2ad_vdev
,
8854 if ((write_asize
+ asize
) > target_sz
) {
8856 mutex_exit(hash_lock
);
8861 * We rely on the L1 portion of the header below, so
8862 * it's invalid for this header to have been evicted out
8863 * of the ghost cache, prior to being written out. The
8864 * ARC_FLAG_L2_WRITING bit ensures this won't happen.
8866 arc_hdr_set_flags(hdr
, ARC_FLAG_L2_WRITING
);
8867 ASSERT(HDR_HAS_L1HDR(hdr
));
8869 ASSERT3U(HDR_GET_PSIZE(hdr
), >, 0);
8870 ASSERT(hdr
->b_l1hdr
.b_pabd
!= NULL
||
8872 ASSERT3U(arc_hdr_size(hdr
), >, 0);
8875 * If this header has b_rabd, we can use this since it
8876 * must always match the data exactly as it exists on
8877 * disk. Otherwise, the L2ARC can normally use the
8878 * hdr's data, but if we're sharing data between the
8879 * hdr and one of its bufs, L2ARC needs its own copy of
8880 * the data so that the ZIO below can't race with the
8881 * buf consumer. To ensure that this copy will be
8882 * available for the lifetime of the ZIO and be cleaned
8883 * up afterwards, we add it to the l2arc_free_on_write
8884 * queue. If we need to apply any transforms to the
8885 * data (compression, encryption) we will also need the
8888 if (HDR_HAS_RABD(hdr
) && psize
== asize
) {
8889 to_write
= hdr
->b_crypt_hdr
.b_rabd
;
8890 } else if ((HDR_COMPRESSION_ENABLED(hdr
) ||
8891 HDR_GET_COMPRESS(hdr
) == ZIO_COMPRESS_OFF
) &&
8892 !HDR_ENCRYPTED(hdr
) && !HDR_SHARED_DATA(hdr
) &&
8894 to_write
= hdr
->b_l1hdr
.b_pabd
;
8897 arc_buf_contents_t type
= arc_buf_type(hdr
);
8899 ret
= l2arc_apply_transforms(spa
, hdr
, asize
,
8902 arc_hdr_clear_flags(hdr
,
8903 ARC_FLAG_L2_WRITING
);
8904 mutex_exit(hash_lock
);
8908 l2arc_free_abd_on_write(to_write
, asize
, type
);
8913 * Insert a dummy header on the buflist so
8914 * l2arc_write_done() can find where the
8915 * write buffers begin without searching.
8917 mutex_enter(&dev
->l2ad_mtx
);
8918 list_insert_head(&dev
->l2ad_buflist
, head
);
8919 mutex_exit(&dev
->l2ad_mtx
);
8922 sizeof (l2arc_write_callback_t
), KM_SLEEP
);
8923 cb
->l2wcb_dev
= dev
;
8924 cb
->l2wcb_head
= head
;
8925 pio
= zio_root(spa
, l2arc_write_done
, cb
,
8929 hdr
->b_l2hdr
.b_dev
= dev
;
8930 hdr
->b_l2hdr
.b_hits
= 0;
8932 hdr
->b_l2hdr
.b_daddr
= dev
->l2ad_hand
;
8933 arc_hdr_set_flags(hdr
, ARC_FLAG_HAS_L2HDR
);
8935 mutex_enter(&dev
->l2ad_mtx
);
8936 list_insert_head(&dev
->l2ad_buflist
, hdr
);
8937 mutex_exit(&dev
->l2ad_mtx
);
8939 (void) zfs_refcount_add_many(&dev
->l2ad_alloc
,
8940 arc_hdr_size(hdr
), hdr
);
8942 wzio
= zio_write_phys(pio
, dev
->l2ad_vdev
,
8943 hdr
->b_l2hdr
.b_daddr
, asize
, to_write
,
8944 ZIO_CHECKSUM_OFF
, NULL
, hdr
,
8945 ZIO_PRIORITY_ASYNC_WRITE
,
8946 ZIO_FLAG_CANFAIL
, B_FALSE
);
8948 write_lsize
+= HDR_GET_LSIZE(hdr
);
8949 DTRACE_PROBE2(l2arc__write
, vdev_t
*, dev
->l2ad_vdev
,
8952 write_psize
+= psize
;
8953 write_asize
+= asize
;
8954 dev
->l2ad_hand
+= asize
;
8956 mutex_exit(hash_lock
);
8958 (void) zio_nowait(wzio
);
8961 multilist_sublist_unlock(mls
);
8967 /* No buffers selected for writing? */
8969 ASSERT0(write_lsize
);
8970 ASSERT(!HDR_HAS_L1HDR(head
));
8971 kmem_cache_free(hdr_l2only_cache
, head
);
8975 ASSERT3U(write_asize
, <=, target_sz
);
8976 ARCSTAT_BUMP(arcstat_l2_writes_sent
);
8977 ARCSTAT_INCR(arcstat_l2_write_bytes
, write_psize
);
8978 ARCSTAT_INCR(arcstat_l2_lsize
, write_lsize
);
8979 ARCSTAT_INCR(arcstat_l2_psize
, write_psize
);
8980 vdev_space_update(dev
->l2ad_vdev
, write_psize
, 0, 0);
8983 * Bump device hand to the device start if it is approaching the end.
8984 * l2arc_evict() will already have evicted ahead for this case.
8986 if (dev
->l2ad_hand
>= (dev
->l2ad_end
- target_sz
)) {
8987 dev
->l2ad_hand
= dev
->l2ad_start
;
8988 dev
->l2ad_first
= B_FALSE
;
8991 dev
->l2ad_writing
= B_TRUE
;
8992 (void) zio_wait(pio
);
8993 dev
->l2ad_writing
= B_FALSE
;
8995 return (write_asize
);
8999 * This thread feeds the L2ARC at regular intervals. This is the beating
9000 * heart of the L2ARC.
9004 l2arc_feed_thread(void *unused
)
9009 uint64_t size
, wrote
;
9010 clock_t begin
, next
= ddi_get_lbolt();
9011 fstrans_cookie_t cookie
;
9013 CALLB_CPR_INIT(&cpr
, &l2arc_feed_thr_lock
, callb_generic_cpr
, FTAG
);
9015 mutex_enter(&l2arc_feed_thr_lock
);
9017 cookie
= spl_fstrans_mark();
9018 while (l2arc_thread_exit
== 0) {
9019 CALLB_CPR_SAFE_BEGIN(&cpr
);
9020 (void) cv_timedwait_sig(&l2arc_feed_thr_cv
,
9021 &l2arc_feed_thr_lock
, next
);
9022 CALLB_CPR_SAFE_END(&cpr
, &l2arc_feed_thr_lock
);
9023 next
= ddi_get_lbolt() + hz
;
9026 * Quick check for L2ARC devices.
9028 mutex_enter(&l2arc_dev_mtx
);
9029 if (l2arc_ndev
== 0) {
9030 mutex_exit(&l2arc_dev_mtx
);
9033 mutex_exit(&l2arc_dev_mtx
);
9034 begin
= ddi_get_lbolt();
9037 * This selects the next l2arc device to write to, and in
9038 * doing so the next spa to feed from: dev->l2ad_spa. This
9039 * will return NULL if there are now no l2arc devices or if
9040 * they are all faulted.
9042 * If a device is returned, its spa's config lock is also
9043 * held to prevent device removal. l2arc_dev_get_next()
9044 * will grab and release l2arc_dev_mtx.
9046 if ((dev
= l2arc_dev_get_next()) == NULL
)
9049 spa
= dev
->l2ad_spa
;
9050 ASSERT3P(spa
, !=, NULL
);
9053 * If the pool is read-only then force the feed thread to
9054 * sleep a little longer.
9056 if (!spa_writeable(spa
)) {
9057 next
= ddi_get_lbolt() + 5 * l2arc_feed_secs
* hz
;
9058 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9063 * Avoid contributing to memory pressure.
9065 if (arc_reclaim_needed()) {
9066 ARCSTAT_BUMP(arcstat_l2_abort_lowmem
);
9067 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9071 ARCSTAT_BUMP(arcstat_l2_feeds
);
9073 size
= l2arc_write_size();
9076 * Evict L2ARC buffers that will be overwritten.
9078 l2arc_evict(dev
, size
, B_FALSE
);
9081 * Write ARC buffers.
9083 wrote
= l2arc_write_buffers(spa
, dev
, size
);
9086 * Calculate interval between writes.
9088 next
= l2arc_write_interval(begin
, size
, wrote
);
9089 spa_config_exit(spa
, SCL_L2ARC
, dev
);
9091 spl_fstrans_unmark(cookie
);
9093 l2arc_thread_exit
= 0;
9094 cv_broadcast(&l2arc_feed_thr_cv
);
9095 CALLB_CPR_EXIT(&cpr
); /* drops l2arc_feed_thr_lock */
9100 l2arc_vdev_present(vdev_t
*vd
)
9104 mutex_enter(&l2arc_dev_mtx
);
9105 for (dev
= list_head(l2arc_dev_list
); dev
!= NULL
;
9106 dev
= list_next(l2arc_dev_list
, dev
)) {
9107 if (dev
->l2ad_vdev
== vd
)
9110 mutex_exit(&l2arc_dev_mtx
);
9112 return (dev
!= NULL
);
9116 * Add a vdev for use by the L2ARC. By this point the spa has already
9117 * validated the vdev and opened it.
9120 l2arc_add_vdev(spa_t
*spa
, vdev_t
*vd
)
9122 l2arc_dev_t
*adddev
;
9124 ASSERT(!l2arc_vdev_present(vd
));
9127 * Create a new l2arc device entry.
9129 adddev
= kmem_zalloc(sizeof (l2arc_dev_t
), KM_SLEEP
);
9130 adddev
->l2ad_spa
= spa
;
9131 adddev
->l2ad_vdev
= vd
;
9132 adddev
->l2ad_start
= VDEV_LABEL_START_SIZE
;
9133 adddev
->l2ad_end
= VDEV_LABEL_START_SIZE
+ vdev_get_min_asize(vd
);
9134 adddev
->l2ad_hand
= adddev
->l2ad_start
;
9135 adddev
->l2ad_first
= B_TRUE
;
9136 adddev
->l2ad_writing
= B_FALSE
;
9137 list_link_init(&adddev
->l2ad_node
);
9139 mutex_init(&adddev
->l2ad_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
9141 * This is a list of all ARC buffers that are still valid on the
9144 list_create(&adddev
->l2ad_buflist
, sizeof (arc_buf_hdr_t
),
9145 offsetof(arc_buf_hdr_t
, b_l2hdr
.b_l2node
));
9147 vdev_space_update(vd
, 0, 0, adddev
->l2ad_end
- adddev
->l2ad_hand
);
9148 zfs_refcount_create(&adddev
->l2ad_alloc
);
9151 * Add device to global list
9153 mutex_enter(&l2arc_dev_mtx
);
9154 list_insert_head(l2arc_dev_list
, adddev
);
9155 atomic_inc_64(&l2arc_ndev
);
9156 mutex_exit(&l2arc_dev_mtx
);
9160 * Remove a vdev from the L2ARC.
9163 l2arc_remove_vdev(vdev_t
*vd
)
9165 l2arc_dev_t
*dev
, *nextdev
, *remdev
= NULL
;
9168 * Find the device by vdev
9170 mutex_enter(&l2arc_dev_mtx
);
9171 for (dev
= list_head(l2arc_dev_list
); dev
; dev
= nextdev
) {
9172 nextdev
= list_next(l2arc_dev_list
, dev
);
9173 if (vd
== dev
->l2ad_vdev
) {
9178 ASSERT3P(remdev
, !=, NULL
);
9181 * Remove device from global list
9183 list_remove(l2arc_dev_list
, remdev
);
9184 l2arc_dev_last
= NULL
; /* may have been invalidated */
9185 atomic_dec_64(&l2arc_ndev
);
9186 mutex_exit(&l2arc_dev_mtx
);
9189 * Clear all buflists and ARC references. L2ARC device flush.
9191 l2arc_evict(remdev
, 0, B_TRUE
);
9192 list_destroy(&remdev
->l2ad_buflist
);
9193 mutex_destroy(&remdev
->l2ad_mtx
);
9194 zfs_refcount_destroy(&remdev
->l2ad_alloc
);
9195 kmem_free(remdev
, sizeof (l2arc_dev_t
));
9201 l2arc_thread_exit
= 0;
9203 l2arc_writes_sent
= 0;
9204 l2arc_writes_done
= 0;
9206 mutex_init(&l2arc_feed_thr_lock
, NULL
, MUTEX_DEFAULT
, NULL
);
9207 cv_init(&l2arc_feed_thr_cv
, NULL
, CV_DEFAULT
, NULL
);
9208 mutex_init(&l2arc_dev_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
9209 mutex_init(&l2arc_free_on_write_mtx
, NULL
, MUTEX_DEFAULT
, NULL
);
9211 l2arc_dev_list
= &L2ARC_dev_list
;
9212 l2arc_free_on_write
= &L2ARC_free_on_write
;
9213 list_create(l2arc_dev_list
, sizeof (l2arc_dev_t
),
9214 offsetof(l2arc_dev_t
, l2ad_node
));
9215 list_create(l2arc_free_on_write
, sizeof (l2arc_data_free_t
),
9216 offsetof(l2arc_data_free_t
, l2df_list_node
));
9223 * This is called from dmu_fini(), which is called from spa_fini();
9224 * Because of this, we can assume that all l2arc devices have
9225 * already been removed when the pools themselves were removed.
9228 l2arc_do_free_on_write();
9230 mutex_destroy(&l2arc_feed_thr_lock
);
9231 cv_destroy(&l2arc_feed_thr_cv
);
9232 mutex_destroy(&l2arc_dev_mtx
);
9233 mutex_destroy(&l2arc_free_on_write_mtx
);
9235 list_destroy(l2arc_dev_list
);
9236 list_destroy(l2arc_free_on_write
);
9242 if (!(spa_mode_global
& FWRITE
))
9245 (void) thread_create(NULL
, 0, l2arc_feed_thread
, NULL
, 0, &p0
,
9246 TS_RUN
, defclsyspri
);
9252 if (!(spa_mode_global
& FWRITE
))
9255 mutex_enter(&l2arc_feed_thr_lock
);
9256 cv_signal(&l2arc_feed_thr_cv
); /* kick thread out of startup */
9257 l2arc_thread_exit
= 1;
9258 while (l2arc_thread_exit
!= 0)
9259 cv_wait(&l2arc_feed_thr_cv
, &l2arc_feed_thr_lock
);
9260 mutex_exit(&l2arc_feed_thr_lock
);
9263 #if defined(_KERNEL)
9264 EXPORT_SYMBOL(arc_buf_size
);
9265 EXPORT_SYMBOL(arc_write
);
9266 EXPORT_SYMBOL(arc_read
);
9267 EXPORT_SYMBOL(arc_buf_info
);
9268 EXPORT_SYMBOL(arc_getbuf_func
);
9269 EXPORT_SYMBOL(arc_add_prune_callback
);
9270 EXPORT_SYMBOL(arc_remove_prune_callback
);
9273 module_param(zfs_arc_min
, ulong
, 0644);
9274 MODULE_PARM_DESC(zfs_arc_min
, "Min arc size");
9276 module_param(zfs_arc_max
, ulong
, 0644);
9277 MODULE_PARM_DESC(zfs_arc_max
, "Max arc size");
9279 module_param(zfs_arc_meta_limit
, ulong
, 0644);
9280 MODULE_PARM_DESC(zfs_arc_meta_limit
, "Meta limit for arc size");
9282 module_param(zfs_arc_meta_limit_percent
, ulong
, 0644);
9283 MODULE_PARM_DESC(zfs_arc_meta_limit_percent
,
9284 "Percent of arc size for arc meta limit");
9286 module_param(zfs_arc_meta_min
, ulong
, 0644);
9287 MODULE_PARM_DESC(zfs_arc_meta_min
, "Min arc metadata");
9289 module_param(zfs_arc_meta_prune
, int, 0644);
9290 MODULE_PARM_DESC(zfs_arc_meta_prune
, "Meta objects to scan for prune");
9292 module_param(zfs_arc_meta_adjust_restarts
, int, 0644);
9293 MODULE_PARM_DESC(zfs_arc_meta_adjust_restarts
,
9294 "Limit number of restarts in arc_adjust_meta");
9296 module_param(zfs_arc_meta_strategy
, int, 0644);
9297 MODULE_PARM_DESC(zfs_arc_meta_strategy
, "Meta reclaim strategy");
9299 module_param(zfs_arc_grow_retry
, int, 0644);
9300 MODULE_PARM_DESC(zfs_arc_grow_retry
, "Seconds before growing arc size");
9302 module_param(zfs_arc_p_dampener_disable
, int, 0644);
9303 MODULE_PARM_DESC(zfs_arc_p_dampener_disable
, "disable arc_p adapt dampener");
9305 module_param(zfs_arc_shrink_shift
, int, 0644);
9306 MODULE_PARM_DESC(zfs_arc_shrink_shift
, "log2(fraction of arc to reclaim)");
9308 module_param(zfs_arc_pc_percent
, uint
, 0644);
9309 MODULE_PARM_DESC(zfs_arc_pc_percent
,
9310 "Percent of pagecache to reclaim arc to");
9312 module_param(zfs_arc_p_min_shift
, int, 0644);
9313 MODULE_PARM_DESC(zfs_arc_p_min_shift
, "arc_c shift to calc min/max arc_p");
9315 module_param(zfs_arc_average_blocksize
, int, 0444);
9316 MODULE_PARM_DESC(zfs_arc_average_blocksize
, "Target average block size");
9318 module_param(zfs_compressed_arc_enabled
, int, 0644);
9319 MODULE_PARM_DESC(zfs_compressed_arc_enabled
, "Disable compressed arc buffers");
9321 module_param(zfs_arc_min_prefetch_ms
, int, 0644);
9322 MODULE_PARM_DESC(zfs_arc_min_prefetch_ms
, "Min life of prefetch block in ms");
9324 module_param(zfs_arc_min_prescient_prefetch_ms
, int, 0644);
9325 MODULE_PARM_DESC(zfs_arc_min_prescient_prefetch_ms
,
9326 "Min life of prescient prefetched block in ms");
9328 module_param(l2arc_write_max
, ulong
, 0644);
9329 MODULE_PARM_DESC(l2arc_write_max
, "Max write bytes per interval");
9331 module_param(l2arc_write_boost
, ulong
, 0644);
9332 MODULE_PARM_DESC(l2arc_write_boost
, "Extra write bytes during device warmup");
9334 module_param(l2arc_headroom
, ulong
, 0644);
9335 MODULE_PARM_DESC(l2arc_headroom
, "Number of max device writes to precache");
9337 module_param(l2arc_headroom_boost
, ulong
, 0644);
9338 MODULE_PARM_DESC(l2arc_headroom_boost
, "Compressed l2arc_headroom multiplier");
9340 module_param(l2arc_feed_secs
, ulong
, 0644);
9341 MODULE_PARM_DESC(l2arc_feed_secs
, "Seconds between L2ARC writing");
9343 module_param(l2arc_feed_min_ms
, ulong
, 0644);
9344 MODULE_PARM_DESC(l2arc_feed_min_ms
, "Min feed interval in milliseconds");
9346 module_param(l2arc_noprefetch
, int, 0644);
9347 MODULE_PARM_DESC(l2arc_noprefetch
, "Skip caching prefetched buffers");
9349 module_param(l2arc_feed_again
, int, 0644);
9350 MODULE_PARM_DESC(l2arc_feed_again
, "Turbo L2ARC warmup");
9352 module_param(l2arc_norw
, int, 0644);
9353 MODULE_PARM_DESC(l2arc_norw
, "No reads during writes");
9355 module_param(zfs_arc_lotsfree_percent
, int, 0644);
9356 MODULE_PARM_DESC(zfs_arc_lotsfree_percent
,
9357 "System free memory I/O throttle in bytes");
9359 module_param(zfs_arc_sys_free
, ulong
, 0644);
9360 MODULE_PARM_DESC(zfs_arc_sys_free
, "System free memory target size in bytes");
9362 module_param(zfs_arc_dnode_limit
, ulong
, 0644);
9363 MODULE_PARM_DESC(zfs_arc_dnode_limit
, "Minimum bytes of dnodes in arc");
9365 module_param(zfs_arc_dnode_limit_percent
, ulong
, 0644);
9366 MODULE_PARM_DESC(zfs_arc_dnode_limit_percent
,
9367 "Percent of ARC meta buffers for dnodes");
9369 module_param(zfs_arc_dnode_reduce_percent
, ulong
, 0644);
9370 MODULE_PARM_DESC(zfs_arc_dnode_reduce_percent
,
9371 "Percentage of excess dnodes to try to unpin");