1 /* 2 * CDDL HEADER START 3 * 4 * The contents of this file are subject to the terms of the 5 * Common Development and Distribution License (the "License"). 6 * You may not use this file except in compliance with the License. 7 * 8 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE 9 * or http://www.opensolaris.org/os/licensing. 10 * See the License for the specific language governing permissions 11 * and limitations under the License. 12 * 13 * When distributing Covered Code, include this CDDL HEADER in each 14 * file and include the License file at usr/src/OPENSOLARIS.LICENSE. 15 * If applicable, add the following below this CDDL HEADER, with the 16 * fields enclosed by brackets "[]" replaced with your own identifying 17 * information: Portions Copyright [yyyy] [name of copyright owner] 18 * 19 * CDDL HEADER END 20 */ 21 /* 22 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved. 23 * Copyright (c) 2019, Joyent, Inc. 24 * Copyright (c) 2011, 2018 by Delphix. All rights reserved. 25 * Copyright (c) 2014 by Saso Kiselkov. All rights reserved. 26 * Copyright 2017 Nexenta Systems, Inc. All rights reserved. 27 * Copyright (c) 2011, 2019, Delphix. All rights reserved. 28 * Copyright (c) 2020, George Amanakis. All rights reserved. 29 */ 30 31 /* 32 * DVA-based Adjustable Replacement Cache 33 * 34 * While much of the theory of operation used here is 35 * based on the self-tuning, low overhead replacement cache 36 * presented by Megiddo and Modha at FAST 2003, there are some 37 * significant differences: 38 * 39 * 1. The Megiddo and Modha model assumes any page is evictable. 40 * Pages in its cache cannot be "locked" into memory. This makes 41 * the eviction algorithm simple: evict the last page in the list. 42 * This also make the performance characteristics easy to reason 43 * about. Our cache is not so simple. At any given moment, some 44 * subset of the blocks in the cache are un-evictable because we 45 * have handed out a reference to them. Blocks are only evictable 46 * when there are no external references active. This makes 47 * eviction far more problematic: we choose to evict the evictable 48 * blocks that are the "lowest" in the list. 49 * 50 * There are times when it is not possible to evict the requested 51 * space. In these circumstances we are unable to adjust the cache 52 * size. To prevent the cache growing unbounded at these times we 53 * implement a "cache throttle" that slows the flow of new data 54 * into the cache until we can make space available. 55 * 56 * 2. The Megiddo and Modha model assumes a fixed cache size. 57 * Pages are evicted when the cache is full and there is a cache 58 * miss. Our model has a variable sized cache. It grows with 59 * high use, but also tries to react to memory pressure from the 60 * operating system: decreasing its size when system memory is 61 * tight. 62 * 63 * 3. The Megiddo and Modha model assumes a fixed page size. All 64 * elements of the cache are therefore exactly the same size. So 65 * when adjusting the cache size following a cache miss, its simply 66 * a matter of choosing a single page to evict. In our model, we 67 * have variable sized cache blocks (rangeing from 512 bytes to 68 * 128K bytes). We therefore choose a set of blocks to evict to make 69 * space for a cache miss that approximates as closely as possible 70 * the space used by the new block. 71 * 72 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache" 73 * by N. Megiddo & D. Modha, FAST 2003 74 */ 75 76 /* 77 * The locking model: 78 * 79 * A new reference to a cache buffer can be obtained in two 80 * ways: 1) via a hash table lookup using the DVA as a key, 81 * or 2) via one of the ARC lists. The arc_read() interface 82 * uses method 1, while the internal ARC algorithms for 83 * adjusting the cache use method 2. We therefore provide two 84 * types of locks: 1) the hash table lock array, and 2) the 85 * ARC list locks. 86 * 87 * Buffers do not have their own mutexes, rather they rely on the 88 * hash table mutexes for the bulk of their protection (i.e. most 89 * fields in the arc_buf_hdr_t are protected by these mutexes). 90 * 91 * buf_hash_find() returns the appropriate mutex (held) when it 92 * locates the requested buffer in the hash table. It returns 93 * NULL for the mutex if the buffer was not in the table. 94 * 95 * buf_hash_remove() expects the appropriate hash mutex to be 96 * already held before it is invoked. 97 * 98 * Each ARC state also has a mutex which is used to protect the 99 * buffer list associated with the state. When attempting to 100 * obtain a hash table lock while holding an ARC list lock you 101 * must use: mutex_tryenter() to avoid deadlock. Also note that 102 * the active state mutex must be held before the ghost state mutex. 103 * 104 * Note that the majority of the performance stats are manipulated 105 * with atomic operations. 106 * 107 * The L2ARC uses the l2ad_mtx on each vdev for the following: 108 * 109 * - L2ARC buflist creation 110 * - L2ARC buflist eviction 111 * - L2ARC write completion, which walks L2ARC buflists 112 * - ARC header destruction, as it removes from L2ARC buflists 113 * - ARC header release, as it removes from L2ARC buflists 114 */ 115 116 /* 117 * ARC operation: 118 * 119 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure. 120 * This structure can point either to a block that is still in the cache or to 121 * one that is only accessible in an L2 ARC device, or it can provide 122 * information about a block that was recently evicted. If a block is 123 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough 124 * information to retrieve it from the L2ARC device. This information is 125 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block 126 * that is in this state cannot access the data directly. 127 * 128 * Blocks that are actively being referenced or have not been evicted 129 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within 130 * the arc_buf_hdr_t that will point to the data block in memory. A block can 131 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC 132 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and 133 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd). 134 * 135 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the 136 * ability to store the physical data (b_pabd) associated with the DVA of the 137 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block, 138 * it will match its on-disk compression characteristics. This behavior can be 139 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the 140 * compressed ARC functionality is disabled, the b_pabd will point to an 141 * uncompressed version of the on-disk data. 142 * 143 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each 144 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it. 145 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC 146 * consumer. The ARC will provide references to this data and will keep it 147 * cached until it is no longer in use. The ARC caches only the L1ARC's physical 148 * data block and will evict any arc_buf_t that is no longer referenced. The 149 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the 150 * "overhead_size" kstat. 151 * 152 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or 153 * compressed form. The typical case is that consumers will want uncompressed 154 * data, and when that happens a new data buffer is allocated where the data is 155 * decompressed for them to use. Currently the only consumer who wants 156 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it 157 * exists on disk. When this happens, the arc_buf_t's data buffer is shared 158 * with the arc_buf_hdr_t. 159 * 160 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The 161 * first one is owned by a compressed send consumer (and therefore references 162 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be 163 * used by any other consumer (and has its own uncompressed copy of the data 164 * buffer). 165 * 166 * arc_buf_hdr_t 167 * +-----------+ 168 * | fields | 169 * | common to | 170 * | L1- and | 171 * | L2ARC | 172 * +-----------+ 173 * | l2arc_buf_hdr_t 174 * | | 175 * +-----------+ 176 * | l1arc_buf_hdr_t 177 * | | arc_buf_t 178 * | b_buf +------------>+-----------+ arc_buf_t 179 * | b_pabd +-+ |b_next +---->+-----------+ 180 * +-----------+ | |-----------| |b_next +-->NULL 181 * | |b_comp = T | +-----------+ 182 * | |b_data +-+ |b_comp = F | 183 * | +-----------+ | |b_data +-+ 184 * +->+------+ | +-----------+ | 185 * compressed | | | | 186 * data | |<--------------+ | uncompressed 187 * +------+ compressed, | data 188 * shared +-->+------+ 189 * data | | 190 * | | 191 * +------+ 192 * 193 * When a consumer reads a block, the ARC must first look to see if the 194 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new 195 * arc_buf_t and either copies uncompressed data into a new data buffer from an 196 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a 197 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the 198 * hdr is compressed and the desired compression characteristics of the 199 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the 200 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be 201 * the last buffer in the hdr's b_buf list, however a shared compressed buf can 202 * be anywhere in the hdr's list. 203 * 204 * The diagram below shows an example of an uncompressed ARC hdr that is 205 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is 206 * the last element in the buf list): 207 * 208 * arc_buf_hdr_t 209 * +-----------+ 210 * | | 211 * | | 212 * | | 213 * +-----------+ 214 * l2arc_buf_hdr_t| | 215 * | | 216 * +-----------+ 217 * l1arc_buf_hdr_t| | 218 * | | arc_buf_t (shared) 219 * | b_buf +------------>+---------+ arc_buf_t 220 * | | |b_next +---->+---------+ 221 * | b_pabd +-+ |---------| |b_next +-->NULL 222 * +-----------+ | | | +---------+ 223 * | |b_data +-+ | | 224 * | +---------+ | |b_data +-+ 225 * +->+------+ | +---------+ | 226 * | | | | 227 * uncompressed | | | | 228 * data +------+ | | 229 * ^ +->+------+ | 230 * | uncompressed | | | 231 * | data | | | 232 * | +------+ | 233 * +---------------------------------+ 234 * 235 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd 236 * since the physical block is about to be rewritten. The new data contents 237 * will be contained in the arc_buf_t. As the I/O pipeline performs the write, 238 * it may compress the data before writing it to disk. The ARC will be called 239 * with the transformed data and will bcopy the transformed on-disk block into 240 * a newly allocated b_pabd. Writes are always done into buffers which have 241 * either been loaned (and hence are new and don't have other readers) or 242 * buffers which have been released (and hence have their own hdr, if there 243 * were originally other readers of the buf's original hdr). This ensures that 244 * the ARC only needs to update a single buf and its hdr after a write occurs. 245 * 246 * When the L2ARC is in use, it will also take advantage of the b_pabd. The 247 * L2ARC will always write the contents of b_pabd to the L2ARC. This means 248 * that when compressed ARC is enabled that the L2ARC blocks are identical 249 * to the on-disk block in the main data pool. This provides a significant 250 * advantage since the ARC can leverage the bp's checksum when reading from the 251 * L2ARC to determine if the contents are valid. However, if the compressed 252 * ARC is disabled, then the L2ARC's block must be transformed to look 253 * like the physical block in the main data pool before comparing the 254 * checksum and determining its validity. 255 * 256 * The L1ARC has a slightly different system for storing encrypted data. 257 * Raw (encrypted + possibly compressed) data has a few subtle differences from 258 * data that is just compressed. The biggest difference is that it is not 259 * possible to decrypt encrypted data (or visa versa) if the keys aren't loaded. 260 * The other difference is that encryption cannot be treated as a suggestion. 261 * If a caller would prefer compressed data, but they actually wind up with 262 * uncompressed data the worst thing that could happen is there might be a 263 * performance hit. If the caller requests encrypted data, however, we must be 264 * sure they actually get it or else secret information could be leaked. Raw 265 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore, 266 * may have both an encrypted version and a decrypted version of its data at 267 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is 268 * copied out of this header. To avoid complications with b_pabd, raw buffers 269 * cannot be shared. 270 */ 271 272 #include <sys/spa.h> 273 #include <sys/zio.h> 274 #include <sys/spa_impl.h> 275 #include <sys/zio_compress.h> 276 #include <sys/zio_checksum.h> 277 #include <sys/zfs_context.h> 278 #include <sys/arc.h> 279 #include <sys/refcount.h> 280 #include <sys/vdev.h> 281 #include <sys/vdev_impl.h> 282 #include <sys/dsl_pool.h> 283 #include <sys/zio_checksum.h> 284 #include <sys/multilist.h> 285 #include <sys/abd.h> 286 #include <sys/zil.h> 287 #include <sys/fm/fs/zfs.h> 288 #ifdef _KERNEL 289 #include <sys/vmsystm.h> 290 #include <vm/anon.h> 291 #include <sys/fs/swapnode.h> 292 #include <sys/dnlc.h> 293 #endif 294 #include <sys/callb.h> 295 #include <sys/kstat.h> 296 #include <sys/zthr.h> 297 #include <zfs_fletcher.h> 298 #include <sys/arc_impl.h> 299 #include <sys/aggsum.h> 300 #include <sys/cityhash.h> 301 #include <sys/param.h> 302 303 #ifndef _KERNEL 304 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */ 305 boolean_t arc_watch = B_FALSE; 306 int arc_procfd; 307 #endif 308 309 /* 310 * This thread's job is to keep enough free memory in the system, by 311 * calling arc_kmem_reap_now() plus arc_shrink(), which improves 312 * arc_available_memory(). 313 */ 314 static zthr_t *arc_reap_zthr; 315 316 /* 317 * This thread's job is to keep arc_size under arc_c, by calling 318 * arc_adjust(), which improves arc_is_overflowing(). 319 */ 320 static zthr_t *arc_adjust_zthr; 321 322 static kmutex_t arc_adjust_lock; 323 static kcondvar_t arc_adjust_waiters_cv; 324 static boolean_t arc_adjust_needed = B_FALSE; 325 326 uint_t arc_reduce_dnlc_percent = 3; 327 328 /* 329 * The number of headers to evict in arc_evict_state_impl() before 330 * dropping the sublist lock and evicting from another sublist. A lower 331 * value means we're more likely to evict the "correct" header (i.e. the 332 * oldest header in the arc state), but comes with higher overhead 333 * (i.e. more invocations of arc_evict_state_impl()). 334 */ 335 int zfs_arc_evict_batch_limit = 10; 336 337 /* number of seconds before growing cache again */ 338 int arc_grow_retry = 60; 339 340 /* 341 * Minimum time between calls to arc_kmem_reap_soon(). Note that this will 342 * be converted to ticks, so with the default hz=100, a setting of 15 ms 343 * will actually wait 2 ticks, or 20ms. 344 */ 345 int arc_kmem_cache_reap_retry_ms = 1000; 346 347 /* shift of arc_c for calculating overflow limit in arc_get_data_impl */ 348 int zfs_arc_overflow_shift = 8; 349 350 /* shift of arc_c for calculating both min and max arc_p */ 351 int arc_p_min_shift = 4; 352 353 /* log2(fraction of arc to reclaim) */ 354 int arc_shrink_shift = 7; 355 356 /* 357 * log2(fraction of ARC which must be free to allow growing). 358 * I.e. If there is less than arc_c >> arc_no_grow_shift free memory, 359 * when reading a new block into the ARC, we will evict an equal-sized block 360 * from the ARC. 361 * 362 * This must be less than arc_shrink_shift, so that when we shrink the ARC, 363 * we will still not allow it to grow. 364 */ 365 int arc_no_grow_shift = 5; 366 367 368 /* 369 * minimum lifespan of a prefetch block in clock ticks 370 * (initialized in arc_init()) 371 */ 372 static int zfs_arc_min_prefetch_ms = 1; 373 static int zfs_arc_min_prescient_prefetch_ms = 6; 374 375 /* 376 * If this percent of memory is free, don't throttle. 377 */ 378 int arc_lotsfree_percent = 10; 379 380 static boolean_t arc_initialized; 381 382 /* 383 * The arc has filled available memory and has now warmed up. 384 */ 385 static boolean_t arc_warm; 386 387 /* 388 * log2 fraction of the zio arena to keep free. 389 */ 390 int arc_zio_arena_free_shift = 2; 391 392 /* 393 * These tunables are for performance analysis. 394 */ 395 uint64_t zfs_arc_max; 396 uint64_t zfs_arc_min; 397 uint64_t zfs_arc_meta_limit = 0; 398 uint64_t zfs_arc_meta_min = 0; 399 int zfs_arc_grow_retry = 0; 400 int zfs_arc_shrink_shift = 0; 401 int zfs_arc_p_min_shift = 0; 402 int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */ 403 404 /* 405 * ARC dirty data constraints for arc_tempreserve_space() throttle 406 */ 407 uint_t zfs_arc_dirty_limit_percent = 50; /* total dirty data limit */ 408 uint_t zfs_arc_anon_limit_percent = 25; /* anon block dirty limit */ 409 uint_t zfs_arc_pool_dirty_percent = 20; /* each pool's anon allowance */ 410 411 boolean_t zfs_compressed_arc_enabled = B_TRUE; 412 413 /* The 6 states: */ 414 static arc_state_t ARC_anon; 415 static arc_state_t ARC_mru; 416 static arc_state_t ARC_mru_ghost; 417 static arc_state_t ARC_mfu; 418 static arc_state_t ARC_mfu_ghost; 419 static arc_state_t ARC_l2c_only; 420 421 arc_stats_t arc_stats = { 422 { "hits", KSTAT_DATA_UINT64 }, 423 { "misses", KSTAT_DATA_UINT64 }, 424 { "demand_data_hits", KSTAT_DATA_UINT64 }, 425 { "demand_data_misses", KSTAT_DATA_UINT64 }, 426 { "demand_metadata_hits", KSTAT_DATA_UINT64 }, 427 { "demand_metadata_misses", KSTAT_DATA_UINT64 }, 428 { "prefetch_data_hits", KSTAT_DATA_UINT64 }, 429 { "prefetch_data_misses", KSTAT_DATA_UINT64 }, 430 { "prefetch_metadata_hits", KSTAT_DATA_UINT64 }, 431 { "prefetch_metadata_misses", KSTAT_DATA_UINT64 }, 432 { "mru_hits", KSTAT_DATA_UINT64 }, 433 { "mru_ghost_hits", KSTAT_DATA_UINT64 }, 434 { "mfu_hits", KSTAT_DATA_UINT64 }, 435 { "mfu_ghost_hits", KSTAT_DATA_UINT64 }, 436 { "deleted", KSTAT_DATA_UINT64 }, 437 { "mutex_miss", KSTAT_DATA_UINT64 }, 438 { "access_skip", KSTAT_DATA_UINT64 }, 439 { "evict_skip", KSTAT_DATA_UINT64 }, 440 { "evict_not_enough", KSTAT_DATA_UINT64 }, 441 { "evict_l2_cached", KSTAT_DATA_UINT64 }, 442 { "evict_l2_eligible", KSTAT_DATA_UINT64 }, 443 { "evict_l2_ineligible", KSTAT_DATA_UINT64 }, 444 { "evict_l2_skip", KSTAT_DATA_UINT64 }, 445 { "hash_elements", KSTAT_DATA_UINT64 }, 446 { "hash_elements_max", KSTAT_DATA_UINT64 }, 447 { "hash_collisions", KSTAT_DATA_UINT64 }, 448 { "hash_chains", KSTAT_DATA_UINT64 }, 449 { "hash_chain_max", KSTAT_DATA_UINT64 }, 450 { "p", KSTAT_DATA_UINT64 }, 451 { "c", KSTAT_DATA_UINT64 }, 452 { "c_min", KSTAT_DATA_UINT64 }, 453 { "c_max", KSTAT_DATA_UINT64 }, 454 { "size", KSTAT_DATA_UINT64 }, 455 { "compressed_size", KSTAT_DATA_UINT64 }, 456 { "uncompressed_size", KSTAT_DATA_UINT64 }, 457 { "overhead_size", KSTAT_DATA_UINT64 }, 458 { "hdr_size", KSTAT_DATA_UINT64 }, 459 { "data_size", KSTAT_DATA_UINT64 }, 460 { "metadata_size", KSTAT_DATA_UINT64 }, 461 { "other_size", KSTAT_DATA_UINT64 }, 462 { "anon_size", KSTAT_DATA_UINT64 }, 463 { "anon_evictable_data", KSTAT_DATA_UINT64 }, 464 { "anon_evictable_metadata", KSTAT_DATA_UINT64 }, 465 { "mru_size", KSTAT_DATA_UINT64 }, 466 { "mru_evictable_data", KSTAT_DATA_UINT64 }, 467 { "mru_evictable_metadata", KSTAT_DATA_UINT64 }, 468 { "mru_ghost_size", KSTAT_DATA_UINT64 }, 469 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 }, 470 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 471 { "mfu_size", KSTAT_DATA_UINT64 }, 472 { "mfu_evictable_data", KSTAT_DATA_UINT64 }, 473 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 }, 474 { "mfu_ghost_size", KSTAT_DATA_UINT64 }, 475 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 }, 476 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 477 { "l2_hits", KSTAT_DATA_UINT64 }, 478 { "l2_misses", KSTAT_DATA_UINT64 }, 479 { "l2_feeds", KSTAT_DATA_UINT64 }, 480 { "l2_rw_clash", KSTAT_DATA_UINT64 }, 481 { "l2_read_bytes", KSTAT_DATA_UINT64 }, 482 { "l2_write_bytes", KSTAT_DATA_UINT64 }, 483 { "l2_writes_sent", KSTAT_DATA_UINT64 }, 484 { "l2_writes_done", KSTAT_DATA_UINT64 }, 485 { "l2_writes_error", KSTAT_DATA_UINT64 }, 486 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 }, 487 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 }, 488 { "l2_evict_reading", KSTAT_DATA_UINT64 }, 489 { "l2_evict_l1cached", KSTAT_DATA_UINT64 }, 490 { "l2_free_on_write", KSTAT_DATA_UINT64 }, 491 { "l2_abort_lowmem", KSTAT_DATA_UINT64 }, 492 { "l2_cksum_bad", KSTAT_DATA_UINT64 }, 493 { "l2_io_error", KSTAT_DATA_UINT64 }, 494 { "l2_size", KSTAT_DATA_UINT64 }, 495 { "l2_asize", KSTAT_DATA_UINT64 }, 496 { "l2_hdr_size", KSTAT_DATA_UINT64 }, 497 { "l2_log_blk_writes", KSTAT_DATA_UINT64 }, 498 { "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 }, 499 { "l2_log_blk_asize", KSTAT_DATA_UINT64 }, 500 { "l2_log_blk_count", KSTAT_DATA_UINT64 }, 501 { "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 }, 502 { "l2_rebuild_success", KSTAT_DATA_UINT64 }, 503 { "l2_rebuild_unsupported", KSTAT_DATA_UINT64 }, 504 { "l2_rebuild_io_errors", KSTAT_DATA_UINT64 }, 505 { "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 }, 506 { "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 }, 507 { "l2_rebuild_lowmem", KSTAT_DATA_UINT64 }, 508 { "l2_rebuild_size", KSTAT_DATA_UINT64 }, 509 { "l2_rebuild_asize", KSTAT_DATA_UINT64 }, 510 { "l2_rebuild_bufs", KSTAT_DATA_UINT64 }, 511 { "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 }, 512 { "l2_rebuild_log_blks", KSTAT_DATA_UINT64 }, 513 { "memory_throttle_count", KSTAT_DATA_UINT64 }, 514 { "arc_meta_used", KSTAT_DATA_UINT64 }, 515 { "arc_meta_limit", KSTAT_DATA_UINT64 }, 516 { "arc_meta_max", KSTAT_DATA_UINT64 }, 517 { "arc_meta_min", KSTAT_DATA_UINT64 }, 518 { "async_upgrade_sync", KSTAT_DATA_UINT64 }, 519 { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64 }, 520 { "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 }, 521 }; 522 523 #define ARCSTAT_MAX(stat, val) { \ 524 uint64_t m; \ 525 while ((val) > (m = arc_stats.stat.value.ui64) && \ 526 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \ 527 continue; \ 528 } 529 530 #define ARCSTAT_MAXSTAT(stat) \ 531 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64) 532 533 /* 534 * We define a macro to allow ARC hits/misses to be easily broken down by 535 * two separate conditions, giving a total of four different subtypes for 536 * each of hits and misses (so eight statistics total). 537 */ 538 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \ 539 if (cond1) { \ 540 if (cond2) { \ 541 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \ 542 } else { \ 543 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \ 544 } \ 545 } else { \ 546 if (cond2) { \ 547 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \ 548 } else { \ 549 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\ 550 } \ 551 } 552 553 /* 554 * This macro allows us to use kstats as floating averages. Each time we 555 * update this kstat, we first factor it and the update value by 556 * ARCSTAT_AVG_FACTOR to shrink the new value's contribution to the overall 557 * average. This macro assumes that integer loads and stores are atomic, but 558 * is not safe for multiple writers updating the kstat in parallel (only the 559 * last writer's update will remain). 560 */ 561 #define ARCSTAT_F_AVG_FACTOR 3 562 #define ARCSTAT_F_AVG(stat, value) \ 563 do { \ 564 uint64_t x = ARCSTAT(stat); \ 565 x = x - x / ARCSTAT_F_AVG_FACTOR + \ 566 (value) / ARCSTAT_F_AVG_FACTOR; \ 567 ARCSTAT(stat) = x; \ 568 _NOTE(CONSTCOND) \ 569 } while (0) 570 571 kstat_t *arc_ksp; 572 static arc_state_t *arc_anon; 573 static arc_state_t *arc_mru; 574 static arc_state_t *arc_mru_ghost; 575 static arc_state_t *arc_mfu; 576 static arc_state_t *arc_mfu_ghost; 577 static arc_state_t *arc_l2c_only; 578 579 /* 580 * There are also some ARC variables that we want to export, but that are 581 * updated so often that having the canonical representation be the statistic 582 * variable causes a performance bottleneck. We want to use aggsum_t's for these 583 * instead, but still be able to export the kstat in the same way as before. 584 * The solution is to always use the aggsum version, except in the kstat update 585 * callback. 586 */ 587 aggsum_t arc_size; 588 aggsum_t arc_meta_used; 589 aggsum_t astat_data_size; 590 aggsum_t astat_metadata_size; 591 aggsum_t astat_hdr_size; 592 aggsum_t astat_other_size; 593 aggsum_t astat_l2_hdr_size; 594 595 static int arc_no_grow; /* Don't try to grow cache size */ 596 static hrtime_t arc_growtime; 597 static uint64_t arc_tempreserve; 598 static uint64_t arc_loaned_bytes; 599 600 #define GHOST_STATE(state) \ 601 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \ 602 (state) == arc_l2c_only) 603 604 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE) 605 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) 606 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR) 607 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH) 608 #define HDR_PRESCIENT_PREFETCH(hdr) \ 609 ((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) 610 #define HDR_COMPRESSION_ENABLED(hdr) \ 611 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC) 612 613 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE) 614 #define HDR_L2_READING(hdr) \ 615 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \ 616 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)) 617 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING) 618 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED) 619 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD) 620 #define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED) 621 #define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH) 622 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA) 623 624 #define HDR_ISTYPE_METADATA(hdr) \ 625 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA) 626 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr)) 627 628 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR) 629 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR) 630 #define HDR_HAS_RABD(hdr) \ 631 (HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \ 632 (hdr)->b_crypt_hdr.b_rabd != NULL) 633 #define HDR_ENCRYPTED(hdr) \ 634 (HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 635 #define HDR_AUTHENTICATED(hdr) \ 636 (HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 637 638 /* For storing compression mode in b_flags */ 639 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1) 640 641 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \ 642 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS)) 643 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \ 644 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp)); 645 646 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL) 647 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED) 648 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED) 649 #define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED) 650 651 /* 652 * Other sizes 653 */ 654 655 #define HDR_FULL_CRYPT_SIZE ((int64_t)sizeof (arc_buf_hdr_t)) 656 #define HDR_FULL_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr)) 657 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr)) 658 659 /* 660 * Hash table routines 661 */ 662 663 #define HT_LOCK_PAD 64 664 665 struct ht_lock { 666 kmutex_t ht_lock; 667 #ifdef _KERNEL 668 unsigned char pad[(HT_LOCK_PAD - sizeof (kmutex_t))]; 669 #endif 670 }; 671 672 #define BUF_LOCKS 256 673 typedef struct buf_hash_table { 674 uint64_t ht_mask; 675 arc_buf_hdr_t **ht_table; 676 struct ht_lock ht_locks[BUF_LOCKS]; 677 } buf_hash_table_t; 678 679 static buf_hash_table_t buf_hash_table; 680 681 #define BUF_HASH_INDEX(spa, dva, birth) \ 682 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask) 683 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)]) 684 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock)) 685 #define HDR_LOCK(hdr) \ 686 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth))) 687 688 uint64_t zfs_crc64_table[256]; 689 690 /* 691 * Level 2 ARC 692 */ 693 694 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */ 695 #define L2ARC_HEADROOM 2 /* num of writes */ 696 /* 697 * If we discover during ARC scan any buffers to be compressed, we boost 698 * our headroom for the next scanning cycle by this percentage multiple. 699 */ 700 #define L2ARC_HEADROOM_BOOST 200 701 #define L2ARC_FEED_SECS 1 /* caching interval secs */ 702 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */ 703 704 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent) 705 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done) 706 707 /* L2ARC Performance Tunables */ 708 uint64_t l2arc_write_max = L2ARC_WRITE_SIZE; /* default max write size */ 709 uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra write during warmup */ 710 uint64_t l2arc_headroom = L2ARC_HEADROOM; /* number of dev writes */ 711 uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST; 712 uint64_t l2arc_feed_secs = L2ARC_FEED_SECS; /* interval seconds */ 713 uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval milliseconds */ 714 boolean_t l2arc_noprefetch = B_TRUE; /* don't cache prefetch bufs */ 715 boolean_t l2arc_feed_again = B_TRUE; /* turbo warmup */ 716 boolean_t l2arc_norw = B_TRUE; /* no reads during writes */ 717 718 /* 719 * L2ARC Internals 720 */ 721 static list_t L2ARC_dev_list; /* device list */ 722 static list_t *l2arc_dev_list; /* device list pointer */ 723 static kmutex_t l2arc_dev_mtx; /* device list mutex */ 724 static l2arc_dev_t *l2arc_dev_last; /* last device used */ 725 static list_t L2ARC_free_on_write; /* free after write buf list */ 726 static list_t *l2arc_free_on_write; /* free after write list ptr */ 727 static kmutex_t l2arc_free_on_write_mtx; /* mutex for list */ 728 static uint64_t l2arc_ndev; /* number of devices */ 729 730 typedef struct l2arc_read_callback { 731 arc_buf_hdr_t *l2rcb_hdr; /* read header */ 732 blkptr_t l2rcb_bp; /* original blkptr */ 733 zbookmark_phys_t l2rcb_zb; /* original bookmark */ 734 int l2rcb_flags; /* original flags */ 735 abd_t *l2rcb_abd; /* temporary buffer */ 736 } l2arc_read_callback_t; 737 738 typedef struct l2arc_data_free { 739 /* protected by l2arc_free_on_write_mtx */ 740 abd_t *l2df_abd; 741 size_t l2df_size; 742 arc_buf_contents_t l2df_type; 743 list_node_t l2df_list_node; 744 } l2arc_data_free_t; 745 746 static kmutex_t l2arc_feed_thr_lock; 747 static kcondvar_t l2arc_feed_thr_cv; 748 static uint8_t l2arc_thread_exit; 749 750 static kmutex_t l2arc_rebuild_thr_lock; 751 static kcondvar_t l2arc_rebuild_thr_cv; 752 753 enum arc_hdr_alloc_flags { 754 ARC_HDR_ALLOC_RDATA = 0x1, 755 ARC_HDR_DO_ADAPT = 0x2, 756 }; 757 758 759 static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *, boolean_t); 760 typedef enum arc_fill_flags { 761 ARC_FILL_LOCKED = 1 << 0, /* hdr lock is held */ 762 ARC_FILL_COMPRESSED = 1 << 1, /* fill with compressed data */ 763 ARC_FILL_ENCRYPTED = 1 << 2, /* fill with encrypted data */ 764 ARC_FILL_NOAUTH = 1 << 3, /* don't attempt to authenticate */ 765 ARC_FILL_IN_PLACE = 1 << 4 /* fill in place (special case) */ 766 } arc_fill_flags_t; 767 768 static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *); 769 static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *, boolean_t); 770 static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *); 771 static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *); 772 static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag); 773 static void arc_hdr_free_pabd(arc_buf_hdr_t *, boolean_t); 774 static void arc_hdr_alloc_pabd(arc_buf_hdr_t *, int); 775 static void arc_access(arc_buf_hdr_t *, kmutex_t *); 776 static boolean_t arc_is_overflowing(); 777 static void arc_buf_watch(arc_buf_t *); 778 static l2arc_dev_t *l2arc_vdev_get(vdev_t *vd); 779 780 static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *); 781 static uint32_t arc_bufc_to_flags(arc_buf_contents_t); 782 static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags); 783 static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags); 784 785 static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *); 786 static void l2arc_read_done(zio_t *); 787 788 /* 789 * The arc_all_memory function is a ZoL enhancement that lives in their OSL 790 * code. In user-space code, which is used primarily for testing, we return 791 * half of all memory. 792 */ 793 uint64_t 794 arc_all_memory(void) 795 { 796 #ifdef _KERNEL 797 return (ptob(physmem)); 798 #else 799 return ((sysconf(_SC_PAGESIZE) * sysconf(_SC_PHYS_PAGES)) / 2); 800 #endif 801 } 802 803 /* 804 * We use Cityhash for this. It's fast, and has good hash properties without 805 * requiring any large static buffers. 806 */ 807 static uint64_t 808 buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth) 809 { 810 return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth)); 811 } 812 813 #define HDR_EMPTY(hdr) \ 814 ((hdr)->b_dva.dva_word[0] == 0 && \ 815 (hdr)->b_dva.dva_word[1] == 0) 816 817 #define HDR_EMPTY_OR_LOCKED(hdr) \ 818 (HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr))) 819 820 #define HDR_EQUAL(spa, dva, birth, hdr) \ 821 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \ 822 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \ 823 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa) 824 825 static void 826 buf_discard_identity(arc_buf_hdr_t *hdr) 827 { 828 hdr->b_dva.dva_word[0] = 0; 829 hdr->b_dva.dva_word[1] = 0; 830 hdr->b_birth = 0; 831 } 832 833 static arc_buf_hdr_t * 834 buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp) 835 { 836 const dva_t *dva = BP_IDENTITY(bp); 837 uint64_t birth = BP_PHYSICAL_BIRTH(bp); 838 uint64_t idx = BUF_HASH_INDEX(spa, dva, birth); 839 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 840 arc_buf_hdr_t *hdr; 841 842 mutex_enter(hash_lock); 843 for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL; 844 hdr = hdr->b_hash_next) { 845 if (HDR_EQUAL(spa, dva, birth, hdr)) { 846 *lockp = hash_lock; 847 return (hdr); 848 } 849 } 850 mutex_exit(hash_lock); 851 *lockp = NULL; 852 return (NULL); 853 } 854 855 /* 856 * Insert an entry into the hash table. If there is already an element 857 * equal to elem in the hash table, then the already existing element 858 * will be returned and the new element will not be inserted. 859 * Otherwise returns NULL. 860 * If lockp == NULL, the caller is assumed to already hold the hash lock. 861 */ 862 static arc_buf_hdr_t * 863 buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp) 864 { 865 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 866 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 867 arc_buf_hdr_t *fhdr; 868 uint32_t i; 869 870 ASSERT(!DVA_IS_EMPTY(&hdr->b_dva)); 871 ASSERT(hdr->b_birth != 0); 872 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 873 874 if (lockp != NULL) { 875 *lockp = hash_lock; 876 mutex_enter(hash_lock); 877 } else { 878 ASSERT(MUTEX_HELD(hash_lock)); 879 } 880 881 for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL; 882 fhdr = fhdr->b_hash_next, i++) { 883 if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr)) 884 return (fhdr); 885 } 886 887 hdr->b_hash_next = buf_hash_table.ht_table[idx]; 888 buf_hash_table.ht_table[idx] = hdr; 889 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 890 891 /* collect some hash table performance data */ 892 if (i > 0) { 893 ARCSTAT_BUMP(arcstat_hash_collisions); 894 if (i == 1) 895 ARCSTAT_BUMP(arcstat_hash_chains); 896 897 ARCSTAT_MAX(arcstat_hash_chain_max, i); 898 } 899 900 ARCSTAT_BUMP(arcstat_hash_elements); 901 ARCSTAT_MAXSTAT(arcstat_hash_elements); 902 903 return (NULL); 904 } 905 906 static void 907 buf_hash_remove(arc_buf_hdr_t *hdr) 908 { 909 arc_buf_hdr_t *fhdr, **hdrp; 910 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 911 912 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx))); 913 ASSERT(HDR_IN_HASH_TABLE(hdr)); 914 915 hdrp = &buf_hash_table.ht_table[idx]; 916 while ((fhdr = *hdrp) != hdr) { 917 ASSERT3P(fhdr, !=, NULL); 918 hdrp = &fhdr->b_hash_next; 919 } 920 *hdrp = hdr->b_hash_next; 921 hdr->b_hash_next = NULL; 922 arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 923 924 /* collect some hash table performance data */ 925 ARCSTAT_BUMPDOWN(arcstat_hash_elements); 926 927 if (buf_hash_table.ht_table[idx] && 928 buf_hash_table.ht_table[idx]->b_hash_next == NULL) 929 ARCSTAT_BUMPDOWN(arcstat_hash_chains); 930 } 931 932 /* 933 * Global data structures and functions for the buf kmem cache. 934 */ 935 936 static kmem_cache_t *hdr_full_cache; 937 static kmem_cache_t *hdr_full_crypt_cache; 938 static kmem_cache_t *hdr_l2only_cache; 939 static kmem_cache_t *buf_cache; 940 941 static void 942 buf_fini(void) 943 { 944 int i; 945 946 kmem_free(buf_hash_table.ht_table, 947 (buf_hash_table.ht_mask + 1) * sizeof (void *)); 948 for (i = 0; i < BUF_LOCKS; i++) 949 mutex_destroy(&buf_hash_table.ht_locks[i].ht_lock); 950 kmem_cache_destroy(hdr_full_cache); 951 kmem_cache_destroy(hdr_full_crypt_cache); 952 kmem_cache_destroy(hdr_l2only_cache); 953 kmem_cache_destroy(buf_cache); 954 } 955 956 /* 957 * Constructor callback - called when the cache is empty 958 * and a new buf is requested. 959 */ 960 /* ARGSUSED */ 961 static int 962 hdr_full_cons(void *vbuf, void *unused, int kmflag) 963 { 964 arc_buf_hdr_t *hdr = vbuf; 965 966 bzero(hdr, HDR_FULL_SIZE); 967 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 968 cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL); 969 zfs_refcount_create(&hdr->b_l1hdr.b_refcnt); 970 mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL); 971 multilist_link_init(&hdr->b_l1hdr.b_arc_node); 972 arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS); 973 974 return (0); 975 } 976 977 /* ARGSUSED */ 978 static int 979 hdr_full_crypt_cons(void *vbuf, void *unused, int kmflag) 980 { 981 arc_buf_hdr_t *hdr = vbuf; 982 983 (void) hdr_full_cons(vbuf, unused, kmflag); 984 bzero(&hdr->b_crypt_hdr, sizeof (hdr->b_crypt_hdr)); 985 arc_space_consume(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); 986 987 return (0); 988 } 989 990 /* ARGSUSED */ 991 static int 992 hdr_l2only_cons(void *vbuf, void *unused, int kmflag) 993 { 994 arc_buf_hdr_t *hdr = vbuf; 995 996 bzero(hdr, HDR_L2ONLY_SIZE); 997 arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 998 999 return (0); 1000 } 1001 1002 /* ARGSUSED */ 1003 static int 1004 buf_cons(void *vbuf, void *unused, int kmflag) 1005 { 1006 arc_buf_t *buf = vbuf; 1007 1008 bzero(buf, sizeof (arc_buf_t)); 1009 mutex_init(&buf->b_evict_lock, NULL, MUTEX_DEFAULT, NULL); 1010 arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1011 1012 return (0); 1013 } 1014 1015 /* 1016 * Destructor callback - called when a cached buf is 1017 * no longer required. 1018 */ 1019 /* ARGSUSED */ 1020 static void 1021 hdr_full_dest(void *vbuf, void *unused) 1022 { 1023 arc_buf_hdr_t *hdr = vbuf; 1024 1025 ASSERT(HDR_EMPTY(hdr)); 1026 cv_destroy(&hdr->b_l1hdr.b_cv); 1027 zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt); 1028 mutex_destroy(&hdr->b_l1hdr.b_freeze_lock); 1029 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 1030 arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS); 1031 } 1032 1033 /* ARGSUSED */ 1034 static void 1035 hdr_full_crypt_dest(void *vbuf, void *unused) 1036 { 1037 arc_buf_hdr_t *hdr = vbuf; 1038 1039 hdr_full_dest(hdr, unused); 1040 arc_space_return(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); 1041 } 1042 1043 /* ARGSUSED */ 1044 static void 1045 hdr_l2only_dest(void *vbuf, void *unused) 1046 { 1047 arc_buf_hdr_t *hdr = vbuf; 1048 1049 ASSERT(HDR_EMPTY(hdr)); 1050 arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 1051 } 1052 1053 /* ARGSUSED */ 1054 static void 1055 buf_dest(void *vbuf, void *unused) 1056 { 1057 arc_buf_t *buf = vbuf; 1058 1059 mutex_destroy(&buf->b_evict_lock); 1060 arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1061 } 1062 1063 /* 1064 * Reclaim callback -- invoked when memory is low. 1065 */ 1066 /* ARGSUSED */ 1067 static void 1068 hdr_recl(void *unused) 1069 { 1070 dprintf("hdr_recl called\n"); 1071 /* 1072 * umem calls the reclaim func when we destroy the buf cache, 1073 * which is after we do arc_fini(). 1074 */ 1075 if (arc_initialized) 1076 zthr_wakeup(arc_reap_zthr); 1077 } 1078 1079 static void 1080 buf_init(void) 1081 { 1082 uint64_t *ct; 1083 uint64_t hsize = 1ULL << 12; 1084 int i, j; 1085 1086 /* 1087 * The hash table is big enough to fill all of physical memory 1088 * with an average block size of zfs_arc_average_blocksize (default 8K). 1089 * By default, the table will take up 1090 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers). 1091 */ 1092 while (hsize * zfs_arc_average_blocksize < physmem * PAGESIZE) 1093 hsize <<= 1; 1094 retry: 1095 buf_hash_table.ht_mask = hsize - 1; 1096 buf_hash_table.ht_table = 1097 kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP); 1098 if (buf_hash_table.ht_table == NULL) { 1099 ASSERT(hsize > (1ULL << 8)); 1100 hsize >>= 1; 1101 goto retry; 1102 } 1103 1104 hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE, 1105 0, hdr_full_cons, hdr_full_dest, hdr_recl, NULL, NULL, 0); 1106 hdr_full_crypt_cache = kmem_cache_create("arc_buf_hdr_t_full_crypt", 1107 HDR_FULL_CRYPT_SIZE, 0, hdr_full_crypt_cons, hdr_full_crypt_dest, 1108 hdr_recl, NULL, NULL, 0); 1109 hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only", 1110 HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, hdr_recl, 1111 NULL, NULL, 0); 1112 buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t), 1113 0, buf_cons, buf_dest, NULL, NULL, NULL, 0); 1114 1115 for (i = 0; i < 256; i++) 1116 for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--) 1117 *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY); 1118 1119 for (i = 0; i < BUF_LOCKS; i++) { 1120 mutex_init(&buf_hash_table.ht_locks[i].ht_lock, 1121 NULL, MUTEX_DEFAULT, NULL); 1122 } 1123 } 1124 1125 /* 1126 * This is the size that the buf occupies in memory. If the buf is compressed, 1127 * it will correspond to the compressed size. You should use this method of 1128 * getting the buf size unless you explicitly need the logical size. 1129 */ 1130 int32_t 1131 arc_buf_size(arc_buf_t *buf) 1132 { 1133 return (ARC_BUF_COMPRESSED(buf) ? 1134 HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr)); 1135 } 1136 1137 int32_t 1138 arc_buf_lsize(arc_buf_t *buf) 1139 { 1140 return (HDR_GET_LSIZE(buf->b_hdr)); 1141 } 1142 1143 /* 1144 * This function will return B_TRUE if the buffer is encrypted in memory. 1145 * This buffer can be decrypted by calling arc_untransform(). 1146 */ 1147 boolean_t 1148 arc_is_encrypted(arc_buf_t *buf) 1149 { 1150 return (ARC_BUF_ENCRYPTED(buf) != 0); 1151 } 1152 1153 /* 1154 * Returns B_TRUE if the buffer represents data that has not had its MAC 1155 * verified yet. 1156 */ 1157 boolean_t 1158 arc_is_unauthenticated(arc_buf_t *buf) 1159 { 1160 return (HDR_NOAUTH(buf->b_hdr) != 0); 1161 } 1162 1163 void 1164 arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt, 1165 uint8_t *iv, uint8_t *mac) 1166 { 1167 arc_buf_hdr_t *hdr = buf->b_hdr; 1168 1169 ASSERT(HDR_PROTECTED(hdr)); 1170 1171 bcopy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 1172 bcopy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 1173 bcopy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 1174 *byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 1175 /* CONSTCOND */ 1176 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 1177 } 1178 1179 /* 1180 * Indicates how this buffer is compressed in memory. If it is not compressed 1181 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with 1182 * arc_untransform() as long as it is also unencrypted. 1183 */ 1184 enum zio_compress 1185 arc_get_compression(arc_buf_t *buf) 1186 { 1187 return (ARC_BUF_COMPRESSED(buf) ? 1188 HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF); 1189 } 1190 1191 #define ARC_MINTIME (hz>>4) /* 62 ms */ 1192 1193 /* 1194 * Return the compression algorithm used to store this data in the ARC. If ARC 1195 * compression is enabled or this is an encrypted block, this will be the same 1196 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF. 1197 */ 1198 static inline enum zio_compress 1199 arc_hdr_get_compress(arc_buf_hdr_t *hdr) 1200 { 1201 return (HDR_COMPRESSION_ENABLED(hdr) ? 1202 HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF); 1203 } 1204 1205 static inline boolean_t 1206 arc_buf_is_shared(arc_buf_t *buf) 1207 { 1208 boolean_t shared = (buf->b_data != NULL && 1209 buf->b_hdr->b_l1hdr.b_pabd != NULL && 1210 abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) && 1211 buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd)); 1212 IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr)); 1213 IMPLY(shared, ARC_BUF_SHARED(buf)); 1214 IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf)); 1215 1216 /* 1217 * It would be nice to assert arc_can_share() too, but the "hdr isn't 1218 * already being shared" requirement prevents us from doing that. 1219 */ 1220 1221 return (shared); 1222 } 1223 1224 /* 1225 * Free the checksum associated with this header. If there is no checksum, this 1226 * is a no-op. 1227 */ 1228 static inline void 1229 arc_cksum_free(arc_buf_hdr_t *hdr) 1230 { 1231 ASSERT(HDR_HAS_L1HDR(hdr)); 1232 1233 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1234 if (hdr->b_l1hdr.b_freeze_cksum != NULL) { 1235 kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t)); 1236 hdr->b_l1hdr.b_freeze_cksum = NULL; 1237 } 1238 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1239 } 1240 1241 /* 1242 * Return true iff at least one of the bufs on hdr is not compressed. 1243 * Encrypted buffers count as compressed. 1244 */ 1245 static boolean_t 1246 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr) 1247 { 1248 ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr)); 1249 1250 for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) { 1251 if (!ARC_BUF_COMPRESSED(b)) { 1252 return (B_TRUE); 1253 } 1254 } 1255 return (B_FALSE); 1256 } 1257 1258 /* 1259 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data 1260 * matches the checksum that is stored in the hdr. If there is no checksum, 1261 * or if the buf is compressed, this is a no-op. 1262 */ 1263 static void 1264 arc_cksum_verify(arc_buf_t *buf) 1265 { 1266 arc_buf_hdr_t *hdr = buf->b_hdr; 1267 zio_cksum_t zc; 1268 1269 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1270 return; 1271 1272 if (ARC_BUF_COMPRESSED(buf)) 1273 return; 1274 1275 ASSERT(HDR_HAS_L1HDR(hdr)); 1276 1277 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1278 1279 if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) { 1280 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1281 return; 1282 } 1283 1284 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc); 1285 if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc)) 1286 panic("buffer modified while frozen!"); 1287 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1288 } 1289 1290 /* 1291 * This function makes the assumption that data stored in the L2ARC 1292 * will be transformed exactly as it is in the main pool. Because of 1293 * this we can verify the checksum against the reading process's bp. 1294 */ 1295 static boolean_t 1296 arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio) 1297 { 1298 enum zio_compress compress = BP_GET_COMPRESS(zio->io_bp); 1299 boolean_t valid_cksum; 1300 1301 ASSERT(!BP_IS_EMBEDDED(zio->io_bp)); 1302 VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr)); 1303 1304 /* 1305 * We rely on the blkptr's checksum to determine if the block 1306 * is valid or not. When compressed arc is enabled, the l2arc 1307 * writes the block to the l2arc just as it appears in the pool. 1308 * This allows us to use the blkptr's checksum to validate the 1309 * data that we just read off of the l2arc without having to store 1310 * a separate checksum in the arc_buf_hdr_t. However, if compressed 1311 * arc is disabled, then the data written to the l2arc is always 1312 * uncompressed and won't match the block as it exists in the main 1313 * pool. When this is the case, we must first compress it if it is 1314 * compressed on the main pool before we can validate the checksum. 1315 */ 1316 if (!HDR_COMPRESSION_ENABLED(hdr) && compress != ZIO_COMPRESS_OFF) { 1317 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1318 uint64_t lsize = HDR_GET_LSIZE(hdr); 1319 uint64_t csize; 1320 1321 abd_t *cdata = abd_alloc_linear(HDR_GET_PSIZE(hdr), B_TRUE); 1322 csize = zio_compress_data(compress, zio->io_abd, 1323 abd_to_buf(cdata), lsize); 1324 1325 ASSERT3U(csize, <=, HDR_GET_PSIZE(hdr)); 1326 if (csize < HDR_GET_PSIZE(hdr)) { 1327 /* 1328 * Compressed blocks are always a multiple of the 1329 * smallest ashift in the pool. Ideally, we would 1330 * like to round up the csize to the next 1331 * spa_min_ashift but that value may have changed 1332 * since the block was last written. Instead, 1333 * we rely on the fact that the hdr's psize 1334 * was set to the psize of the block when it was 1335 * last written. We set the csize to that value 1336 * and zero out any part that should not contain 1337 * data. 1338 */ 1339 abd_zero_off(cdata, csize, HDR_GET_PSIZE(hdr) - csize); 1340 csize = HDR_GET_PSIZE(hdr); 1341 } 1342 zio_push_transform(zio, cdata, csize, HDR_GET_PSIZE(hdr), NULL); 1343 } 1344 1345 /* 1346 * Block pointers always store the checksum for the logical data. 1347 * If the block pointer has the gang bit set, then the checksum 1348 * it represents is for the reconstituted data and not for an 1349 * individual gang member. The zio pipeline, however, must be able to 1350 * determine the checksum of each of the gang constituents so it 1351 * treats the checksum comparison differently than what we need 1352 * for l2arc blocks. This prevents us from using the 1353 * zio_checksum_error() interface directly. Instead we must call the 1354 * zio_checksum_error_impl() so that we can ensure the checksum is 1355 * generated using the correct checksum algorithm and accounts for the 1356 * logical I/O size and not just a gang fragment. 1357 */ 1358 valid_cksum = (zio_checksum_error_impl(zio->io_spa, zio->io_bp, 1359 BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size, 1360 zio->io_offset, NULL) == 0); 1361 zio_pop_transforms(zio); 1362 return (valid_cksum); 1363 } 1364 1365 /* 1366 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a 1367 * checksum and attaches it to the buf's hdr so that we can ensure that the buf 1368 * isn't modified later on. If buf is compressed or there is already a checksum 1369 * on the hdr, this is a no-op (we only checksum uncompressed bufs). 1370 */ 1371 static void 1372 arc_cksum_compute(arc_buf_t *buf) 1373 { 1374 arc_buf_hdr_t *hdr = buf->b_hdr; 1375 1376 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1377 return; 1378 1379 ASSERT(HDR_HAS_L1HDR(hdr)); 1380 1381 mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock); 1382 if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { 1383 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1384 return; 1385 } 1386 1387 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 1388 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1389 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), 1390 KM_SLEEP); 1391 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, 1392 hdr->b_l1hdr.b_freeze_cksum); 1393 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1394 arc_buf_watch(buf); 1395 } 1396 1397 #ifndef _KERNEL 1398 typedef struct procctl { 1399 long cmd; 1400 prwatch_t prwatch; 1401 } procctl_t; 1402 #endif 1403 1404 /* ARGSUSED */ 1405 static void 1406 arc_buf_unwatch(arc_buf_t *buf) 1407 { 1408 #ifndef _KERNEL 1409 if (arc_watch) { 1410 int result; 1411 procctl_t ctl; 1412 ctl.cmd = PCWATCH; 1413 ctl.prwatch.pr_vaddr = (uintptr_t)buf->b_data; 1414 ctl.prwatch.pr_size = 0; 1415 ctl.prwatch.pr_wflags = 0; 1416 result = write(arc_procfd, &ctl, sizeof (ctl)); 1417 ASSERT3U(result, ==, sizeof (ctl)); 1418 } 1419 #endif 1420 } 1421 1422 /* ARGSUSED */ 1423 static void 1424 arc_buf_watch(arc_buf_t *buf) 1425 { 1426 #ifndef _KERNEL 1427 if (arc_watch) { 1428 int result; 1429 procctl_t ctl; 1430 ctl.cmd = PCWATCH; 1431 ctl.prwatch.pr_vaddr = (uintptr_t)buf->b_data; 1432 ctl.prwatch.pr_size = arc_buf_size(buf); 1433 ctl.prwatch.pr_wflags = WA_WRITE; 1434 result = write(arc_procfd, &ctl, sizeof (ctl)); 1435 ASSERT3U(result, ==, sizeof (ctl)); 1436 } 1437 #endif 1438 } 1439 1440 static arc_buf_contents_t 1441 arc_buf_type(arc_buf_hdr_t *hdr) 1442 { 1443 arc_buf_contents_t type; 1444 if (HDR_ISTYPE_METADATA(hdr)) { 1445 type = ARC_BUFC_METADATA; 1446 } else { 1447 type = ARC_BUFC_DATA; 1448 } 1449 VERIFY3U(hdr->b_type, ==, type); 1450 return (type); 1451 } 1452 1453 boolean_t 1454 arc_is_metadata(arc_buf_t *buf) 1455 { 1456 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); 1457 } 1458 1459 static uint32_t 1460 arc_bufc_to_flags(arc_buf_contents_t type) 1461 { 1462 switch (type) { 1463 case ARC_BUFC_DATA: 1464 /* metadata field is 0 if buffer contains normal data */ 1465 return (0); 1466 case ARC_BUFC_METADATA: 1467 return (ARC_FLAG_BUFC_METADATA); 1468 default: 1469 break; 1470 } 1471 panic("undefined ARC buffer type!"); 1472 return ((uint32_t)-1); 1473 } 1474 1475 void 1476 arc_buf_thaw(arc_buf_t *buf) 1477 { 1478 arc_buf_hdr_t *hdr = buf->b_hdr; 1479 1480 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 1481 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 1482 1483 arc_cksum_verify(buf); 1484 1485 /* 1486 * Compressed buffers do not manipulate the b_freeze_cksum. 1487 */ 1488 if (ARC_BUF_COMPRESSED(buf)) 1489 return; 1490 1491 ASSERT(HDR_HAS_L1HDR(hdr)); 1492 arc_cksum_free(hdr); 1493 1494 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1495 #ifdef ZFS_DEBUG 1496 if (zfs_flags & ZFS_DEBUG_MODIFY) { 1497 if (hdr->b_l1hdr.b_thawed != NULL) 1498 kmem_free(hdr->b_l1hdr.b_thawed, 1); 1499 hdr->b_l1hdr.b_thawed = kmem_alloc(1, KM_SLEEP); 1500 } 1501 #endif 1502 1503 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1504 1505 arc_buf_unwatch(buf); 1506 } 1507 1508 void 1509 arc_buf_freeze(arc_buf_t *buf) 1510 { 1511 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1512 return; 1513 1514 if (ARC_BUF_COMPRESSED(buf)) 1515 return; 1516 1517 ASSERT(HDR_HAS_L1HDR(buf->b_hdr)); 1518 arc_cksum_compute(buf); 1519 } 1520 1521 /* 1522 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead, 1523 * the following functions should be used to ensure that the flags are 1524 * updated in a thread-safe way. When manipulating the flags either 1525 * the hash_lock must be held or the hdr must be undiscoverable. This 1526 * ensures that we're not racing with any other threads when updating 1527 * the flags. 1528 */ 1529 static inline void 1530 arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1531 { 1532 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1533 hdr->b_flags |= flags; 1534 } 1535 1536 static inline void 1537 arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1538 { 1539 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1540 hdr->b_flags &= ~flags; 1541 } 1542 1543 /* 1544 * Setting the compression bits in the arc_buf_hdr_t's b_flags is 1545 * done in a special way since we have to clear and set bits 1546 * at the same time. Consumers that wish to set the compression bits 1547 * must use this function to ensure that the flags are updated in 1548 * thread-safe manner. 1549 */ 1550 static void 1551 arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp) 1552 { 1553 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1554 1555 /* 1556 * Holes and embedded blocks will always have a psize = 0 so 1557 * we ignore the compression of the blkptr and set the 1558 * arc_buf_hdr_t's compression to ZIO_COMPRESS_OFF. 1559 * Holes and embedded blocks remain anonymous so we don't 1560 * want to uncompress them. Mark them as uncompressed. 1561 */ 1562 if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) { 1563 arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1564 ASSERT(!HDR_COMPRESSION_ENABLED(hdr)); 1565 } else { 1566 arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1567 ASSERT(HDR_COMPRESSION_ENABLED(hdr)); 1568 } 1569 1570 HDR_SET_COMPRESS(hdr, cmp); 1571 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp); 1572 } 1573 1574 /* 1575 * Looks for another buf on the same hdr which has the data decompressed, copies 1576 * from it, and returns true. If no such buf exists, returns false. 1577 */ 1578 static boolean_t 1579 arc_buf_try_copy_decompressed_data(arc_buf_t *buf) 1580 { 1581 arc_buf_hdr_t *hdr = buf->b_hdr; 1582 boolean_t copied = B_FALSE; 1583 1584 ASSERT(HDR_HAS_L1HDR(hdr)); 1585 ASSERT3P(buf->b_data, !=, NULL); 1586 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1587 1588 for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL; 1589 from = from->b_next) { 1590 /* can't use our own data buffer */ 1591 if (from == buf) { 1592 continue; 1593 } 1594 1595 if (!ARC_BUF_COMPRESSED(from)) { 1596 bcopy(from->b_data, buf->b_data, arc_buf_size(buf)); 1597 copied = B_TRUE; 1598 break; 1599 } 1600 } 1601 1602 /* 1603 * Note: With encryption support, the following assertion is no longer 1604 * necessarily valid. If we receive two back to back raw snapshots 1605 * (send -w), the second receive can use a hdr with a cksum already 1606 * calculated. This happens via: 1607 * dmu_recv_stream() -> receive_read_record() -> arc_loan_raw_buf() 1608 * The rsend/send_mixed_raw test case exercises this code path. 1609 * 1610 * There were no decompressed bufs, so there should not be a 1611 * checksum on the hdr either. 1612 * EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); 1613 */ 1614 1615 return (copied); 1616 } 1617 1618 /* 1619 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. 1620 */ 1621 static uint64_t 1622 arc_hdr_size(arc_buf_hdr_t *hdr) 1623 { 1624 uint64_t size; 1625 1626 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 1627 HDR_GET_PSIZE(hdr) > 0) { 1628 size = HDR_GET_PSIZE(hdr); 1629 } else { 1630 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); 1631 size = HDR_GET_LSIZE(hdr); 1632 } 1633 return (size); 1634 } 1635 1636 static int 1637 arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) 1638 { 1639 int ret; 1640 uint64_t csize; 1641 uint64_t lsize = HDR_GET_LSIZE(hdr); 1642 uint64_t psize = HDR_GET_PSIZE(hdr); 1643 void *tmpbuf = NULL; 1644 abd_t *abd = hdr->b_l1hdr.b_pabd; 1645 1646 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1647 ASSERT(HDR_AUTHENTICATED(hdr)); 1648 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1649 1650 /* 1651 * The MAC is calculated on the compressed data that is stored on disk. 1652 * However, if compressed arc is disabled we will only have the 1653 * decompressed data available to us now. Compress it into a temporary 1654 * abd so we can verify the MAC. The performance overhead of this will 1655 * be relatively low, since most objects in an encrypted objset will 1656 * be encrypted (instead of authenticated) anyway. 1657 */ 1658 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1659 !HDR_COMPRESSION_ENABLED(hdr)) { 1660 tmpbuf = zio_buf_alloc(lsize); 1661 abd = abd_get_from_buf(tmpbuf, lsize); 1662 abd_take_ownership_of_buf(abd, B_TRUE); 1663 1664 csize = zio_compress_data(HDR_GET_COMPRESS(hdr), 1665 hdr->b_l1hdr.b_pabd, tmpbuf, lsize); 1666 ASSERT3U(csize, <=, psize); 1667 abd_zero_off(abd, csize, psize - csize); 1668 } 1669 1670 /* 1671 * Authentication is best effort. We authenticate whenever the key is 1672 * available. If we succeed we clear ARC_FLAG_NOAUTH. 1673 */ 1674 if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { 1675 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1676 ASSERT3U(lsize, ==, psize); 1677 ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, 1678 psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1679 } else { 1680 ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, 1681 hdr->b_crypt_hdr.b_mac); 1682 } 1683 1684 if (ret == 0) 1685 arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); 1686 else if (ret != ENOENT) 1687 goto error; 1688 1689 if (tmpbuf != NULL) 1690 abd_free(abd); 1691 1692 return (0); 1693 1694 error: 1695 if (tmpbuf != NULL) 1696 abd_free(abd); 1697 1698 return (ret); 1699 } 1700 1701 /* 1702 * This function will take a header that only has raw encrypted data in 1703 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in 1704 * b_l1hdr.b_pabd. If designated in the header flags, this function will 1705 * also decompress the data. 1706 */ 1707 static int 1708 arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) 1709 { 1710 int ret; 1711 abd_t *cabd = NULL; 1712 void *tmp = NULL; 1713 boolean_t no_crypt = B_FALSE; 1714 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1715 1716 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1717 ASSERT(HDR_ENCRYPTED(hdr)); 1718 1719 arc_hdr_alloc_pabd(hdr, ARC_HDR_DO_ADAPT); 1720 1721 ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, 1722 B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, 1723 hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, 1724 hdr->b_crypt_hdr.b_rabd, &no_crypt); 1725 if (ret != 0) 1726 goto error; 1727 1728 if (no_crypt) { 1729 abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, 1730 HDR_GET_PSIZE(hdr)); 1731 } 1732 1733 /* 1734 * If this header has disabled arc compression but the b_pabd is 1735 * compressed after decrypting it, we need to decompress the newly 1736 * decrypted data. 1737 */ 1738 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1739 !HDR_COMPRESSION_ENABLED(hdr)) { 1740 /* 1741 * We want to make sure that we are correctly honoring the 1742 * zfs_abd_scatter_enabled setting, so we allocate an abd here 1743 * and then loan a buffer from it, rather than allocating a 1744 * linear buffer and wrapping it in an abd later. 1745 */ 1746 cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, B_TRUE); 1747 tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 1748 1749 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1750 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 1751 HDR_GET_LSIZE(hdr)); 1752 if (ret != 0) { 1753 abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); 1754 goto error; 1755 } 1756 1757 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 1758 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 1759 arc_hdr_size(hdr), hdr); 1760 hdr->b_l1hdr.b_pabd = cabd; 1761 } 1762 1763 return (0); 1764 1765 error: 1766 arc_hdr_free_pabd(hdr, B_FALSE); 1767 if (cabd != NULL) 1768 arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); 1769 1770 return (ret); 1771 } 1772 1773 /* 1774 * This function is called during arc_buf_fill() to prepare the header's 1775 * abd plaintext pointer for use. This involves authenticated protected 1776 * data and decrypting encrypted data into the plaintext abd. 1777 */ 1778 static int 1779 arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, 1780 const zbookmark_phys_t *zb, boolean_t noauth) 1781 { 1782 int ret; 1783 1784 ASSERT(HDR_PROTECTED(hdr)); 1785 1786 if (hash_lock != NULL) 1787 mutex_enter(hash_lock); 1788 1789 if (HDR_NOAUTH(hdr) && !noauth) { 1790 /* 1791 * The caller requested authenticated data but our data has 1792 * not been authenticated yet. Verify the MAC now if we can. 1793 */ 1794 ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); 1795 if (ret != 0) 1796 goto error; 1797 } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { 1798 /* 1799 * If we only have the encrypted version of the data, but the 1800 * unencrypted version was requested we take this opportunity 1801 * to store the decrypted version in the header for future use. 1802 */ 1803 ret = arc_hdr_decrypt(hdr, spa, zb); 1804 if (ret != 0) 1805 goto error; 1806 } 1807 1808 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1809 1810 if (hash_lock != NULL) 1811 mutex_exit(hash_lock); 1812 1813 return (0); 1814 1815 error: 1816 if (hash_lock != NULL) 1817 mutex_exit(hash_lock); 1818 1819 return (ret); 1820 } 1821 1822 /* 1823 * This function is used by the dbuf code to decrypt bonus buffers in place. 1824 * The dbuf code itself doesn't have any locking for decrypting a shared dnode 1825 * block, so we use the hash lock here to protect against concurrent calls to 1826 * arc_buf_fill(). 1827 */ 1828 /* ARGSUSED */ 1829 static void 1830 arc_buf_untransform_in_place(arc_buf_t *buf, kmutex_t *hash_lock) 1831 { 1832 arc_buf_hdr_t *hdr = buf->b_hdr; 1833 1834 ASSERT(HDR_ENCRYPTED(hdr)); 1835 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1836 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1837 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1838 1839 zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, 1840 arc_buf_size(buf)); 1841 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 1842 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 1843 hdr->b_crypt_hdr.b_ebufcnt -= 1; 1844 } 1845 1846 /* 1847 * Given a buf that has a data buffer attached to it, this function will 1848 * efficiently fill the buf with data of the specified compression setting from 1849 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr 1850 * are already sharing a data buf, no copy is performed. 1851 * 1852 * If the buf is marked as compressed but uncompressed data was requested, this 1853 * will allocate a new data buffer for the buf, remove that flag, and fill the 1854 * buf with uncompressed data. You can't request a compressed buf on a hdr with 1855 * uncompressed data, and (since we haven't added support for it yet) if you 1856 * want compressed data your buf must already be marked as compressed and have 1857 * the correct-sized data buffer. 1858 */ 1859 static int 1860 arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 1861 arc_fill_flags_t flags) 1862 { 1863 int error = 0; 1864 arc_buf_hdr_t *hdr = buf->b_hdr; 1865 boolean_t hdr_compressed = 1866 (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 1867 boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; 1868 boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; 1869 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; 1870 kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? NULL : HDR_LOCK(hdr); 1871 1872 ASSERT3P(buf->b_data, !=, NULL); 1873 IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); 1874 IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); 1875 IMPLY(encrypted, HDR_ENCRYPTED(hdr)); 1876 IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); 1877 IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); 1878 IMPLY(encrypted, !ARC_BUF_SHARED(buf)); 1879 1880 /* 1881 * If the caller wanted encrypted data we just need to copy it from 1882 * b_rabd and potentially byteswap it. We won't be able to do any 1883 * further transforms on it. 1884 */ 1885 if (encrypted) { 1886 ASSERT(HDR_HAS_RABD(hdr)); 1887 abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, 1888 HDR_GET_PSIZE(hdr)); 1889 goto byteswap; 1890 } 1891 1892 /* 1893 * Adjust encrypted and authenticated headers to accomodate 1894 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are 1895 * allowed to fail decryption due to keys not being loaded 1896 * without being marked as an IO error. 1897 */ 1898 if (HDR_PROTECTED(hdr)) { 1899 error = arc_fill_hdr_crypt(hdr, hash_lock, spa, 1900 zb, !!(flags & ARC_FILL_NOAUTH)); 1901 if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { 1902 return (error); 1903 } else if (error != 0) { 1904 if (hash_lock != NULL) 1905 mutex_enter(hash_lock); 1906 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 1907 if (hash_lock != NULL) 1908 mutex_exit(hash_lock); 1909 return (error); 1910 } 1911 } 1912 1913 /* 1914 * There is a special case here for dnode blocks which are 1915 * decrypting their bonus buffers. These blocks may request to 1916 * be decrypted in-place. This is necessary because there may 1917 * be many dnodes pointing into this buffer and there is 1918 * currently no method to synchronize replacing the backing 1919 * b_data buffer and updating all of the pointers. Here we use 1920 * the hash lock to ensure there are no races. If the need 1921 * arises for other types to be decrypted in-place, they must 1922 * add handling here as well. 1923 */ 1924 if ((flags & ARC_FILL_IN_PLACE) != 0) { 1925 ASSERT(!hdr_compressed); 1926 ASSERT(!compressed); 1927 ASSERT(!encrypted); 1928 1929 if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { 1930 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1931 1932 if (hash_lock != NULL) 1933 mutex_enter(hash_lock); 1934 arc_buf_untransform_in_place(buf, hash_lock); 1935 if (hash_lock != NULL) 1936 mutex_exit(hash_lock); 1937 1938 /* Compute the hdr's checksum if necessary */ 1939 arc_cksum_compute(buf); 1940 } 1941 1942 return (0); 1943 } 1944 1945 if (hdr_compressed == compressed) { 1946 if (!arc_buf_is_shared(buf)) { 1947 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, 1948 arc_buf_size(buf)); 1949 } 1950 } else { 1951 ASSERT(hdr_compressed); 1952 ASSERT(!compressed); 1953 ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr)); 1954 1955 /* 1956 * If the buf is sharing its data with the hdr, unlink it and 1957 * allocate a new data buffer for the buf. 1958 */ 1959 if (arc_buf_is_shared(buf)) { 1960 ASSERT(ARC_BUF_COMPRESSED(buf)); 1961 1962 /* We need to give the buf its own b_data */ 1963 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 1964 buf->b_data = 1965 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 1966 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 1967 1968 /* Previously overhead was 0; just add new overhead */ 1969 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); 1970 } else if (ARC_BUF_COMPRESSED(buf)) { 1971 /* We need to reallocate the buf's b_data */ 1972 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), 1973 buf); 1974 buf->b_data = 1975 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 1976 1977 /* We increased the size of b_data; update overhead */ 1978 ARCSTAT_INCR(arcstat_overhead_size, 1979 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); 1980 } 1981 1982 /* 1983 * Regardless of the buf's previous compression settings, it 1984 * should not be compressed at the end of this function. 1985 */ 1986 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 1987 1988 /* 1989 * Try copying the data from another buf which already has a 1990 * decompressed version. If that's not possible, it's time to 1991 * bite the bullet and decompress the data from the hdr. 1992 */ 1993 if (arc_buf_try_copy_decompressed_data(buf)) { 1994 /* Skip byteswapping and checksumming (already done) */ 1995 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, !=, NULL); 1996 return (0); 1997 } else { 1998 error = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1999 hdr->b_l1hdr.b_pabd, buf->b_data, 2000 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2001 2002 /* 2003 * Absent hardware errors or software bugs, this should 2004 * be impossible, but log it anyway so we can debug it. 2005 */ 2006 if (error != 0) { 2007 zfs_dbgmsg( 2008 "hdr %p, compress %d, psize %d, lsize %d", 2009 hdr, arc_hdr_get_compress(hdr), 2010 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2011 if (hash_lock != NULL) 2012 mutex_enter(hash_lock); 2013 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2014 if (hash_lock != NULL) 2015 mutex_exit(hash_lock); 2016 return (SET_ERROR(EIO)); 2017 } 2018 } 2019 } 2020 2021 byteswap: 2022 /* Byteswap the buf's data if necessary */ 2023 if (bswap != DMU_BSWAP_NUMFUNCS) { 2024 ASSERT(!HDR_SHARED_DATA(hdr)); 2025 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); 2026 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); 2027 } 2028 2029 /* Compute the hdr's checksum if necessary */ 2030 arc_cksum_compute(buf); 2031 2032 return (0); 2033 } 2034 2035 /* 2036 * If this function is being called to decrypt an encrypted buffer or verify an 2037 * authenticated one, the key must be loaded and a mapping must be made 2038 * available in the keystore via spa_keystore_create_mapping() or one of its 2039 * callers. 2040 */ 2041 int 2042 arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2043 boolean_t in_place) 2044 { 2045 int ret; 2046 arc_fill_flags_t flags = 0; 2047 2048 if (in_place) 2049 flags |= ARC_FILL_IN_PLACE; 2050 2051 ret = arc_buf_fill(buf, spa, zb, flags); 2052 if (ret == ECKSUM) { 2053 /* 2054 * Convert authentication and decryption errors to EIO 2055 * (and generate an ereport) before leaving the ARC. 2056 */ 2057 ret = SET_ERROR(EIO); 2058 spa_log_error(spa, zb); 2059 (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, 2060 spa, NULL, zb, NULL, 0, 0); 2061 } 2062 2063 return (ret); 2064 } 2065 2066 /* 2067 * Increment the amount of evictable space in the arc_state_t's refcount. 2068 * We account for the space used by the hdr and the arc buf individually 2069 * so that we can add and remove them from the refcount individually. 2070 */ 2071 static void 2072 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) 2073 { 2074 arc_buf_contents_t type = arc_buf_type(hdr); 2075 2076 ASSERT(HDR_HAS_L1HDR(hdr)); 2077 2078 if (GHOST_STATE(state)) { 2079 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2080 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2081 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2082 ASSERT(!HDR_HAS_RABD(hdr)); 2083 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2084 HDR_GET_LSIZE(hdr), hdr); 2085 return; 2086 } 2087 2088 ASSERT(!GHOST_STATE(state)); 2089 if (hdr->b_l1hdr.b_pabd != NULL) { 2090 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2091 arc_hdr_size(hdr), hdr); 2092 } 2093 if (HDR_HAS_RABD(hdr)) { 2094 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2095 HDR_GET_PSIZE(hdr), hdr); 2096 } 2097 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2098 buf = buf->b_next) { 2099 if (arc_buf_is_shared(buf)) 2100 continue; 2101 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2102 arc_buf_size(buf), buf); 2103 } 2104 } 2105 2106 /* 2107 * Decrement the amount of evictable space in the arc_state_t's refcount. 2108 * We account for the space used by the hdr and the arc buf individually 2109 * so that we can add and remove them from the refcount individually. 2110 */ 2111 static void 2112 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) 2113 { 2114 arc_buf_contents_t type = arc_buf_type(hdr); 2115 2116 ASSERT(HDR_HAS_L1HDR(hdr)); 2117 2118 if (GHOST_STATE(state)) { 2119 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2120 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2121 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2122 ASSERT(!HDR_HAS_RABD(hdr)); 2123 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2124 HDR_GET_LSIZE(hdr), hdr); 2125 return; 2126 } 2127 2128 ASSERT(!GHOST_STATE(state)); 2129 if (hdr->b_l1hdr.b_pabd != NULL) { 2130 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2131 arc_hdr_size(hdr), hdr); 2132 } 2133 if (HDR_HAS_RABD(hdr)) { 2134 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2135 HDR_GET_PSIZE(hdr), hdr); 2136 } 2137 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2138 buf = buf->b_next) { 2139 if (arc_buf_is_shared(buf)) 2140 continue; 2141 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2142 arc_buf_size(buf), buf); 2143 } 2144 } 2145 2146 /* 2147 * Add a reference to this hdr indicating that someone is actively 2148 * referencing that memory. When the refcount transitions from 0 to 1, 2149 * we remove it from the respective arc_state_t list to indicate that 2150 * it is not evictable. 2151 */ 2152 static void 2153 add_reference(arc_buf_hdr_t *hdr, void *tag) 2154 { 2155 ASSERT(HDR_HAS_L1HDR(hdr)); 2156 if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { 2157 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 2158 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2159 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2160 } 2161 2162 arc_state_t *state = hdr->b_l1hdr.b_state; 2163 2164 if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && 2165 (state != arc_anon)) { 2166 /* We don't use the L2-only state list. */ 2167 if (state != arc_l2c_only) { 2168 multilist_remove(state->arcs_list[arc_buf_type(hdr)], 2169 hdr); 2170 arc_evictable_space_decrement(hdr, state); 2171 } 2172 /* remove the prefetch flag if we get a reference */ 2173 arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH); 2174 } 2175 } 2176 2177 /* 2178 * Remove a reference from this hdr. When the reference transitions from 2179 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's 2180 * list making it eligible for eviction. 2181 */ 2182 static int 2183 remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag) 2184 { 2185 int cnt; 2186 arc_state_t *state = hdr->b_l1hdr.b_state; 2187 2188 ASSERT(HDR_HAS_L1HDR(hdr)); 2189 ASSERT(state == arc_anon || MUTEX_HELD(hash_lock)); 2190 ASSERT(!GHOST_STATE(state)); 2191 2192 /* 2193 * arc_l2c_only counts as a ghost state so we don't need to explicitly 2194 * check to prevent usage of the arc_l2c_only list. 2195 */ 2196 if (((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) && 2197 (state != arc_anon)) { 2198 multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr); 2199 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 2200 arc_evictable_space_increment(hdr, state); 2201 } 2202 return (cnt); 2203 } 2204 2205 /* 2206 * Move the supplied buffer to the indicated state. The hash lock 2207 * for the buffer must be held by the caller. 2208 */ 2209 static void 2210 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr, 2211 kmutex_t *hash_lock) 2212 { 2213 arc_state_t *old_state; 2214 int64_t refcnt; 2215 uint32_t bufcnt; 2216 boolean_t update_old, update_new; 2217 arc_buf_contents_t buftype = arc_buf_type(hdr); 2218 2219 /* 2220 * We almost always have an L1 hdr here, since we call arc_hdr_realloc() 2221 * in arc_read() when bringing a buffer out of the L2ARC. However, the 2222 * L1 hdr doesn't always exist when we change state to arc_anon before 2223 * destroying a header, in which case reallocating to add the L1 hdr is 2224 * pointless. 2225 */ 2226 if (HDR_HAS_L1HDR(hdr)) { 2227 old_state = hdr->b_l1hdr.b_state; 2228 refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); 2229 bufcnt = hdr->b_l1hdr.b_bufcnt; 2230 2231 update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL || 2232 HDR_HAS_RABD(hdr)); 2233 } else { 2234 old_state = arc_l2c_only; 2235 refcnt = 0; 2236 bufcnt = 0; 2237 update_old = B_FALSE; 2238 } 2239 update_new = update_old; 2240 2241 ASSERT(MUTEX_HELD(hash_lock)); 2242 ASSERT3P(new_state, !=, old_state); 2243 ASSERT(!GHOST_STATE(new_state) || bufcnt == 0); 2244 ASSERT(old_state != arc_anon || bufcnt <= 1); 2245 2246 /* 2247 * If this buffer is evictable, transfer it from the 2248 * old state list to the new state list. 2249 */ 2250 if (refcnt == 0) { 2251 if (old_state != arc_anon && old_state != arc_l2c_only) { 2252 ASSERT(HDR_HAS_L1HDR(hdr)); 2253 multilist_remove(old_state->arcs_list[buftype], hdr); 2254 2255 if (GHOST_STATE(old_state)) { 2256 ASSERT0(bufcnt); 2257 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2258 update_old = B_TRUE; 2259 } 2260 arc_evictable_space_decrement(hdr, old_state); 2261 } 2262 if (new_state != arc_anon && new_state != arc_l2c_only) { 2263 2264 /* 2265 * An L1 header always exists here, since if we're 2266 * moving to some L1-cached state (i.e. not l2c_only or 2267 * anonymous), we realloc the header to add an L1hdr 2268 * beforehand. 2269 */ 2270 ASSERT(HDR_HAS_L1HDR(hdr)); 2271 multilist_insert(new_state->arcs_list[buftype], hdr); 2272 2273 if (GHOST_STATE(new_state)) { 2274 ASSERT0(bufcnt); 2275 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2276 update_new = B_TRUE; 2277 } 2278 arc_evictable_space_increment(hdr, new_state); 2279 } 2280 } 2281 2282 ASSERT(!HDR_EMPTY(hdr)); 2283 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) 2284 buf_hash_remove(hdr); 2285 2286 /* adjust state sizes (ignore arc_l2c_only) */ 2287 2288 if (update_new && new_state != arc_l2c_only) { 2289 ASSERT(HDR_HAS_L1HDR(hdr)); 2290 if (GHOST_STATE(new_state)) { 2291 ASSERT0(bufcnt); 2292 2293 /* 2294 * When moving a header to a ghost state, we first 2295 * remove all arc buffers. Thus, we'll have a 2296 * bufcnt of zero, and no arc buffer to use for 2297 * the reference. As a result, we use the arc 2298 * header pointer for the reference. 2299 */ 2300 (void) zfs_refcount_add_many(&new_state->arcs_size, 2301 HDR_GET_LSIZE(hdr), hdr); 2302 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2303 ASSERT(!HDR_HAS_RABD(hdr)); 2304 } else { 2305 uint32_t buffers = 0; 2306 2307 /* 2308 * Each individual buffer holds a unique reference, 2309 * thus we must remove each of these references one 2310 * at a time. 2311 */ 2312 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2313 buf = buf->b_next) { 2314 ASSERT3U(bufcnt, !=, 0); 2315 buffers++; 2316 2317 /* 2318 * When the arc_buf_t is sharing the data 2319 * block with the hdr, the owner of the 2320 * reference belongs to the hdr. Only 2321 * add to the refcount if the arc_buf_t is 2322 * not shared. 2323 */ 2324 if (arc_buf_is_shared(buf)) 2325 continue; 2326 2327 (void) zfs_refcount_add_many( 2328 &new_state->arcs_size, 2329 arc_buf_size(buf), buf); 2330 } 2331 ASSERT3U(bufcnt, ==, buffers); 2332 2333 if (hdr->b_l1hdr.b_pabd != NULL) { 2334 (void) zfs_refcount_add_many( 2335 &new_state->arcs_size, 2336 arc_hdr_size(hdr), hdr); 2337 } 2338 2339 if (HDR_HAS_RABD(hdr)) { 2340 (void) zfs_refcount_add_many( 2341 &new_state->arcs_size, 2342 HDR_GET_PSIZE(hdr), hdr); 2343 } 2344 } 2345 } 2346 2347 if (update_old && old_state != arc_l2c_only) { 2348 ASSERT(HDR_HAS_L1HDR(hdr)); 2349 if (GHOST_STATE(old_state)) { 2350 ASSERT0(bufcnt); 2351 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2352 ASSERT(!HDR_HAS_RABD(hdr)); 2353 2354 /* 2355 * When moving a header off of a ghost state, 2356 * the header will not contain any arc buffers. 2357 * We use the arc header pointer for the reference 2358 * which is exactly what we did when we put the 2359 * header on the ghost state. 2360 */ 2361 2362 (void) zfs_refcount_remove_many(&old_state->arcs_size, 2363 HDR_GET_LSIZE(hdr), hdr); 2364 } else { 2365 uint32_t buffers = 0; 2366 2367 /* 2368 * Each individual buffer holds a unique reference, 2369 * thus we must remove each of these references one 2370 * at a time. 2371 */ 2372 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2373 buf = buf->b_next) { 2374 ASSERT3U(bufcnt, !=, 0); 2375 buffers++; 2376 2377 /* 2378 * When the arc_buf_t is sharing the data 2379 * block with the hdr, the owner of the 2380 * reference belongs to the hdr. Only 2381 * add to the refcount if the arc_buf_t is 2382 * not shared. 2383 */ 2384 if (arc_buf_is_shared(buf)) 2385 continue; 2386 2387 (void) zfs_refcount_remove_many( 2388 &old_state->arcs_size, arc_buf_size(buf), 2389 buf); 2390 } 2391 ASSERT3U(bufcnt, ==, buffers); 2392 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 2393 HDR_HAS_RABD(hdr)); 2394 2395 if (hdr->b_l1hdr.b_pabd != NULL) { 2396 (void) zfs_refcount_remove_many( 2397 &old_state->arcs_size, arc_hdr_size(hdr), 2398 hdr); 2399 } 2400 2401 if (HDR_HAS_RABD(hdr)) { 2402 (void) zfs_refcount_remove_many( 2403 &old_state->arcs_size, HDR_GET_PSIZE(hdr), 2404 hdr); 2405 } 2406 } 2407 } 2408 2409 if (HDR_HAS_L1HDR(hdr)) 2410 hdr->b_l1hdr.b_state = new_state; 2411 2412 /* 2413 * L2 headers should never be on the L2 state list since they don't 2414 * have L1 headers allocated. 2415 */ 2416 ASSERT(multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_DATA]) && 2417 multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_METADATA])); 2418 } 2419 2420 void 2421 arc_space_consume(uint64_t space, arc_space_type_t type) 2422 { 2423 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2424 2425 switch (type) { 2426 case ARC_SPACE_DATA: 2427 aggsum_add(&astat_data_size, space); 2428 break; 2429 case ARC_SPACE_META: 2430 aggsum_add(&astat_metadata_size, space); 2431 break; 2432 case ARC_SPACE_OTHER: 2433 aggsum_add(&astat_other_size, space); 2434 break; 2435 case ARC_SPACE_HDRS: 2436 aggsum_add(&astat_hdr_size, space); 2437 break; 2438 case ARC_SPACE_L2HDRS: 2439 aggsum_add(&astat_l2_hdr_size, space); 2440 break; 2441 } 2442 2443 if (type != ARC_SPACE_DATA) 2444 aggsum_add(&arc_meta_used, space); 2445 2446 aggsum_add(&arc_size, space); 2447 } 2448 2449 void 2450 arc_space_return(uint64_t space, arc_space_type_t type) 2451 { 2452 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2453 2454 switch (type) { 2455 case ARC_SPACE_DATA: 2456 aggsum_add(&astat_data_size, -space); 2457 break; 2458 case ARC_SPACE_META: 2459 aggsum_add(&astat_metadata_size, -space); 2460 break; 2461 case ARC_SPACE_OTHER: 2462 aggsum_add(&astat_other_size, -space); 2463 break; 2464 case ARC_SPACE_HDRS: 2465 aggsum_add(&astat_hdr_size, -space); 2466 break; 2467 case ARC_SPACE_L2HDRS: 2468 aggsum_add(&astat_l2_hdr_size, -space); 2469 break; 2470 } 2471 2472 if (type != ARC_SPACE_DATA) { 2473 ASSERT(aggsum_compare(&arc_meta_used, space) >= 0); 2474 /* 2475 * We use the upper bound here rather than the precise value 2476 * because the arc_meta_max value doesn't need to be 2477 * precise. It's only consumed by humans via arcstats. 2478 */ 2479 if (arc_meta_max < aggsum_upper_bound(&arc_meta_used)) 2480 arc_meta_max = aggsum_upper_bound(&arc_meta_used); 2481 aggsum_add(&arc_meta_used, -space); 2482 } 2483 2484 ASSERT(aggsum_compare(&arc_size, space) >= 0); 2485 aggsum_add(&arc_size, -space); 2486 } 2487 2488 /* 2489 * Given a hdr and a buf, returns whether that buf can share its b_data buffer 2490 * with the hdr's b_pabd. 2491 */ 2492 static boolean_t 2493 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2494 { 2495 /* 2496 * The criteria for sharing a hdr's data are: 2497 * 1. the buffer is not encrypted 2498 * 2. the hdr's compression matches the buf's compression 2499 * 3. the hdr doesn't need to be byteswapped 2500 * 4. the hdr isn't already being shared 2501 * 5. the buf is either compressed or it is the last buf in the hdr list 2502 * 2503 * Criterion #5 maintains the invariant that shared uncompressed 2504 * bufs must be the final buf in the hdr's b_buf list. Reading this, you 2505 * might ask, "if a compressed buf is allocated first, won't that be the 2506 * last thing in the list?", but in that case it's impossible to create 2507 * a shared uncompressed buf anyway (because the hdr must be compressed 2508 * to have the compressed buf). You might also think that #3 is 2509 * sufficient to make this guarantee, however it's possible 2510 * (specifically in the rare L2ARC write race mentioned in 2511 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that 2512 * is sharable, but wasn't at the time of its allocation. Rather than 2513 * allow a new shared uncompressed buf to be created and then shuffle 2514 * the list around to make it the last element, this simply disallows 2515 * sharing if the new buf isn't the first to be added. 2516 */ 2517 ASSERT3P(buf->b_hdr, ==, hdr); 2518 boolean_t hdr_compressed = arc_hdr_get_compress(hdr) != 2519 ZIO_COMPRESS_OFF; 2520 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; 2521 return (!ARC_BUF_ENCRYPTED(buf) && 2522 buf_compressed == hdr_compressed && 2523 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && 2524 !HDR_SHARED_DATA(hdr) && 2525 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); 2526 } 2527 2528 /* 2529 * Allocate a buf for this hdr. If you care about the data that's in the hdr, 2530 * or if you want a compressed buffer, pass those flags in. Returns 0 if the 2531 * copy was made successfully, or an error code otherwise. 2532 */ 2533 static int 2534 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, 2535 void *tag, boolean_t encrypted, boolean_t compressed, boolean_t noauth, 2536 boolean_t fill, arc_buf_t **ret) 2537 { 2538 arc_buf_t *buf; 2539 arc_fill_flags_t flags = ARC_FILL_LOCKED; 2540 2541 ASSERT(HDR_HAS_L1HDR(hdr)); 2542 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 2543 VERIFY(hdr->b_type == ARC_BUFC_DATA || 2544 hdr->b_type == ARC_BUFC_METADATA); 2545 ASSERT3P(ret, !=, NULL); 2546 ASSERT3P(*ret, ==, NULL); 2547 IMPLY(encrypted, compressed); 2548 2549 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); 2550 buf->b_hdr = hdr; 2551 buf->b_data = NULL; 2552 buf->b_next = hdr->b_l1hdr.b_buf; 2553 buf->b_flags = 0; 2554 2555 add_reference(hdr, tag); 2556 2557 /* 2558 * We're about to change the hdr's b_flags. We must either 2559 * hold the hash_lock or be undiscoverable. 2560 */ 2561 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2562 2563 /* 2564 * Only honor requests for compressed bufs if the hdr is actually 2565 * compressed. This must be overriden if the buffer is encrypted since 2566 * encrypted buffers cannot be decompressed. 2567 */ 2568 if (encrypted) { 2569 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2570 buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; 2571 flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; 2572 } else if (compressed && 2573 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 2574 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2575 flags |= ARC_FILL_COMPRESSED; 2576 } 2577 2578 if (noauth) { 2579 ASSERT0(encrypted); 2580 flags |= ARC_FILL_NOAUTH; 2581 } 2582 2583 /* 2584 * If the hdr's data can be shared then we share the data buffer and 2585 * set the appropriate bit in the hdr's b_flags to indicate the hdr is 2586 * allocate a new buffer to store the buf's data. 2587 * 2588 * There are two additional restrictions here because we're sharing 2589 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be 2590 * actively involved in an L2ARC write, because if this buf is used by 2591 * an arc_write() then the hdr's data buffer will be released when the 2592 * write completes, even though the L2ARC write might still be using it. 2593 * Second, the hdr's ABD must be linear so that the buf's user doesn't 2594 * need to be ABD-aware. 2595 */ 2596 boolean_t can_share = arc_can_share(hdr, buf) && !HDR_L2_WRITING(hdr) && 2597 hdr->b_l1hdr.b_pabd != NULL && abd_is_linear(hdr->b_l1hdr.b_pabd); 2598 2599 /* Set up b_data and sharing */ 2600 if (can_share) { 2601 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); 2602 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2603 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2604 } else { 2605 buf->b_data = 2606 arc_get_data_buf(hdr, arc_buf_size(buf), buf); 2607 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2608 } 2609 VERIFY3P(buf->b_data, !=, NULL); 2610 2611 hdr->b_l1hdr.b_buf = buf; 2612 hdr->b_l1hdr.b_bufcnt += 1; 2613 if (encrypted) 2614 hdr->b_crypt_hdr.b_ebufcnt += 1; 2615 2616 /* 2617 * If the user wants the data from the hdr, we need to either copy or 2618 * decompress the data. 2619 */ 2620 if (fill) { 2621 ASSERT3P(zb, !=, NULL); 2622 return (arc_buf_fill(buf, spa, zb, flags)); 2623 } 2624 2625 return (0); 2626 } 2627 2628 static char *arc_onloan_tag = "onloan"; 2629 2630 static inline void 2631 arc_loaned_bytes_update(int64_t delta) 2632 { 2633 atomic_add_64(&arc_loaned_bytes, delta); 2634 2635 /* assert that it did not wrap around */ 2636 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 2637 } 2638 2639 /* 2640 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in 2641 * flight data by arc_tempreserve_space() until they are "returned". Loaned 2642 * buffers must be returned to the arc before they can be used by the DMU or 2643 * freed. 2644 */ 2645 arc_buf_t * 2646 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) 2647 { 2648 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, 2649 is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size); 2650 2651 arc_loaned_bytes_update(arc_buf_size(buf)); 2652 2653 return (buf); 2654 } 2655 2656 arc_buf_t * 2657 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, 2658 enum zio_compress compression_type) 2659 { 2660 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, 2661 psize, lsize, compression_type); 2662 2663 arc_loaned_bytes_update(arc_buf_size(buf)); 2664 2665 return (buf); 2666 } 2667 2668 arc_buf_t * 2669 arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, 2670 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 2671 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 2672 enum zio_compress compression_type) 2673 { 2674 arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, 2675 byteorder, salt, iv, mac, ot, psize, lsize, compression_type); 2676 2677 atomic_add_64(&arc_loaned_bytes, psize); 2678 return (buf); 2679 } 2680 2681 /* 2682 * Performance tuning of L2ARC persistence: 2683 * 2684 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding 2685 * an L2ARC device (either at pool import or later) will attempt 2686 * to rebuild L2ARC buffer contents. 2687 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls 2688 * whether log blocks are written to the L2ARC device. If the L2ARC 2689 * device is less than 1GB, the amount of data l2arc_evict() 2690 * evicts is significant compared to the amount of restored L2ARC 2691 * data. In this case do not write log blocks in L2ARC in order 2692 * not to waste space. 2693 */ 2694 int l2arc_rebuild_enabled = B_TRUE; 2695 unsigned long l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024; 2696 2697 /* L2ARC persistence rebuild control routines. */ 2698 void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen); 2699 static void l2arc_dev_rebuild_start(l2arc_dev_t *dev); 2700 static int l2arc_rebuild(l2arc_dev_t *dev); 2701 2702 /* L2ARC persistence read I/O routines. */ 2703 static int l2arc_dev_hdr_read(l2arc_dev_t *dev); 2704 static int l2arc_log_blk_read(l2arc_dev_t *dev, 2705 const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp, 2706 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 2707 zio_t *this_io, zio_t **next_io); 2708 static zio_t *l2arc_log_blk_fetch(vdev_t *vd, 2709 const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb); 2710 static void l2arc_log_blk_fetch_abort(zio_t *zio); 2711 2712 /* L2ARC persistence block restoration routines. */ 2713 static void l2arc_log_blk_restore(l2arc_dev_t *dev, 2714 const l2arc_log_blk_phys_t *lb, uint64_t lb_asize, uint64_t lb_daddr); 2715 static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, 2716 l2arc_dev_t *dev); 2717 2718 /* L2ARC persistence write I/O routines. */ 2719 static void l2arc_dev_hdr_update(l2arc_dev_t *dev); 2720 static void l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, 2721 l2arc_write_callback_t *cb); 2722 2723 /* L2ARC persistence auxilliary routines. */ 2724 boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev, 2725 const l2arc_log_blkptr_t *lbp); 2726 static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev, 2727 const arc_buf_hdr_t *ab); 2728 boolean_t l2arc_range_check_overlap(uint64_t bottom, 2729 uint64_t top, uint64_t check); 2730 static void l2arc_blk_fetch_done(zio_t *zio); 2731 static inline uint64_t 2732 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev); 2733 2734 /* 2735 * Return a loaned arc buffer to the arc. 2736 */ 2737 void 2738 arc_return_buf(arc_buf_t *buf, void *tag) 2739 { 2740 arc_buf_hdr_t *hdr = buf->b_hdr; 2741 2742 ASSERT3P(buf->b_data, !=, NULL); 2743 ASSERT(HDR_HAS_L1HDR(hdr)); 2744 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); 2745 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2746 2747 arc_loaned_bytes_update(-arc_buf_size(buf)); 2748 } 2749 2750 /* Detach an arc_buf from a dbuf (tag) */ 2751 void 2752 arc_loan_inuse_buf(arc_buf_t *buf, void *tag) 2753 { 2754 arc_buf_hdr_t *hdr = buf->b_hdr; 2755 2756 ASSERT3P(buf->b_data, !=, NULL); 2757 ASSERT(HDR_HAS_L1HDR(hdr)); 2758 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2759 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); 2760 2761 arc_loaned_bytes_update(arc_buf_size(buf)); 2762 } 2763 2764 static void 2765 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) 2766 { 2767 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); 2768 2769 df->l2df_abd = abd; 2770 df->l2df_size = size; 2771 df->l2df_type = type; 2772 mutex_enter(&l2arc_free_on_write_mtx); 2773 list_insert_head(l2arc_free_on_write, df); 2774 mutex_exit(&l2arc_free_on_write_mtx); 2775 } 2776 2777 static void 2778 arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) 2779 { 2780 arc_state_t *state = hdr->b_l1hdr.b_state; 2781 arc_buf_contents_t type = arc_buf_type(hdr); 2782 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 2783 2784 /* protected by hash lock, if in the hash table */ 2785 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2786 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2787 ASSERT(state != arc_anon && state != arc_l2c_only); 2788 2789 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2790 size, hdr); 2791 } 2792 (void) zfs_refcount_remove_many(&state->arcs_size, size, hdr); 2793 if (type == ARC_BUFC_METADATA) { 2794 arc_space_return(size, ARC_SPACE_META); 2795 } else { 2796 ASSERT(type == ARC_BUFC_DATA); 2797 arc_space_return(size, ARC_SPACE_DATA); 2798 } 2799 2800 if (free_rdata) { 2801 l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); 2802 } else { 2803 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); 2804 } 2805 } 2806 2807 /* 2808 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the 2809 * data buffer, we transfer the refcount ownership to the hdr and update 2810 * the appropriate kstats. 2811 */ 2812 static void 2813 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2814 { 2815 /* LINTED */ 2816 arc_state_t *state = hdr->b_l1hdr.b_state; 2817 2818 ASSERT(arc_can_share(hdr, buf)); 2819 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2820 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 2821 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2822 2823 /* 2824 * Start sharing the data buffer. We transfer the 2825 * refcount ownership to the hdr since it always owns 2826 * the refcount whenever an arc_buf_t is shared. 2827 */ 2828 zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, 2829 arc_hdr_size(hdr), buf, hdr); 2830 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); 2831 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, 2832 HDR_ISTYPE_METADATA(hdr)); 2833 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2834 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2835 2836 /* 2837 * Since we've transferred ownership to the hdr we need 2838 * to increment its compressed and uncompressed kstats and 2839 * decrement the overhead size. 2840 */ 2841 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); 2842 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 2843 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); 2844 } 2845 2846 static void 2847 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2848 { 2849 /* LINTED */ 2850 arc_state_t *state = hdr->b_l1hdr.b_state; 2851 2852 ASSERT(arc_buf_is_shared(buf)); 2853 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 2854 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2855 2856 /* 2857 * We are no longer sharing this buffer so we need 2858 * to transfer its ownership to the rightful owner. 2859 */ 2860 zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, 2861 arc_hdr_size(hdr), hdr, buf); 2862 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2863 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); 2864 abd_put(hdr->b_l1hdr.b_pabd); 2865 hdr->b_l1hdr.b_pabd = NULL; 2866 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2867 2868 /* 2869 * Since the buffer is no longer shared between 2870 * the arc buf and the hdr, count it as overhead. 2871 */ 2872 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); 2873 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 2874 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2875 } 2876 2877 /* 2878 * Remove an arc_buf_t from the hdr's buf list and return the last 2879 * arc_buf_t on the list. If no buffers remain on the list then return 2880 * NULL. 2881 */ 2882 static arc_buf_t * 2883 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2884 { 2885 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; 2886 arc_buf_t *lastbuf = NULL; 2887 2888 ASSERT(HDR_HAS_L1HDR(hdr)); 2889 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2890 2891 /* 2892 * Remove the buf from the hdr list and locate the last 2893 * remaining buffer on the list. 2894 */ 2895 while (*bufp != NULL) { 2896 if (*bufp == buf) 2897 *bufp = buf->b_next; 2898 2899 /* 2900 * If we've removed a buffer in the middle of 2901 * the list then update the lastbuf and update 2902 * bufp. 2903 */ 2904 if (*bufp != NULL) { 2905 lastbuf = *bufp; 2906 bufp = &(*bufp)->b_next; 2907 } 2908 } 2909 buf->b_next = NULL; 2910 ASSERT3P(lastbuf, !=, buf); 2911 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL); 2912 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL); 2913 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); 2914 2915 return (lastbuf); 2916 } 2917 2918 /* 2919 * Free up buf->b_data and pull the arc_buf_t off of the the arc_buf_hdr_t's 2920 * list and free it. 2921 */ 2922 static void 2923 arc_buf_destroy_impl(arc_buf_t *buf) 2924 { 2925 arc_buf_hdr_t *hdr = buf->b_hdr; 2926 2927 /* 2928 * Free up the data associated with the buf but only if we're not 2929 * sharing this with the hdr. If we are sharing it with the hdr, the 2930 * hdr is responsible for doing the free. 2931 */ 2932 if (buf->b_data != NULL) { 2933 /* 2934 * We're about to change the hdr's b_flags. We must either 2935 * hold the hash_lock or be undiscoverable. 2936 */ 2937 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2938 2939 arc_cksum_verify(buf); 2940 arc_buf_unwatch(buf); 2941 2942 if (arc_buf_is_shared(buf)) { 2943 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2944 } else { 2945 uint64_t size = arc_buf_size(buf); 2946 arc_free_data_buf(hdr, buf->b_data, size, buf); 2947 ARCSTAT_INCR(arcstat_overhead_size, -size); 2948 } 2949 buf->b_data = NULL; 2950 2951 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 2952 hdr->b_l1hdr.b_bufcnt -= 1; 2953 2954 if (ARC_BUF_ENCRYPTED(buf)) { 2955 hdr->b_crypt_hdr.b_ebufcnt -= 1; 2956 2957 /* 2958 * If we have no more encrypted buffers and we've 2959 * already gotten a copy of the decrypted data we can 2960 * free b_rabd to save some space. 2961 */ 2962 if (hdr->b_crypt_hdr.b_ebufcnt == 0 && 2963 HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL && 2964 !HDR_IO_IN_PROGRESS(hdr)) { 2965 arc_hdr_free_pabd(hdr, B_TRUE); 2966 } 2967 } 2968 } 2969 2970 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 2971 2972 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 2973 /* 2974 * If the current arc_buf_t is sharing its data buffer with the 2975 * hdr, then reassign the hdr's b_pabd to share it with the new 2976 * buffer at the end of the list. The shared buffer is always 2977 * the last one on the hdr's buffer list. 2978 * 2979 * There is an equivalent case for compressed bufs, but since 2980 * they aren't guaranteed to be the last buf in the list and 2981 * that is an exceedingly rare case, we just allow that space be 2982 * wasted temporarily. We must also be careful not to share 2983 * encrypted buffers, since they cannot be shared. 2984 */ 2985 if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { 2986 /* Only one buf can be shared at once */ 2987 VERIFY(!arc_buf_is_shared(lastbuf)); 2988 /* hdr is uncompressed so can't have compressed buf */ 2989 VERIFY(!ARC_BUF_COMPRESSED(lastbuf)); 2990 2991 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 2992 arc_hdr_free_pabd(hdr, B_FALSE); 2993 2994 /* 2995 * We must setup a new shared block between the 2996 * last buffer and the hdr. The data would have 2997 * been allocated by the arc buf so we need to transfer 2998 * ownership to the hdr since it's now being shared. 2999 */ 3000 arc_share_buf(hdr, lastbuf); 3001 } 3002 } else if (HDR_SHARED_DATA(hdr)) { 3003 /* 3004 * Uncompressed shared buffers are always at the end 3005 * of the list. Compressed buffers don't have the 3006 * same requirements. This makes it hard to 3007 * simply assert that the lastbuf is shared so 3008 * we rely on the hdr's compression flags to determine 3009 * if we have a compressed, shared buffer. 3010 */ 3011 ASSERT3P(lastbuf, !=, NULL); 3012 ASSERT(arc_buf_is_shared(lastbuf) || 3013 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 3014 } 3015 3016 /* 3017 * Free the checksum if we're removing the last uncompressed buf from 3018 * this hdr. 3019 */ 3020 if (!arc_hdr_has_uncompressed_buf(hdr)) { 3021 arc_cksum_free(hdr); 3022 } 3023 3024 /* clean up the buf */ 3025 buf->b_hdr = NULL; 3026 kmem_cache_free(buf_cache, buf); 3027 } 3028 3029 static void 3030 arc_hdr_alloc_pabd(arc_buf_hdr_t *hdr, int alloc_flags) 3031 { 3032 uint64_t size; 3033 boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); 3034 boolean_t do_adapt = ((alloc_flags & ARC_HDR_DO_ADAPT) != 0); 3035 3036 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 3037 ASSERT(HDR_HAS_L1HDR(hdr)); 3038 ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); 3039 IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); 3040 3041 if (alloc_rdata) { 3042 size = HDR_GET_PSIZE(hdr); 3043 ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); 3044 hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, 3045 do_adapt); 3046 ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); 3047 } else { 3048 size = arc_hdr_size(hdr); 3049 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3050 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, 3051 do_adapt); 3052 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3053 } 3054 3055 ARCSTAT_INCR(arcstat_compressed_size, size); 3056 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3057 } 3058 3059 static void 3060 arc_hdr_free_pabd(arc_buf_hdr_t *hdr, boolean_t free_rdata) 3061 { 3062 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 3063 3064 ASSERT(HDR_HAS_L1HDR(hdr)); 3065 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 3066 IMPLY(free_rdata, HDR_HAS_RABD(hdr)); 3067 3068 3069 /* 3070 * If the hdr is currently being written to the l2arc then 3071 * we defer freeing the data by adding it to the l2arc_free_on_write 3072 * list. The l2arc will free the data once it's finished 3073 * writing it to the l2arc device. 3074 */ 3075 if (HDR_L2_WRITING(hdr)) { 3076 arc_hdr_free_on_write(hdr, free_rdata); 3077 ARCSTAT_BUMP(arcstat_l2_free_on_write); 3078 } else if (free_rdata) { 3079 arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); 3080 } else { 3081 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 3082 size, hdr); 3083 } 3084 3085 if (free_rdata) { 3086 hdr->b_crypt_hdr.b_rabd = NULL; 3087 } else { 3088 hdr->b_l1hdr.b_pabd = NULL; 3089 } 3090 3091 if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) 3092 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 3093 3094 ARCSTAT_INCR(arcstat_compressed_size, -size); 3095 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3096 } 3097 3098 static arc_buf_hdr_t * 3099 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, 3100 boolean_t protected, enum zio_compress compression_type, 3101 arc_buf_contents_t type, boolean_t alloc_rdata) 3102 { 3103 arc_buf_hdr_t *hdr; 3104 int flags = ARC_HDR_DO_ADAPT; 3105 3106 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); 3107 if (protected) { 3108 hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE); 3109 } else { 3110 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); 3111 } 3112 flags |= alloc_rdata ? ARC_HDR_ALLOC_RDATA : 0; 3113 ASSERT(HDR_EMPTY(hdr)); 3114 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3115 ASSERT3P(hdr->b_l1hdr.b_thawed, ==, NULL); 3116 HDR_SET_PSIZE(hdr, psize); 3117 HDR_SET_LSIZE(hdr, lsize); 3118 hdr->b_spa = spa; 3119 hdr->b_type = type; 3120 hdr->b_flags = 0; 3121 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); 3122 arc_hdr_set_compress(hdr, compression_type); 3123 if (protected) 3124 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3125 3126 hdr->b_l1hdr.b_state = arc_anon; 3127 hdr->b_l1hdr.b_arc_access = 0; 3128 hdr->b_l1hdr.b_bufcnt = 0; 3129 hdr->b_l1hdr.b_buf = NULL; 3130 3131 /* 3132 * Allocate the hdr's buffer. This will contain either 3133 * the compressed or uncompressed data depending on the block 3134 * it references and compressed arc enablement. 3135 */ 3136 arc_hdr_alloc_pabd(hdr, flags); 3137 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3138 3139 return (hdr); 3140 } 3141 3142 /* 3143 * Transition between the two allocation states for the arc_buf_hdr struct. 3144 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without 3145 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller 3146 * version is used when a cache buffer is only in the L2ARC in order to reduce 3147 * memory usage. 3148 */ 3149 static arc_buf_hdr_t * 3150 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) 3151 { 3152 ASSERT(HDR_HAS_L2HDR(hdr)); 3153 3154 arc_buf_hdr_t *nhdr; 3155 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3156 3157 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || 3158 (old == hdr_l2only_cache && new == hdr_full_cache)); 3159 3160 /* 3161 * if the caller wanted a new full header and the header is to be 3162 * encrypted we will actually allocate the header from the full crypt 3163 * cache instead. The same applies to freeing from the old cache. 3164 */ 3165 if (HDR_PROTECTED(hdr) && new == hdr_full_cache) 3166 new = hdr_full_crypt_cache; 3167 if (HDR_PROTECTED(hdr) && old == hdr_full_cache) 3168 old = hdr_full_crypt_cache; 3169 3170 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); 3171 3172 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3173 buf_hash_remove(hdr); 3174 3175 bcopy(hdr, nhdr, HDR_L2ONLY_SIZE); 3176 3177 if (new == hdr_full_cache || new == hdr_full_crypt_cache) { 3178 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3179 /* 3180 * arc_access and arc_change_state need to be aware that a 3181 * header has just come out of L2ARC, so we set its state to 3182 * l2c_only even though it's about to change. 3183 */ 3184 nhdr->b_l1hdr.b_state = arc_l2c_only; 3185 3186 /* Verify previous threads set to NULL before freeing */ 3187 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); 3188 ASSERT(!HDR_HAS_RABD(hdr)); 3189 } else { 3190 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3191 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3192 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3193 3194 /* 3195 * If we've reached here, We must have been called from 3196 * arc_evict_hdr(), as such we should have already been 3197 * removed from any ghost list we were previously on 3198 * (which protects us from racing with arc_evict_state), 3199 * thus no locking is needed during this check. 3200 */ 3201 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3202 3203 /* 3204 * A buffer must not be moved into the arc_l2c_only 3205 * state if it's not finished being written out to the 3206 * l2arc device. Otherwise, the b_l1hdr.b_pabd field 3207 * might try to be accessed, even though it was removed. 3208 */ 3209 VERIFY(!HDR_L2_WRITING(hdr)); 3210 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3211 ASSERT(!HDR_HAS_RABD(hdr)); 3212 3213 #ifdef ZFS_DEBUG 3214 if (hdr->b_l1hdr.b_thawed != NULL) { 3215 kmem_free(hdr->b_l1hdr.b_thawed, 1); 3216 hdr->b_l1hdr.b_thawed = NULL; 3217 } 3218 #endif 3219 3220 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3221 } 3222 /* 3223 * The header has been reallocated so we need to re-insert it into any 3224 * lists it was on. 3225 */ 3226 (void) buf_hash_insert(nhdr, NULL); 3227 3228 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); 3229 3230 mutex_enter(&dev->l2ad_mtx); 3231 3232 /* 3233 * We must place the realloc'ed header back into the list at 3234 * the same spot. Otherwise, if it's placed earlier in the list, 3235 * l2arc_write_buffers() could find it during the function's 3236 * write phase, and try to write it out to the l2arc. 3237 */ 3238 list_insert_after(&dev->l2ad_buflist, hdr, nhdr); 3239 list_remove(&dev->l2ad_buflist, hdr); 3240 3241 mutex_exit(&dev->l2ad_mtx); 3242 3243 /* 3244 * Since we're using the pointer address as the tag when 3245 * incrementing and decrementing the l2ad_alloc refcount, we 3246 * must remove the old pointer (that we're about to destroy) and 3247 * add the new pointer to the refcount. Otherwise we'd remove 3248 * the wrong pointer address when calling arc_hdr_destroy() later. 3249 */ 3250 3251 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3252 hdr); 3253 (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(nhdr), 3254 nhdr); 3255 3256 buf_discard_identity(hdr); 3257 kmem_cache_free(old, hdr); 3258 3259 return (nhdr); 3260 } 3261 3262 /* 3263 * This function allows an L1 header to be reallocated as a crypt 3264 * header and vice versa. If we are going to a crypt header, the 3265 * new fields will be zeroed out. 3266 */ 3267 static arc_buf_hdr_t * 3268 arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt) 3269 { 3270 arc_buf_hdr_t *nhdr; 3271 arc_buf_t *buf; 3272 kmem_cache_t *ncache, *ocache; 3273 3274 ASSERT(HDR_HAS_L1HDR(hdr)); 3275 ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt); 3276 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3277 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3278 ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node)); 3279 ASSERT3P(hdr->b_hash_next, ==, NULL); 3280 3281 if (need_crypt) { 3282 ncache = hdr_full_crypt_cache; 3283 ocache = hdr_full_cache; 3284 } else { 3285 ncache = hdr_full_cache; 3286 ocache = hdr_full_crypt_cache; 3287 } 3288 3289 nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE); 3290 3291 /* 3292 * Copy all members that aren't locks or condvars to the new header. 3293 * No lists are pointing to us (as we asserted above), so we don't 3294 * need to worry about the list nodes. 3295 */ 3296 nhdr->b_dva = hdr->b_dva; 3297 nhdr->b_birth = hdr->b_birth; 3298 nhdr->b_type = hdr->b_type; 3299 nhdr->b_flags = hdr->b_flags; 3300 nhdr->b_psize = hdr->b_psize; 3301 nhdr->b_lsize = hdr->b_lsize; 3302 nhdr->b_spa = hdr->b_spa; 3303 nhdr->b_l2hdr.b_dev = hdr->b_l2hdr.b_dev; 3304 nhdr->b_l2hdr.b_daddr = hdr->b_l2hdr.b_daddr; 3305 nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum; 3306 nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt; 3307 nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap; 3308 nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state; 3309 nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access; 3310 nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb; 3311 nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd; 3312 #ifdef ZFS_DEBUG 3313 if (hdr->b_l1hdr.b_thawed != NULL) { 3314 nhdr->b_l1hdr.b_thawed = hdr->b_l1hdr.b_thawed; 3315 hdr->b_l1hdr.b_thawed = NULL; 3316 } 3317 #endif 3318 3319 /* 3320 * This refcount_add() exists only to ensure that the individual 3321 * arc buffers always point to a header that is referenced, avoiding 3322 * a small race condition that could trigger ASSERTs. 3323 */ 3324 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG); 3325 nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf; 3326 for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) { 3327 mutex_enter(&buf->b_evict_lock); 3328 buf->b_hdr = nhdr; 3329 mutex_exit(&buf->b_evict_lock); 3330 } 3331 zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt); 3332 (void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG); 3333 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3334 3335 if (need_crypt) { 3336 arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED); 3337 } else { 3338 arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED); 3339 } 3340 3341 /* unset all members of the original hdr */ 3342 bzero(&hdr->b_dva, sizeof (dva_t)); 3343 hdr->b_birth = 0; 3344 hdr->b_type = ARC_BUFC_INVALID; 3345 hdr->b_flags = 0; 3346 hdr->b_psize = 0; 3347 hdr->b_lsize = 0; 3348 hdr->b_spa = 0; 3349 hdr->b_l2hdr.b_dev = NULL; 3350 hdr->b_l2hdr.b_daddr = 0; 3351 hdr->b_l1hdr.b_freeze_cksum = NULL; 3352 hdr->b_l1hdr.b_buf = NULL; 3353 hdr->b_l1hdr.b_bufcnt = 0; 3354 hdr->b_l1hdr.b_byteswap = 0; 3355 hdr->b_l1hdr.b_state = NULL; 3356 hdr->b_l1hdr.b_arc_access = 0; 3357 hdr->b_l1hdr.b_acb = NULL; 3358 hdr->b_l1hdr.b_pabd = NULL; 3359 3360 if (ocache == hdr_full_crypt_cache) { 3361 ASSERT(!HDR_HAS_RABD(hdr)); 3362 hdr->b_crypt_hdr.b_ot = DMU_OT_NONE; 3363 hdr->b_crypt_hdr.b_ebufcnt = 0; 3364 hdr->b_crypt_hdr.b_dsobj = 0; 3365 bzero(hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3366 bzero(hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3367 bzero(hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3368 } 3369 3370 buf_discard_identity(hdr); 3371 kmem_cache_free(ocache, hdr); 3372 3373 return (nhdr); 3374 } 3375 3376 /* 3377 * This function is used by the send / receive code to convert a newly 3378 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It 3379 * is also used to allow the root objset block to be uupdated without altering 3380 * its embedded MACs. Both block types will always be uncompressed so we do not 3381 * have to worry about compression type or psize. 3382 */ 3383 void 3384 arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, 3385 dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, 3386 const uint8_t *mac) 3387 { 3388 arc_buf_hdr_t *hdr = buf->b_hdr; 3389 3390 ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); 3391 ASSERT(HDR_HAS_L1HDR(hdr)); 3392 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3393 3394 buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); 3395 if (!HDR_PROTECTED(hdr)) 3396 hdr = arc_hdr_realloc_crypt(hdr, B_TRUE); 3397 hdr->b_crypt_hdr.b_dsobj = dsobj; 3398 hdr->b_crypt_hdr.b_ot = ot; 3399 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3400 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3401 if (!arc_hdr_has_uncompressed_buf(hdr)) 3402 arc_cksum_free(hdr); 3403 3404 if (salt != NULL) 3405 bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3406 if (iv != NULL) 3407 bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3408 if (mac != NULL) 3409 bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3410 } 3411 3412 /* 3413 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. 3414 * The buf is returned thawed since we expect the consumer to modify it. 3415 */ 3416 arc_buf_t * 3417 arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size) 3418 { 3419 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, 3420 B_FALSE, ZIO_COMPRESS_OFF, type, B_FALSE); 3421 3422 arc_buf_t *buf = NULL; 3423 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, 3424 B_FALSE, B_FALSE, &buf)); 3425 arc_buf_thaw(buf); 3426 3427 return (buf); 3428 } 3429 3430 /* 3431 * Allocates an ARC buf header that's in an evicted & L2-cached state. 3432 * This is used during l2arc reconstruction to make empty ARC buffers 3433 * which circumvent the regular disk->arc->l2arc path and instead come 3434 * into being in the reverse order, i.e. l2arc->arc. 3435 */ 3436 arc_buf_hdr_t * 3437 arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, 3438 dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, 3439 enum zio_compress compress, boolean_t protected, boolean_t prefetch) 3440 { 3441 arc_buf_hdr_t *hdr; 3442 3443 ASSERT(size != 0); 3444 hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); 3445 hdr->b_birth = birth; 3446 hdr->b_type = type; 3447 hdr->b_flags = 0; 3448 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); 3449 HDR_SET_LSIZE(hdr, size); 3450 HDR_SET_PSIZE(hdr, psize); 3451 arc_hdr_set_compress(hdr, compress); 3452 if (protected) 3453 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3454 if (prefetch) 3455 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 3456 hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); 3457 3458 hdr->b_dva = dva; 3459 3460 hdr->b_l2hdr.b_dev = dev; 3461 hdr->b_l2hdr.b_daddr = daddr; 3462 3463 return (hdr); 3464 } 3465 3466 /* 3467 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this 3468 * for bufs containing metadata. 3469 */ 3470 arc_buf_t * 3471 arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize, 3472 enum zio_compress compression_type) 3473 { 3474 ASSERT3U(lsize, >, 0); 3475 ASSERT3U(lsize, >=, psize); 3476 ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); 3477 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3478 3479 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 3480 B_FALSE, compression_type, ARC_BUFC_DATA, B_FALSE); 3481 3482 arc_buf_t *buf = NULL; 3483 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, 3484 B_TRUE, B_FALSE, B_FALSE, &buf)); 3485 arc_buf_thaw(buf); 3486 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3487 3488 if (!arc_buf_is_shared(buf)) { 3489 /* 3490 * To ensure that the hdr has the correct data in it if we call 3491 * arc_untransform() on this buf before it's been written to 3492 * disk, it's easiest if we just set up sharing between the 3493 * buf and the hdr. 3494 */ 3495 ASSERT(!abd_is_linear(hdr->b_l1hdr.b_pabd)); 3496 arc_hdr_free_pabd(hdr, B_FALSE); 3497 arc_share_buf(hdr, buf); 3498 } 3499 3500 return (buf); 3501 } 3502 3503 arc_buf_t * 3504 arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj, boolean_t byteorder, 3505 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 3506 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 3507 enum zio_compress compression_type) 3508 { 3509 arc_buf_hdr_t *hdr; 3510 arc_buf_t *buf; 3511 arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? 3512 ARC_BUFC_METADATA : ARC_BUFC_DATA; 3513 3514 ASSERT3U(lsize, >, 0); 3515 ASSERT3U(lsize, >=, psize); 3516 ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); 3517 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3518 3519 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, 3520 compression_type, type, B_TRUE); 3521 3522 hdr->b_crypt_hdr.b_dsobj = dsobj; 3523 hdr->b_crypt_hdr.b_ot = ot; 3524 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3525 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3526 bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3527 bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3528 bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3529 3530 /* 3531 * This buffer will be considered encrypted even if the ot is not an 3532 * encrypted type. It will become authenticated instead in 3533 * arc_write_ready(). 3534 */ 3535 buf = NULL; 3536 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, 3537 B_FALSE, B_FALSE, &buf)); 3538 arc_buf_thaw(buf); 3539 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3540 3541 return (buf); 3542 } 3543 3544 static void 3545 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) 3546 { 3547 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3548 l2arc_dev_t *dev = l2hdr->b_dev; 3549 uint64_t psize = HDR_GET_PSIZE(hdr); 3550 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3551 3552 ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); 3553 ASSERT(HDR_HAS_L2HDR(hdr)); 3554 3555 list_remove(&dev->l2ad_buflist, hdr); 3556 3557 ARCSTAT_INCR(arcstat_l2_psize, -psize); 3558 ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr)); 3559 3560 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 3561 3562 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3563 hdr); 3564 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 3565 } 3566 3567 static void 3568 arc_hdr_destroy(arc_buf_hdr_t *hdr) 3569 { 3570 if (HDR_HAS_L1HDR(hdr)) { 3571 ASSERT(hdr->b_l1hdr.b_buf == NULL || 3572 hdr->b_l1hdr.b_bufcnt > 0); 3573 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3574 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3575 } 3576 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3577 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 3578 3579 if (HDR_HAS_L2HDR(hdr)) { 3580 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3581 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); 3582 3583 if (!buflist_held) 3584 mutex_enter(&dev->l2ad_mtx); 3585 3586 /* 3587 * Even though we checked this conditional above, we 3588 * need to check this again now that we have the 3589 * l2ad_mtx. This is because we could be racing with 3590 * another thread calling l2arc_evict() which might have 3591 * destroyed this header's L2 portion as we were waiting 3592 * to acquire the l2ad_mtx. If that happens, we don't 3593 * want to re-destroy the header's L2 portion. 3594 */ 3595 if (HDR_HAS_L2HDR(hdr)) 3596 arc_hdr_l2hdr_destroy(hdr); 3597 3598 if (!buflist_held) 3599 mutex_exit(&dev->l2ad_mtx); 3600 } 3601 3602 /* 3603 * The header's identity can only be safely discarded once it is no 3604 * longer discoverable. This requires removing it from the hash table 3605 * and the l2arc header list. After this point the hash lock can not 3606 * be used to protect the header. 3607 */ 3608 if (!HDR_EMPTY(hdr)) 3609 buf_discard_identity(hdr); 3610 3611 if (HDR_HAS_L1HDR(hdr)) { 3612 arc_cksum_free(hdr); 3613 3614 while (hdr->b_l1hdr.b_buf != NULL) 3615 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); 3616 3617 #ifdef ZFS_DEBUG 3618 if (hdr->b_l1hdr.b_thawed != NULL) { 3619 kmem_free(hdr->b_l1hdr.b_thawed, 1); 3620 hdr->b_l1hdr.b_thawed = NULL; 3621 } 3622 #endif 3623 3624 if (hdr->b_l1hdr.b_pabd != NULL) 3625 arc_hdr_free_pabd(hdr, B_FALSE); 3626 3627 if (HDR_HAS_RABD(hdr)) 3628 arc_hdr_free_pabd(hdr, B_TRUE); 3629 } 3630 3631 ASSERT3P(hdr->b_hash_next, ==, NULL); 3632 if (HDR_HAS_L1HDR(hdr)) { 3633 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3634 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 3635 3636 if (!HDR_PROTECTED(hdr)) { 3637 kmem_cache_free(hdr_full_cache, hdr); 3638 } else { 3639 kmem_cache_free(hdr_full_crypt_cache, hdr); 3640 } 3641 } else { 3642 kmem_cache_free(hdr_l2only_cache, hdr); 3643 } 3644 } 3645 3646 void 3647 arc_buf_destroy(arc_buf_t *buf, void* tag) 3648 { 3649 arc_buf_hdr_t *hdr = buf->b_hdr; 3650 3651 if (hdr->b_l1hdr.b_state == arc_anon) { 3652 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 3653 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3654 VERIFY0(remove_reference(hdr, NULL, tag)); 3655 arc_hdr_destroy(hdr); 3656 return; 3657 } 3658 3659 kmutex_t *hash_lock = HDR_LOCK(hdr); 3660 mutex_enter(hash_lock); 3661 3662 ASSERT3P(hdr, ==, buf->b_hdr); 3663 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3664 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 3665 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); 3666 ASSERT3P(buf->b_data, !=, NULL); 3667 3668 (void) remove_reference(hdr, hash_lock, tag); 3669 arc_buf_destroy_impl(buf); 3670 mutex_exit(hash_lock); 3671 } 3672 3673 /* 3674 * Evict the arc_buf_hdr that is provided as a parameter. The resultant 3675 * state of the header is dependent on its state prior to entering this 3676 * function. The following transitions are possible: 3677 * 3678 * - arc_mru -> arc_mru_ghost 3679 * - arc_mfu -> arc_mfu_ghost 3680 * - arc_mru_ghost -> arc_l2c_only 3681 * - arc_mru_ghost -> deleted 3682 * - arc_mfu_ghost -> arc_l2c_only 3683 * - arc_mfu_ghost -> deleted 3684 */ 3685 static int64_t 3686 arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock) 3687 { 3688 arc_state_t *evicted_state, *state; 3689 int64_t bytes_evicted = 0; 3690 int min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? 3691 zfs_arc_min_prescient_prefetch_ms : zfs_arc_min_prefetch_ms; 3692 3693 ASSERT(MUTEX_HELD(hash_lock)); 3694 ASSERT(HDR_HAS_L1HDR(hdr)); 3695 3696 state = hdr->b_l1hdr.b_state; 3697 if (GHOST_STATE(state)) { 3698 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3699 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3700 3701 /* 3702 * l2arc_write_buffers() relies on a header's L1 portion 3703 * (i.e. its b_pabd field) during its write phase. 3704 * Thus, we cannot push a header onto the arc_l2c_only 3705 * state (removing its L1 piece) until the header is 3706 * done being written to the l2arc. 3707 */ 3708 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { 3709 ARCSTAT_BUMP(arcstat_evict_l2_skip); 3710 return (bytes_evicted); 3711 } 3712 3713 ARCSTAT_BUMP(arcstat_deleted); 3714 bytes_evicted += HDR_GET_LSIZE(hdr); 3715 3716 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); 3717 3718 if (HDR_HAS_L2HDR(hdr)) { 3719 ASSERT(hdr->b_l1hdr.b_pabd == NULL); 3720 ASSERT(!HDR_HAS_RABD(hdr)); 3721 /* 3722 * This buffer is cached on the 2nd Level ARC; 3723 * don't destroy the header. 3724 */ 3725 arc_change_state(arc_l2c_only, hdr, hash_lock); 3726 /* 3727 * dropping from L1+L2 cached to L2-only, 3728 * realloc to remove the L1 header. 3729 */ 3730 hdr = arc_hdr_realloc(hdr, hdr_full_cache, 3731 hdr_l2only_cache); 3732 } else { 3733 arc_change_state(arc_anon, hdr, hash_lock); 3734 arc_hdr_destroy(hdr); 3735 } 3736 return (bytes_evicted); 3737 } 3738 3739 ASSERT(state == arc_mru || state == arc_mfu); 3740 evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost; 3741 3742 /* prefetch buffers have a minimum lifespan */ 3743 if (HDR_IO_IN_PROGRESS(hdr) || 3744 ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && 3745 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < min_lifetime * hz)) { 3746 ARCSTAT_BUMP(arcstat_evict_skip); 3747 return (bytes_evicted); 3748 } 3749 3750 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3751 while (hdr->b_l1hdr.b_buf) { 3752 arc_buf_t *buf = hdr->b_l1hdr.b_buf; 3753 if (!mutex_tryenter(&buf->b_evict_lock)) { 3754 ARCSTAT_BUMP(arcstat_mutex_miss); 3755 break; 3756 } 3757 if (buf->b_data != NULL) 3758 bytes_evicted += HDR_GET_LSIZE(hdr); 3759 mutex_exit(&buf->b_evict_lock); 3760 arc_buf_destroy_impl(buf); 3761 } 3762 3763 if (HDR_HAS_L2HDR(hdr)) { 3764 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); 3765 } else { 3766 if (l2arc_write_eligible(hdr->b_spa, hdr)) { 3767 ARCSTAT_INCR(arcstat_evict_l2_eligible, 3768 HDR_GET_LSIZE(hdr)); 3769 } else { 3770 ARCSTAT_INCR(arcstat_evict_l2_ineligible, 3771 HDR_GET_LSIZE(hdr)); 3772 } 3773 } 3774 3775 if (hdr->b_l1hdr.b_bufcnt == 0) { 3776 arc_cksum_free(hdr); 3777 3778 bytes_evicted += arc_hdr_size(hdr); 3779 3780 /* 3781 * If this hdr is being evicted and has a compressed 3782 * buffer then we discard it here before we change states. 3783 * This ensures that the accounting is updated correctly 3784 * in arc_free_data_impl(). 3785 */ 3786 if (hdr->b_l1hdr.b_pabd != NULL) 3787 arc_hdr_free_pabd(hdr, B_FALSE); 3788 3789 if (HDR_HAS_RABD(hdr)) 3790 arc_hdr_free_pabd(hdr, B_TRUE); 3791 3792 arc_change_state(evicted_state, hdr, hash_lock); 3793 ASSERT(HDR_IN_HASH_TABLE(hdr)); 3794 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 3795 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); 3796 } 3797 3798 return (bytes_evicted); 3799 } 3800 3801 static uint64_t 3802 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, 3803 uint64_t spa, int64_t bytes) 3804 { 3805 multilist_sublist_t *mls; 3806 uint64_t bytes_evicted = 0; 3807 arc_buf_hdr_t *hdr; 3808 kmutex_t *hash_lock; 3809 int evict_count = 0; 3810 3811 ASSERT3P(marker, !=, NULL); 3812 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL); 3813 3814 mls = multilist_sublist_lock(ml, idx); 3815 3816 for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL; 3817 hdr = multilist_sublist_prev(mls, marker)) { 3818 if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) || 3819 (evict_count >= zfs_arc_evict_batch_limit)) 3820 break; 3821 3822 /* 3823 * To keep our iteration location, move the marker 3824 * forward. Since we're not holding hdr's hash lock, we 3825 * must be very careful and not remove 'hdr' from the 3826 * sublist. Otherwise, other consumers might mistake the 3827 * 'hdr' as not being on a sublist when they call the 3828 * multilist_link_active() function (they all rely on 3829 * the hash lock protecting concurrent insertions and 3830 * removals). multilist_sublist_move_forward() was 3831 * specifically implemented to ensure this is the case 3832 * (only 'marker' will be removed and re-inserted). 3833 */ 3834 multilist_sublist_move_forward(mls, marker); 3835 3836 /* 3837 * The only case where the b_spa field should ever be 3838 * zero, is the marker headers inserted by 3839 * arc_evict_state(). It's possible for multiple threads 3840 * to be calling arc_evict_state() concurrently (e.g. 3841 * dsl_pool_close() and zio_inject_fault()), so we must 3842 * skip any markers we see from these other threads. 3843 */ 3844 if (hdr->b_spa == 0) 3845 continue; 3846 3847 /* we're only interested in evicting buffers of a certain spa */ 3848 if (spa != 0 && hdr->b_spa != spa) { 3849 ARCSTAT_BUMP(arcstat_evict_skip); 3850 continue; 3851 } 3852 3853 hash_lock = HDR_LOCK(hdr); 3854 3855 /* 3856 * We aren't calling this function from any code path 3857 * that would already be holding a hash lock, so we're 3858 * asserting on this assumption to be defensive in case 3859 * this ever changes. Without this check, it would be 3860 * possible to incorrectly increment arcstat_mutex_miss 3861 * below (e.g. if the code changed such that we called 3862 * this function with a hash lock held). 3863 */ 3864 ASSERT(!MUTEX_HELD(hash_lock)); 3865 3866 if (mutex_tryenter(hash_lock)) { 3867 uint64_t evicted = arc_evict_hdr(hdr, hash_lock); 3868 mutex_exit(hash_lock); 3869 3870 bytes_evicted += evicted; 3871 3872 /* 3873 * If evicted is zero, arc_evict_hdr() must have 3874 * decided to skip this header, don't increment 3875 * evict_count in this case. 3876 */ 3877 if (evicted != 0) 3878 evict_count++; 3879 3880 /* 3881 * If arc_size isn't overflowing, signal any 3882 * threads that might happen to be waiting. 3883 * 3884 * For each header evicted, we wake up a single 3885 * thread. If we used cv_broadcast, we could 3886 * wake up "too many" threads causing arc_size 3887 * to significantly overflow arc_c; since 3888 * arc_get_data_impl() doesn't check for overflow 3889 * when it's woken up (it doesn't because it's 3890 * possible for the ARC to be overflowing while 3891 * full of un-evictable buffers, and the 3892 * function should proceed in this case). 3893 * 3894 * If threads are left sleeping, due to not 3895 * using cv_broadcast here, they will be woken 3896 * up via cv_broadcast in arc_adjust_cb() just 3897 * before arc_adjust_zthr sleeps. 3898 */ 3899 mutex_enter(&arc_adjust_lock); 3900 if (!arc_is_overflowing()) 3901 cv_signal(&arc_adjust_waiters_cv); 3902 mutex_exit(&arc_adjust_lock); 3903 } else { 3904 ARCSTAT_BUMP(arcstat_mutex_miss); 3905 } 3906 } 3907 3908 multilist_sublist_unlock(mls); 3909 3910 return (bytes_evicted); 3911 } 3912 3913 /* 3914 * Evict buffers from the given arc state, until we've removed the 3915 * specified number of bytes. Move the removed buffers to the 3916 * appropriate evict state. 3917 * 3918 * This function makes a "best effort". It skips over any buffers 3919 * it can't get a hash_lock on, and so, may not catch all candidates. 3920 * It may also return without evicting as much space as requested. 3921 * 3922 * If bytes is specified using the special value ARC_EVICT_ALL, this 3923 * will evict all available (i.e. unlocked and evictable) buffers from 3924 * the given arc state; which is used by arc_flush(). 3925 */ 3926 static uint64_t 3927 arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes, 3928 arc_buf_contents_t type) 3929 { 3930 uint64_t total_evicted = 0; 3931 multilist_t *ml = state->arcs_list[type]; 3932 int num_sublists; 3933 arc_buf_hdr_t **markers; 3934 3935 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL); 3936 3937 num_sublists = multilist_get_num_sublists(ml); 3938 3939 /* 3940 * If we've tried to evict from each sublist, made some 3941 * progress, but still have not hit the target number of bytes 3942 * to evict, we want to keep trying. The markers allow us to 3943 * pick up where we left off for each individual sublist, rather 3944 * than starting from the tail each time. 3945 */ 3946 markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP); 3947 for (int i = 0; i < num_sublists; i++) { 3948 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); 3949 3950 /* 3951 * A b_spa of 0 is used to indicate that this header is 3952 * a marker. This fact is used in arc_adjust_type() and 3953 * arc_evict_state_impl(). 3954 */ 3955 markers[i]->b_spa = 0; 3956 3957 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 3958 multilist_sublist_insert_tail(mls, markers[i]); 3959 multilist_sublist_unlock(mls); 3960 } 3961 3962 /* 3963 * While we haven't hit our target number of bytes to evict, or 3964 * we're evicting all available buffers. 3965 */ 3966 while (total_evicted < bytes || bytes == ARC_EVICT_ALL) { 3967 /* 3968 * Start eviction using a randomly selected sublist, 3969 * this is to try and evenly balance eviction across all 3970 * sublists. Always starting at the same sublist 3971 * (e.g. index 0) would cause evictions to favor certain 3972 * sublists over others. 3973 */ 3974 int sublist_idx = multilist_get_random_index(ml); 3975 uint64_t scan_evicted = 0; 3976 3977 for (int i = 0; i < num_sublists; i++) { 3978 uint64_t bytes_remaining; 3979 uint64_t bytes_evicted; 3980 3981 if (bytes == ARC_EVICT_ALL) 3982 bytes_remaining = ARC_EVICT_ALL; 3983 else if (total_evicted < bytes) 3984 bytes_remaining = bytes - total_evicted; 3985 else 3986 break; 3987 3988 bytes_evicted = arc_evict_state_impl(ml, sublist_idx, 3989 markers[sublist_idx], spa, bytes_remaining); 3990 3991 scan_evicted += bytes_evicted; 3992 total_evicted += bytes_evicted; 3993 3994 /* we've reached the end, wrap to the beginning */ 3995 if (++sublist_idx >= num_sublists) 3996 sublist_idx = 0; 3997 } 3998 3999 /* 4000 * If we didn't evict anything during this scan, we have 4001 * no reason to believe we'll evict more during another 4002 * scan, so break the loop. 4003 */ 4004 if (scan_evicted == 0) { 4005 /* This isn't possible, let's make that obvious */ 4006 ASSERT3S(bytes, !=, 0); 4007 4008 /* 4009 * When bytes is ARC_EVICT_ALL, the only way to 4010 * break the loop is when scan_evicted is zero. 4011 * In that case, we actually have evicted enough, 4012 * so we don't want to increment the kstat. 4013 */ 4014 if (bytes != ARC_EVICT_ALL) { 4015 ASSERT3S(total_evicted, <, bytes); 4016 ARCSTAT_BUMP(arcstat_evict_not_enough); 4017 } 4018 4019 break; 4020 } 4021 } 4022 4023 for (int i = 0; i < num_sublists; i++) { 4024 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 4025 multilist_sublist_remove(mls, markers[i]); 4026 multilist_sublist_unlock(mls); 4027 4028 kmem_cache_free(hdr_full_cache, markers[i]); 4029 } 4030 kmem_free(markers, sizeof (*markers) * num_sublists); 4031 4032 return (total_evicted); 4033 } 4034 4035 /* 4036 * Flush all "evictable" data of the given type from the arc state 4037 * specified. This will not evict any "active" buffers (i.e. referenced). 4038 * 4039 * When 'retry' is set to B_FALSE, the function will make a single pass 4040 * over the state and evict any buffers that it can. Since it doesn't 4041 * continually retry the eviction, it might end up leaving some buffers 4042 * in the ARC due to lock misses. 4043 * 4044 * When 'retry' is set to B_TRUE, the function will continually retry the 4045 * eviction until *all* evictable buffers have been removed from the 4046 * state. As a result, if concurrent insertions into the state are 4047 * allowed (e.g. if the ARC isn't shutting down), this function might 4048 * wind up in an infinite loop, continually trying to evict buffers. 4049 */ 4050 static uint64_t 4051 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, 4052 boolean_t retry) 4053 { 4054 uint64_t evicted = 0; 4055 4056 while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { 4057 evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type); 4058 4059 if (!retry) 4060 break; 4061 } 4062 4063 return (evicted); 4064 } 4065 4066 /* 4067 * Evict the specified number of bytes from the state specified, 4068 * restricting eviction to the spa and type given. This function 4069 * prevents us from trying to evict more from a state's list than 4070 * is "evictable", and to skip evicting altogether when passed a 4071 * negative value for "bytes". In contrast, arc_evict_state() will 4072 * evict everything it can, when passed a negative value for "bytes". 4073 */ 4074 static uint64_t 4075 arc_adjust_impl(arc_state_t *state, uint64_t spa, int64_t bytes, 4076 arc_buf_contents_t type) 4077 { 4078 int64_t delta; 4079 4080 if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { 4081 delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), 4082 bytes); 4083 return (arc_evict_state(state, spa, delta, type)); 4084 } 4085 4086 return (0); 4087 } 4088 4089 /* 4090 * Evict metadata buffers from the cache, such that arc_meta_used is 4091 * capped by the arc_meta_limit tunable. 4092 */ 4093 static uint64_t 4094 arc_adjust_meta(uint64_t meta_used) 4095 { 4096 uint64_t total_evicted = 0; 4097 int64_t target; 4098 4099 /* 4100 * If we're over the meta limit, we want to evict enough 4101 * metadata to get back under the meta limit. We don't want to 4102 * evict so much that we drop the MRU below arc_p, though. If 4103 * we're over the meta limit more than we're over arc_p, we 4104 * evict some from the MRU here, and some from the MFU below. 4105 */ 4106 target = MIN((int64_t)(meta_used - arc_meta_limit), 4107 (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + 4108 zfs_refcount_count(&arc_mru->arcs_size) - arc_p)); 4109 4110 total_evicted += arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4111 4112 /* 4113 * Similar to the above, we want to evict enough bytes to get us 4114 * below the meta limit, but not so much as to drop us below the 4115 * space allotted to the MFU (which is defined as arc_c - arc_p). 4116 */ 4117 target = MIN((int64_t)(meta_used - arc_meta_limit), 4118 (int64_t)(zfs_refcount_count(&arc_mfu->arcs_size) - 4119 (arc_c - arc_p))); 4120 4121 total_evicted += arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4122 4123 return (total_evicted); 4124 } 4125 4126 /* 4127 * Return the type of the oldest buffer in the given arc state 4128 * 4129 * This function will select a random sublist of type ARC_BUFC_DATA and 4130 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist 4131 * is compared, and the type which contains the "older" buffer will be 4132 * returned. 4133 */ 4134 static arc_buf_contents_t 4135 arc_adjust_type(arc_state_t *state) 4136 { 4137 multilist_t *data_ml = state->arcs_list[ARC_BUFC_DATA]; 4138 multilist_t *meta_ml = state->arcs_list[ARC_BUFC_METADATA]; 4139 int data_idx = multilist_get_random_index(data_ml); 4140 int meta_idx = multilist_get_random_index(meta_ml); 4141 multilist_sublist_t *data_mls; 4142 multilist_sublist_t *meta_mls; 4143 arc_buf_contents_t type; 4144 arc_buf_hdr_t *data_hdr; 4145 arc_buf_hdr_t *meta_hdr; 4146 4147 /* 4148 * We keep the sublist lock until we're finished, to prevent 4149 * the headers from being destroyed via arc_evict_state(). 4150 */ 4151 data_mls = multilist_sublist_lock(data_ml, data_idx); 4152 meta_mls = multilist_sublist_lock(meta_ml, meta_idx); 4153 4154 /* 4155 * These two loops are to ensure we skip any markers that 4156 * might be at the tail of the lists due to arc_evict_state(). 4157 */ 4158 4159 for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL; 4160 data_hdr = multilist_sublist_prev(data_mls, data_hdr)) { 4161 if (data_hdr->b_spa != 0) 4162 break; 4163 } 4164 4165 for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL; 4166 meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) { 4167 if (meta_hdr->b_spa != 0) 4168 break; 4169 } 4170 4171 if (data_hdr == NULL && meta_hdr == NULL) { 4172 type = ARC_BUFC_DATA; 4173 } else if (data_hdr == NULL) { 4174 ASSERT3P(meta_hdr, !=, NULL); 4175 type = ARC_BUFC_METADATA; 4176 } else if (meta_hdr == NULL) { 4177 ASSERT3P(data_hdr, !=, NULL); 4178 type = ARC_BUFC_DATA; 4179 } else { 4180 ASSERT3P(data_hdr, !=, NULL); 4181 ASSERT3P(meta_hdr, !=, NULL); 4182 4183 /* The headers can't be on the sublist without an L1 header */ 4184 ASSERT(HDR_HAS_L1HDR(data_hdr)); 4185 ASSERT(HDR_HAS_L1HDR(meta_hdr)); 4186 4187 if (data_hdr->b_l1hdr.b_arc_access < 4188 meta_hdr->b_l1hdr.b_arc_access) { 4189 type = ARC_BUFC_DATA; 4190 } else { 4191 type = ARC_BUFC_METADATA; 4192 } 4193 } 4194 4195 multilist_sublist_unlock(meta_mls); 4196 multilist_sublist_unlock(data_mls); 4197 4198 return (type); 4199 } 4200 4201 /* 4202 * Evict buffers from the cache, such that arc_size is capped by arc_c. 4203 */ 4204 static uint64_t 4205 arc_adjust(void) 4206 { 4207 uint64_t total_evicted = 0; 4208 uint64_t bytes; 4209 int64_t target; 4210 uint64_t asize = aggsum_value(&arc_size); 4211 uint64_t ameta = aggsum_value(&arc_meta_used); 4212 4213 /* 4214 * If we're over arc_meta_limit, we want to correct that before 4215 * potentially evicting data buffers below. 4216 */ 4217 total_evicted += arc_adjust_meta(ameta); 4218 4219 /* 4220 * Adjust MRU size 4221 * 4222 * If we're over the target cache size, we want to evict enough 4223 * from the list to get back to our target size. We don't want 4224 * to evict too much from the MRU, such that it drops below 4225 * arc_p. So, if we're over our target cache size more than 4226 * the MRU is over arc_p, we'll evict enough to get back to 4227 * arc_p here, and then evict more from the MFU below. 4228 */ 4229 target = MIN((int64_t)(asize - arc_c), 4230 (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + 4231 zfs_refcount_count(&arc_mru->arcs_size) + ameta - arc_p)); 4232 4233 /* 4234 * If we're below arc_meta_min, always prefer to evict data. 4235 * Otherwise, try to satisfy the requested number of bytes to 4236 * evict from the type which contains older buffers; in an 4237 * effort to keep newer buffers in the cache regardless of their 4238 * type. If we cannot satisfy the number of bytes from this 4239 * type, spill over into the next type. 4240 */ 4241 if (arc_adjust_type(arc_mru) == ARC_BUFC_METADATA && 4242 ameta > arc_meta_min) { 4243 bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4244 total_evicted += bytes; 4245 4246 /* 4247 * If we couldn't evict our target number of bytes from 4248 * metadata, we try to get the rest from data. 4249 */ 4250 target -= bytes; 4251 4252 total_evicted += 4253 arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA); 4254 } else { 4255 bytes = arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_DATA); 4256 total_evicted += bytes; 4257 4258 /* 4259 * If we couldn't evict our target number of bytes from 4260 * data, we try to get the rest from metadata. 4261 */ 4262 target -= bytes; 4263 4264 total_evicted += 4265 arc_adjust_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4266 } 4267 4268 /* 4269 * Adjust MFU size 4270 * 4271 * Now that we've tried to evict enough from the MRU to get its 4272 * size back to arc_p, if we're still above the target cache 4273 * size, we evict the rest from the MFU. 4274 */ 4275 target = asize - arc_c; 4276 4277 if (arc_adjust_type(arc_mfu) == ARC_BUFC_METADATA && 4278 ameta > arc_meta_min) { 4279 bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4280 total_evicted += bytes; 4281 4282 /* 4283 * If we couldn't evict our target number of bytes from 4284 * metadata, we try to get the rest from data. 4285 */ 4286 target -= bytes; 4287 4288 total_evicted += 4289 arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA); 4290 } else { 4291 bytes = arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_DATA); 4292 total_evicted += bytes; 4293 4294 /* 4295 * If we couldn't evict our target number of bytes from 4296 * data, we try to get the rest from data. 4297 */ 4298 target -= bytes; 4299 4300 total_evicted += 4301 arc_adjust_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4302 } 4303 4304 /* 4305 * Adjust ghost lists 4306 * 4307 * In addition to the above, the ARC also defines target values 4308 * for the ghost lists. The sum of the mru list and mru ghost 4309 * list should never exceed the target size of the cache, and 4310 * the sum of the mru list, mfu list, mru ghost list, and mfu 4311 * ghost list should never exceed twice the target size of the 4312 * cache. The following logic enforces these limits on the ghost 4313 * caches, and evicts from them as needed. 4314 */ 4315 target = zfs_refcount_count(&arc_mru->arcs_size) + 4316 zfs_refcount_count(&arc_mru_ghost->arcs_size) - arc_c; 4317 4318 bytes = arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA); 4319 total_evicted += bytes; 4320 4321 target -= bytes; 4322 4323 total_evicted += 4324 arc_adjust_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA); 4325 4326 /* 4327 * We assume the sum of the mru list and mfu list is less than 4328 * or equal to arc_c (we enforced this above), which means we 4329 * can use the simpler of the two equations below: 4330 * 4331 * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c 4332 * mru ghost + mfu ghost <= arc_c 4333 */ 4334 target = zfs_refcount_count(&arc_mru_ghost->arcs_size) + 4335 zfs_refcount_count(&arc_mfu_ghost->arcs_size) - arc_c; 4336 4337 bytes = arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA); 4338 total_evicted += bytes; 4339 4340 target -= bytes; 4341 4342 total_evicted += 4343 arc_adjust_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA); 4344 4345 return (total_evicted); 4346 } 4347 4348 void 4349 arc_flush(spa_t *spa, boolean_t retry) 4350 { 4351 uint64_t guid = 0; 4352 4353 /* 4354 * If retry is B_TRUE, a spa must not be specified since we have 4355 * no good way to determine if all of a spa's buffers have been 4356 * evicted from an arc state. 4357 */ 4358 ASSERT(!retry || spa == 0); 4359 4360 if (spa != NULL) 4361 guid = spa_load_guid(spa); 4362 4363 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry); 4364 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry); 4365 4366 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry); 4367 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry); 4368 4369 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry); 4370 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry); 4371 4372 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry); 4373 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry); 4374 } 4375 4376 static void 4377 arc_reduce_target_size(int64_t to_free) 4378 { 4379 uint64_t asize = aggsum_value(&arc_size); 4380 if (arc_c > arc_c_min) { 4381 4382 if (arc_c > arc_c_min + to_free) 4383 atomic_add_64(&arc_c, -to_free); 4384 else 4385 arc_c = arc_c_min; 4386 4387 atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift)); 4388 if (asize < arc_c) 4389 arc_c = MAX(asize, arc_c_min); 4390 if (arc_p > arc_c) 4391 arc_p = (arc_c >> 1); 4392 ASSERT(arc_c >= arc_c_min); 4393 ASSERT((int64_t)arc_p >= 0); 4394 } 4395 4396 if (asize > arc_c) { 4397 /* See comment in arc_adjust_cb_check() on why lock+flag */ 4398 mutex_enter(&arc_adjust_lock); 4399 arc_adjust_needed = B_TRUE; 4400 mutex_exit(&arc_adjust_lock); 4401 zthr_wakeup(arc_adjust_zthr); 4402 } 4403 } 4404 4405 typedef enum free_memory_reason_t { 4406 FMR_UNKNOWN, 4407 FMR_NEEDFREE, 4408 FMR_LOTSFREE, 4409 FMR_SWAPFS_MINFREE, 4410 FMR_PAGES_PP_MAXIMUM, 4411 FMR_HEAP_ARENA, 4412 FMR_ZIO_ARENA, 4413 } free_memory_reason_t; 4414 4415 int64_t last_free_memory; 4416 free_memory_reason_t last_free_reason; 4417 4418 /* 4419 * Additional reserve of pages for pp_reserve. 4420 */ 4421 int64_t arc_pages_pp_reserve = 64; 4422 4423 /* 4424 * Additional reserve of pages for swapfs. 4425 */ 4426 int64_t arc_swapfs_reserve = 64; 4427 4428 /* 4429 * Return the amount of memory that can be consumed before reclaim will be 4430 * needed. Positive if there is sufficient free memory, negative indicates 4431 * the amount of memory that needs to be freed up. 4432 */ 4433 static int64_t 4434 arc_available_memory(void) 4435 { 4436 int64_t lowest = INT64_MAX; 4437 int64_t n; 4438 free_memory_reason_t r = FMR_UNKNOWN; 4439 4440 #ifdef _KERNEL 4441 if (needfree > 0) { 4442 n = PAGESIZE * (-needfree); 4443 if (n < lowest) { 4444 lowest = n; 4445 r = FMR_NEEDFREE; 4446 } 4447 } 4448 4449 /* 4450 * check that we're out of range of the pageout scanner. It starts to 4451 * schedule paging if freemem is less than lotsfree and needfree. 4452 * lotsfree is the high-water mark for pageout, and needfree is the 4453 * number of needed free pages. We add extra pages here to make sure 4454 * the scanner doesn't start up while we're freeing memory. 4455 */ 4456 n = PAGESIZE * (freemem - lotsfree - needfree - desfree); 4457 if (n < lowest) { 4458 lowest = n; 4459 r = FMR_LOTSFREE; 4460 } 4461 4462 /* 4463 * check to make sure that swapfs has enough space so that anon 4464 * reservations can still succeed. anon_resvmem() checks that the 4465 * availrmem is greater than swapfs_minfree, and the number of reserved 4466 * swap pages. We also add a bit of extra here just to prevent 4467 * circumstances from getting really dire. 4468 */ 4469 n = PAGESIZE * (availrmem - swapfs_minfree - swapfs_reserve - 4470 desfree - arc_swapfs_reserve); 4471 if (n < lowest) { 4472 lowest = n; 4473 r = FMR_SWAPFS_MINFREE; 4474 } 4475 4476 4477 /* 4478 * Check that we have enough availrmem that memory locking (e.g., via 4479 * mlock(3C) or memcntl(2)) can still succeed. (pages_pp_maximum 4480 * stores the number of pages that cannot be locked; when availrmem 4481 * drops below pages_pp_maximum, page locking mechanisms such as 4482 * page_pp_lock() will fail.) 4483 */ 4484 n = PAGESIZE * (availrmem - pages_pp_maximum - 4485 arc_pages_pp_reserve); 4486 if (n < lowest) { 4487 lowest = n; 4488 r = FMR_PAGES_PP_MAXIMUM; 4489 } 4490 4491 #if defined(__i386) 4492 /* 4493 * If we're on an i386 platform, it's possible that we'll exhaust the 4494 * kernel heap space before we ever run out of available physical 4495 * memory. Most checks of the size of the heap_area compare against 4496 * tune.t_minarmem, which is the minimum available real memory that we 4497 * can have in the system. However, this is generally fixed at 25 pages 4498 * which is so low that it's useless. In this comparison, we seek to 4499 * calculate the total heap-size, and reclaim if more than 3/4ths of the 4500 * heap is allocated. (Or, in the calculation, if less than 1/4th is 4501 * free) 4502 */ 4503 n = (int64_t)vmem_size(heap_arena, VMEM_FREE) - 4504 (vmem_size(heap_arena, VMEM_FREE | VMEM_ALLOC) >> 2); 4505 if (n < lowest) { 4506 lowest = n; 4507 r = FMR_HEAP_ARENA; 4508 } 4509 #endif 4510 4511 /* 4512 * If zio data pages are being allocated out of a separate heap segment, 4513 * then enforce that the size of available vmem for this arena remains 4514 * above about 1/4th (1/(2^arc_zio_arena_free_shift)) free. 4515 * 4516 * Note that reducing the arc_zio_arena_free_shift keeps more virtual 4517 * memory (in the zio_arena) free, which can avoid memory 4518 * fragmentation issues. 4519 */ 4520 if (zio_arena != NULL) { 4521 n = (int64_t)vmem_size(zio_arena, VMEM_FREE) - 4522 (vmem_size(zio_arena, VMEM_ALLOC) >> 4523 arc_zio_arena_free_shift); 4524 if (n < lowest) { 4525 lowest = n; 4526 r = FMR_ZIO_ARENA; 4527 } 4528 } 4529 #else 4530 /* Every 100 calls, free a small amount */ 4531 if (spa_get_random(100) == 0) 4532 lowest = -1024; 4533 #endif 4534 4535 last_free_memory = lowest; 4536 last_free_reason = r; 4537 4538 return (lowest); 4539 } 4540 4541 4542 /* 4543 * Determine if the system is under memory pressure and is asking 4544 * to reclaim memory. A return value of B_TRUE indicates that the system 4545 * is under memory pressure and that the arc should adjust accordingly. 4546 */ 4547 static boolean_t 4548 arc_reclaim_needed(void) 4549 { 4550 return (arc_available_memory() < 0); 4551 } 4552 4553 static void 4554 arc_kmem_reap_soon(void) 4555 { 4556 size_t i; 4557 kmem_cache_t *prev_cache = NULL; 4558 kmem_cache_t *prev_data_cache = NULL; 4559 extern kmem_cache_t *zio_buf_cache[]; 4560 extern kmem_cache_t *zio_data_buf_cache[]; 4561 extern kmem_cache_t *zfs_btree_leaf_cache; 4562 extern kmem_cache_t *abd_chunk_cache; 4563 4564 #ifdef _KERNEL 4565 if (aggsum_compare(&arc_meta_used, arc_meta_limit) >= 0) { 4566 /* 4567 * We are exceeding our meta-data cache limit. 4568 * Purge some DNLC entries to release holds on meta-data. 4569 */ 4570 dnlc_reduce_cache((void *)(uintptr_t)arc_reduce_dnlc_percent); 4571 } 4572 #if defined(__i386) 4573 /* 4574 * Reclaim unused memory from all kmem caches. 4575 */ 4576 kmem_reap(); 4577 #endif 4578 #endif 4579 4580 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { 4581 if (zio_buf_cache[i] != prev_cache) { 4582 prev_cache = zio_buf_cache[i]; 4583 kmem_cache_reap_soon(zio_buf_cache[i]); 4584 } 4585 if (zio_data_buf_cache[i] != prev_data_cache) { 4586 prev_data_cache = zio_data_buf_cache[i]; 4587 kmem_cache_reap_soon(zio_data_buf_cache[i]); 4588 } 4589 } 4590 kmem_cache_reap_soon(abd_chunk_cache); 4591 kmem_cache_reap_soon(buf_cache); 4592 kmem_cache_reap_soon(hdr_full_cache); 4593 kmem_cache_reap_soon(hdr_l2only_cache); 4594 kmem_cache_reap_soon(zfs_btree_leaf_cache); 4595 4596 if (zio_arena != NULL) { 4597 /* 4598 * Ask the vmem arena to reclaim unused memory from its 4599 * quantum caches. 4600 */ 4601 vmem_qcache_reap(zio_arena); 4602 } 4603 } 4604 4605 /* ARGSUSED */ 4606 static boolean_t 4607 arc_adjust_cb_check(void *arg, zthr_t *zthr) 4608 { 4609 /* 4610 * This is necessary in order for the mdb ::arc dcmd to 4611 * show up to date information. Since the ::arc command 4612 * does not call the kstat's update function, without 4613 * this call, the command may show stale stats for the 4614 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even 4615 * with this change, the data might be up to 1 second 4616 * out of date(the arc_adjust_zthr has a maximum sleep 4617 * time of 1 second); but that should suffice. The 4618 * arc_state_t structures can be queried directly if more 4619 * accurate information is needed. 4620 */ 4621 if (arc_ksp != NULL) 4622 arc_ksp->ks_update(arc_ksp, KSTAT_READ); 4623 4624 /* 4625 * We have to rely on arc_get_data_impl() to tell us when to adjust, 4626 * rather than checking if we are overflowing here, so that we are 4627 * sure to not leave arc_get_data_impl() waiting on 4628 * arc_adjust_waiters_cv. If we have become "not overflowing" since 4629 * arc_get_data_impl() checked, we need to wake it up. We could 4630 * broadcast the CV here, but arc_get_data_impl() may have not yet 4631 * gone to sleep. We would need to use a mutex to ensure that this 4632 * function doesn't broadcast until arc_get_data_impl() has gone to 4633 * sleep (e.g. the arc_adjust_lock). However, the lock ordering of 4634 * such a lock would necessarily be incorrect with respect to the 4635 * zthr_lock, which is held before this function is called, and is 4636 * held by arc_get_data_impl() when it calls zthr_wakeup(). 4637 */ 4638 return (arc_adjust_needed); 4639 } 4640 4641 /* 4642 * Keep arc_size under arc_c by running arc_adjust which evicts data 4643 * from the ARC. 4644 */ 4645 /* ARGSUSED */ 4646 static void 4647 arc_adjust_cb(void *arg, zthr_t *zthr) 4648 { 4649 uint64_t evicted = 0; 4650 4651 /* Evict from cache */ 4652 evicted = arc_adjust(); 4653 4654 /* 4655 * If evicted is zero, we couldn't evict anything 4656 * via arc_adjust(). This could be due to hash lock 4657 * collisions, but more likely due to the majority of 4658 * arc buffers being unevictable. Therefore, even if 4659 * arc_size is above arc_c, another pass is unlikely to 4660 * be helpful and could potentially cause us to enter an 4661 * infinite loop. Additionally, zthr_iscancelled() is 4662 * checked here so that if the arc is shutting down, the 4663 * broadcast will wake any remaining arc adjust waiters. 4664 */ 4665 mutex_enter(&arc_adjust_lock); 4666 arc_adjust_needed = !zthr_iscancelled(arc_adjust_zthr) && 4667 evicted > 0 && aggsum_compare(&arc_size, arc_c) > 0; 4668 if (!arc_adjust_needed) { 4669 /* 4670 * We're either no longer overflowing, or we 4671 * can't evict anything more, so we should wake 4672 * up any waiters. 4673 */ 4674 cv_broadcast(&arc_adjust_waiters_cv); 4675 } 4676 mutex_exit(&arc_adjust_lock); 4677 } 4678 4679 /* ARGSUSED */ 4680 static boolean_t 4681 arc_reap_cb_check(void *arg, zthr_t *zthr) 4682 { 4683 int64_t free_memory = arc_available_memory(); 4684 4685 /* 4686 * If a kmem reap is already active, don't schedule more. We must 4687 * check for this because kmem_cache_reap_soon() won't actually 4688 * block on the cache being reaped (this is to prevent callers from 4689 * becoming implicitly blocked by a system-wide kmem reap -- which, 4690 * on a system with many, many full magazines, can take minutes). 4691 */ 4692 if (!kmem_cache_reap_active() && 4693 free_memory < 0) { 4694 arc_no_grow = B_TRUE; 4695 arc_warm = B_TRUE; 4696 /* 4697 * Wait at least zfs_grow_retry (default 60) seconds 4698 * before considering growing. 4699 */ 4700 arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); 4701 return (B_TRUE); 4702 } else if (free_memory < arc_c >> arc_no_grow_shift) { 4703 arc_no_grow = B_TRUE; 4704 } else if (gethrtime() >= arc_growtime) { 4705 arc_no_grow = B_FALSE; 4706 } 4707 4708 return (B_FALSE); 4709 } 4710 4711 /* 4712 * Keep enough free memory in the system by reaping the ARC's kmem 4713 * caches. To cause more slabs to be reapable, we may reduce the 4714 * target size of the cache (arc_c), causing the arc_adjust_cb() 4715 * to free more buffers. 4716 */ 4717 /* ARGSUSED */ 4718 static void 4719 arc_reap_cb(void *arg, zthr_t *zthr) 4720 { 4721 int64_t free_memory; 4722 4723 /* 4724 * Kick off asynchronous kmem_reap()'s of all our caches. 4725 */ 4726 arc_kmem_reap_soon(); 4727 4728 /* 4729 * Wait at least arc_kmem_cache_reap_retry_ms between 4730 * arc_kmem_reap_soon() calls. Without this check it is possible to 4731 * end up in a situation where we spend lots of time reaping 4732 * caches, while we're near arc_c_min. Waiting here also gives the 4733 * subsequent free memory check a chance of finding that the 4734 * asynchronous reap has already freed enough memory, and we don't 4735 * need to call arc_reduce_target_size(). 4736 */ 4737 delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); 4738 4739 /* 4740 * Reduce the target size as needed to maintain the amount of free 4741 * memory in the system at a fraction of the arc_size (1/128th by 4742 * default). If oversubscribed (free_memory < 0) then reduce the 4743 * target arc_size by the deficit amount plus the fractional 4744 * amount. If free memory is positive but less then the fractional 4745 * amount, reduce by what is needed to hit the fractional amount. 4746 */ 4747 free_memory = arc_available_memory(); 4748 4749 int64_t to_free = 4750 (arc_c >> arc_shrink_shift) - free_memory; 4751 if (to_free > 0) { 4752 #ifdef _KERNEL 4753 to_free = MAX(to_free, ptob(needfree)); 4754 #endif 4755 arc_reduce_target_size(to_free); 4756 } 4757 } 4758 4759 /* 4760 * Adapt arc info given the number of bytes we are trying to add and 4761 * the state that we are coming from. This function is only called 4762 * when we are adding new content to the cache. 4763 */ 4764 static void 4765 arc_adapt(int bytes, arc_state_t *state) 4766 { 4767 int mult; 4768 uint64_t arc_p_min = (arc_c >> arc_p_min_shift); 4769 int64_t mrug_size = zfs_refcount_count(&arc_mru_ghost->arcs_size); 4770 int64_t mfug_size = zfs_refcount_count(&arc_mfu_ghost->arcs_size); 4771 4772 if (state == arc_l2c_only) 4773 return; 4774 4775 ASSERT(bytes > 0); 4776 /* 4777 * Adapt the target size of the MRU list: 4778 * - if we just hit in the MRU ghost list, then increase 4779 * the target size of the MRU list. 4780 * - if we just hit in the MFU ghost list, then increase 4781 * the target size of the MFU list by decreasing the 4782 * target size of the MRU list. 4783 */ 4784 if (state == arc_mru_ghost) { 4785 mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size); 4786 mult = MIN(mult, 10); /* avoid wild arc_p adjustment */ 4787 4788 arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult); 4789 } else if (state == arc_mfu_ghost) { 4790 uint64_t delta; 4791 4792 mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size); 4793 mult = MIN(mult, 10); 4794 4795 delta = MIN(bytes * mult, arc_p); 4796 arc_p = MAX(arc_p_min, arc_p - delta); 4797 } 4798 ASSERT((int64_t)arc_p >= 0); 4799 4800 /* 4801 * Wake reap thread if we do not have any available memory 4802 */ 4803 if (arc_reclaim_needed()) { 4804 zthr_wakeup(arc_reap_zthr); 4805 return; 4806 } 4807 4808 4809 if (arc_no_grow) 4810 return; 4811 4812 if (arc_c >= arc_c_max) 4813 return; 4814 4815 /* 4816 * If we're within (2 * maxblocksize) bytes of the target 4817 * cache size, increment the target cache size 4818 */ 4819 if (aggsum_compare(&arc_size, arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) > 4820 0) { 4821 atomic_add_64(&arc_c, (int64_t)bytes); 4822 if (arc_c > arc_c_max) 4823 arc_c = arc_c_max; 4824 else if (state == arc_anon) 4825 atomic_add_64(&arc_p, (int64_t)bytes); 4826 if (arc_p > arc_c) 4827 arc_p = arc_c; 4828 } 4829 ASSERT((int64_t)arc_p >= 0); 4830 } 4831 4832 /* 4833 * Check if arc_size has grown past our upper threshold, determined by 4834 * zfs_arc_overflow_shift. 4835 */ 4836 static boolean_t 4837 arc_is_overflowing(void) 4838 { 4839 /* Always allow at least one block of overflow */ 4840 uint64_t overflow = MAX(SPA_MAXBLOCKSIZE, 4841 arc_c >> zfs_arc_overflow_shift); 4842 4843 /* 4844 * We just compare the lower bound here for performance reasons. Our 4845 * primary goals are to make sure that the arc never grows without 4846 * bound, and that it can reach its maximum size. This check 4847 * accomplishes both goals. The maximum amount we could run over by is 4848 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block 4849 * in the ARC. In practice, that's in the tens of MB, which is low 4850 * enough to be safe. 4851 */ 4852 return (aggsum_lower_bound(&arc_size) >= arc_c + overflow); 4853 } 4854 4855 static abd_t * 4856 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag, 4857 boolean_t do_adapt) 4858 { 4859 arc_buf_contents_t type = arc_buf_type(hdr); 4860 4861 arc_get_data_impl(hdr, size, tag, do_adapt); 4862 if (type == ARC_BUFC_METADATA) { 4863 return (abd_alloc(size, B_TRUE)); 4864 } else { 4865 ASSERT(type == ARC_BUFC_DATA); 4866 return (abd_alloc(size, B_FALSE)); 4867 } 4868 } 4869 4870 static void * 4871 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag) 4872 { 4873 arc_buf_contents_t type = arc_buf_type(hdr); 4874 4875 arc_get_data_impl(hdr, size, tag, B_TRUE); 4876 if (type == ARC_BUFC_METADATA) { 4877 return (zio_buf_alloc(size)); 4878 } else { 4879 ASSERT(type == ARC_BUFC_DATA); 4880 return (zio_data_buf_alloc(size)); 4881 } 4882 } 4883 4884 /* 4885 * Allocate a block and return it to the caller. If we are hitting the 4886 * hard limit for the cache size, we must sleep, waiting for the eviction 4887 * thread to catch up. If we're past the target size but below the hard 4888 * limit, we'll only signal the reclaim thread and continue on. 4889 */ 4890 static void 4891 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag, 4892 boolean_t do_adapt) 4893 { 4894 arc_state_t *state = hdr->b_l1hdr.b_state; 4895 arc_buf_contents_t type = arc_buf_type(hdr); 4896 4897 if (do_adapt) 4898 arc_adapt(size, state); 4899 4900 /* 4901 * If arc_size is currently overflowing, and has grown past our 4902 * upper limit, we must be adding data faster than the evict 4903 * thread can evict. Thus, to ensure we don't compound the 4904 * problem by adding more data and forcing arc_size to grow even 4905 * further past its target size, we halt and wait for the 4906 * eviction thread to catch up. 4907 * 4908 * It's also possible that the reclaim thread is unable to evict 4909 * enough buffers to get arc_size below the overflow limit (e.g. 4910 * due to buffers being un-evictable, or hash lock collisions). 4911 * In this case, we want to proceed regardless if we're 4912 * overflowing; thus we don't use a while loop here. 4913 */ 4914 if (arc_is_overflowing()) { 4915 mutex_enter(&arc_adjust_lock); 4916 4917 /* 4918 * Now that we've acquired the lock, we may no longer be 4919 * over the overflow limit, lets check. 4920 * 4921 * We're ignoring the case of spurious wake ups. If that 4922 * were to happen, it'd let this thread consume an ARC 4923 * buffer before it should have (i.e. before we're under 4924 * the overflow limit and were signalled by the reclaim 4925 * thread). As long as that is a rare occurrence, it 4926 * shouldn't cause any harm. 4927 */ 4928 if (arc_is_overflowing()) { 4929 arc_adjust_needed = B_TRUE; 4930 zthr_wakeup(arc_adjust_zthr); 4931 (void) cv_wait(&arc_adjust_waiters_cv, 4932 &arc_adjust_lock); 4933 } 4934 mutex_exit(&arc_adjust_lock); 4935 } 4936 4937 VERIFY3U(hdr->b_type, ==, type); 4938 if (type == ARC_BUFC_METADATA) { 4939 arc_space_consume(size, ARC_SPACE_META); 4940 } else { 4941 arc_space_consume(size, ARC_SPACE_DATA); 4942 } 4943 4944 /* 4945 * Update the state size. Note that ghost states have a 4946 * "ghost size" and so don't need to be updated. 4947 */ 4948 if (!GHOST_STATE(state)) { 4949 4950 (void) zfs_refcount_add_many(&state->arcs_size, size, tag); 4951 4952 /* 4953 * If this is reached via arc_read, the link is 4954 * protected by the hash lock. If reached via 4955 * arc_buf_alloc, the header should not be accessed by 4956 * any other thread. And, if reached via arc_read_done, 4957 * the hash lock will protect it if it's found in the 4958 * hash table; otherwise no other thread should be 4959 * trying to [add|remove]_reference it. 4960 */ 4961 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 4962 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 4963 (void) zfs_refcount_add_many(&state->arcs_esize[type], 4964 size, tag); 4965 } 4966 4967 /* 4968 * If we are growing the cache, and we are adding anonymous 4969 * data, and we have outgrown arc_p, update arc_p 4970 */ 4971 if (aggsum_compare(&arc_size, arc_c) < 0 && 4972 hdr->b_l1hdr.b_state == arc_anon && 4973 (zfs_refcount_count(&arc_anon->arcs_size) + 4974 zfs_refcount_count(&arc_mru->arcs_size) > arc_p)) 4975 arc_p = MIN(arc_c, arc_p + size); 4976 } 4977 } 4978 4979 static void 4980 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag) 4981 { 4982 arc_free_data_impl(hdr, size, tag); 4983 abd_free(abd); 4984 } 4985 4986 static void 4987 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag) 4988 { 4989 arc_buf_contents_t type = arc_buf_type(hdr); 4990 4991 arc_free_data_impl(hdr, size, tag); 4992 if (type == ARC_BUFC_METADATA) { 4993 zio_buf_free(buf, size); 4994 } else { 4995 ASSERT(type == ARC_BUFC_DATA); 4996 zio_data_buf_free(buf, size); 4997 } 4998 } 4999 5000 /* 5001 * Free the arc data buffer. 5002 */ 5003 static void 5004 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag) 5005 { 5006 arc_state_t *state = hdr->b_l1hdr.b_state; 5007 arc_buf_contents_t type = arc_buf_type(hdr); 5008 5009 /* protected by hash lock, if in the hash table */ 5010 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5011 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5012 ASSERT(state != arc_anon && state != arc_l2c_only); 5013 5014 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 5015 size, tag); 5016 } 5017 (void) zfs_refcount_remove_many(&state->arcs_size, size, tag); 5018 5019 VERIFY3U(hdr->b_type, ==, type); 5020 if (type == ARC_BUFC_METADATA) { 5021 arc_space_return(size, ARC_SPACE_META); 5022 } else { 5023 ASSERT(type == ARC_BUFC_DATA); 5024 arc_space_return(size, ARC_SPACE_DATA); 5025 } 5026 } 5027 5028 /* 5029 * This routine is called whenever a buffer is accessed. 5030 * NOTE: the hash lock is dropped in this function. 5031 */ 5032 static void 5033 arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock) 5034 { 5035 clock_t now; 5036 5037 ASSERT(MUTEX_HELD(hash_lock)); 5038 ASSERT(HDR_HAS_L1HDR(hdr)); 5039 5040 if (hdr->b_l1hdr.b_state == arc_anon) { 5041 /* 5042 * This buffer is not in the cache, and does not 5043 * appear in our "ghost" list. Add the new buffer 5044 * to the MRU state. 5045 */ 5046 5047 ASSERT0(hdr->b_l1hdr.b_arc_access); 5048 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5049 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5050 arc_change_state(arc_mru, hdr, hash_lock); 5051 5052 } else if (hdr->b_l1hdr.b_state == arc_mru) { 5053 now = ddi_get_lbolt(); 5054 5055 /* 5056 * If this buffer is here because of a prefetch, then either: 5057 * - clear the flag if this is a "referencing" read 5058 * (any subsequent access will bump this into the MFU state). 5059 * or 5060 * - move the buffer to the head of the list if this is 5061 * another prefetch (to make it less likely to be evicted). 5062 */ 5063 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5064 if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) { 5065 /* link protected by hash lock */ 5066 ASSERT(multilist_link_active( 5067 &hdr->b_l1hdr.b_arc_node)); 5068 } else { 5069 arc_hdr_clear_flags(hdr, 5070 ARC_FLAG_PREFETCH | 5071 ARC_FLAG_PRESCIENT_PREFETCH); 5072 ARCSTAT_BUMP(arcstat_mru_hits); 5073 } 5074 hdr->b_l1hdr.b_arc_access = now; 5075 return; 5076 } 5077 5078 /* 5079 * This buffer has been "accessed" only once so far, 5080 * but it is still in the cache. Move it to the MFU 5081 * state. 5082 */ 5083 if (now > hdr->b_l1hdr.b_arc_access + ARC_MINTIME) { 5084 /* 5085 * More than 125ms have passed since we 5086 * instantiated this buffer. Move it to the 5087 * most frequently used state. 5088 */ 5089 hdr->b_l1hdr.b_arc_access = now; 5090 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5091 arc_change_state(arc_mfu, hdr, hash_lock); 5092 } 5093 ARCSTAT_BUMP(arcstat_mru_hits); 5094 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { 5095 arc_state_t *new_state; 5096 /* 5097 * This buffer has been "accessed" recently, but 5098 * was evicted from the cache. Move it to the 5099 * MFU state. 5100 */ 5101 5102 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5103 new_state = arc_mru; 5104 if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) > 0) { 5105 arc_hdr_clear_flags(hdr, 5106 ARC_FLAG_PREFETCH | 5107 ARC_FLAG_PRESCIENT_PREFETCH); 5108 } 5109 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5110 } else { 5111 new_state = arc_mfu; 5112 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5113 } 5114 5115 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5116 arc_change_state(new_state, hdr, hash_lock); 5117 5118 ARCSTAT_BUMP(arcstat_mru_ghost_hits); 5119 } else if (hdr->b_l1hdr.b_state == arc_mfu) { 5120 /* 5121 * This buffer has been accessed more than once and is 5122 * still in the cache. Keep it in the MFU state. 5123 * 5124 * NOTE: an add_reference() that occurred when we did 5125 * the arc_read() will have kicked this off the list. 5126 * If it was a prefetch, we will explicitly move it to 5127 * the head of the list now. 5128 */ 5129 ARCSTAT_BUMP(arcstat_mfu_hits); 5130 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5131 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { 5132 arc_state_t *new_state = arc_mfu; 5133 /* 5134 * This buffer has been accessed more than once but has 5135 * been evicted from the cache. Move it back to the 5136 * MFU state. 5137 */ 5138 5139 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5140 /* 5141 * This is a prefetch access... 5142 * move this block back to the MRU state. 5143 */ 5144 new_state = arc_mru; 5145 } 5146 5147 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5148 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5149 arc_change_state(new_state, hdr, hash_lock); 5150 5151 ARCSTAT_BUMP(arcstat_mfu_ghost_hits); 5152 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { 5153 /* 5154 * This buffer is on the 2nd Level ARC. 5155 */ 5156 5157 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5158 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5159 arc_change_state(arc_mfu, hdr, hash_lock); 5160 } else { 5161 ASSERT(!"invalid arc state"); 5162 } 5163 } 5164 5165 /* 5166 * This routine is called by dbuf_hold() to update the arc_access() state 5167 * which otherwise would be skipped for entries in the dbuf cache. 5168 */ 5169 void 5170 arc_buf_access(arc_buf_t *buf) 5171 { 5172 mutex_enter(&buf->b_evict_lock); 5173 arc_buf_hdr_t *hdr = buf->b_hdr; 5174 5175 /* 5176 * Avoid taking the hash_lock when possible as an optimization. 5177 * The header must be checked again under the hash_lock in order 5178 * to handle the case where it is concurrently being released. 5179 */ 5180 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5181 mutex_exit(&buf->b_evict_lock); 5182 return; 5183 } 5184 5185 kmutex_t *hash_lock = HDR_LOCK(hdr); 5186 mutex_enter(hash_lock); 5187 5188 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5189 mutex_exit(hash_lock); 5190 mutex_exit(&buf->b_evict_lock); 5191 ARCSTAT_BUMP(arcstat_access_skip); 5192 return; 5193 } 5194 5195 mutex_exit(&buf->b_evict_lock); 5196 5197 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5198 hdr->b_l1hdr.b_state == arc_mfu); 5199 5200 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5201 arc_access(hdr, hash_lock); 5202 mutex_exit(hash_lock); 5203 5204 ARCSTAT_BUMP(arcstat_hits); 5205 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), 5206 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); 5207 } 5208 5209 /* a generic arc_read_done_func_t which you can use */ 5210 /* ARGSUSED */ 5211 void 5212 arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5213 arc_buf_t *buf, void *arg) 5214 { 5215 if (buf == NULL) 5216 return; 5217 5218 bcopy(buf->b_data, arg, arc_buf_size(buf)); 5219 arc_buf_destroy(buf, arg); 5220 } 5221 5222 /* a generic arc_read_done_func_t */ 5223 void 5224 arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5225 arc_buf_t *buf, void *arg) 5226 { 5227 arc_buf_t **bufp = arg; 5228 5229 if (buf == NULL) { 5230 ASSERT(zio == NULL || zio->io_error != 0); 5231 *bufp = NULL; 5232 } else { 5233 ASSERT(zio == NULL || zio->io_error == 0); 5234 *bufp = buf; 5235 ASSERT(buf->b_data != NULL); 5236 } 5237 } 5238 5239 static void 5240 arc_hdr_verify(arc_buf_hdr_t *hdr, const blkptr_t *bp) 5241 { 5242 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 5243 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); 5244 ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); 5245 } else { 5246 if (HDR_COMPRESSION_ENABLED(hdr)) { 5247 ASSERT3U(arc_hdr_get_compress(hdr), ==, 5248 BP_GET_COMPRESS(bp)); 5249 } 5250 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 5251 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); 5252 ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); 5253 } 5254 } 5255 5256 /* 5257 * XXX this should be changed to return an error, and callers 5258 * re-read from disk on failure (on nondebug bits). 5259 */ 5260 static void 5261 arc_hdr_verify_checksum(spa_t *spa, arc_buf_hdr_t *hdr, const blkptr_t *bp) 5262 { 5263 arc_hdr_verify(hdr, bp); 5264 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) 5265 return; 5266 int err = 0; 5267 abd_t *abd = NULL; 5268 if (BP_IS_ENCRYPTED(bp)) { 5269 if (HDR_HAS_RABD(hdr)) { 5270 abd = hdr->b_crypt_hdr.b_rabd; 5271 } 5272 } else if (HDR_COMPRESSION_ENABLED(hdr)) { 5273 abd = hdr->b_l1hdr.b_pabd; 5274 } 5275 if (abd != NULL) { 5276 /* 5277 * The offset is only used for labels, which are not 5278 * cached in the ARC, so it doesn't matter what we 5279 * pass for the offset parameter. 5280 */ 5281 int psize = HDR_GET_PSIZE(hdr); 5282 err = zio_checksum_error_impl(spa, bp, 5283 BP_GET_CHECKSUM(bp), abd, psize, 0, NULL); 5284 if (err != 0) { 5285 /* 5286 * Use abd_copy_to_buf() rather than 5287 * abd_borrow_buf_copy() so that we are sure to 5288 * include the buf in crash dumps. 5289 */ 5290 void *buf = kmem_alloc(psize, KM_SLEEP); 5291 abd_copy_to_buf(buf, abd, psize); 5292 panic("checksum of cached data doesn't match BP " 5293 "err=%u hdr=%p bp=%p abd=%p buf=%p", 5294 err, (void *)hdr, (void *)bp, (void *)abd, buf); 5295 } 5296 } 5297 } 5298 5299 static void 5300 arc_read_done(zio_t *zio) 5301 { 5302 blkptr_t *bp = zio->io_bp; 5303 arc_buf_hdr_t *hdr = zio->io_private; 5304 kmutex_t *hash_lock = NULL; 5305 arc_callback_t *callback_list; 5306 arc_callback_t *acb; 5307 boolean_t freeable = B_FALSE; 5308 5309 /* 5310 * The hdr was inserted into hash-table and removed from lists 5311 * prior to starting I/O. We should find this header, since 5312 * it's in the hash table, and it should be legit since it's 5313 * not possible to evict it during the I/O. The only possible 5314 * reason for it not to be found is if we were freed during the 5315 * read. 5316 */ 5317 if (HDR_IN_HASH_TABLE(hdr)) { 5318 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); 5319 ASSERT3U(hdr->b_dva.dva_word[0], ==, 5320 BP_IDENTITY(zio->io_bp)->dva_word[0]); 5321 ASSERT3U(hdr->b_dva.dva_word[1], ==, 5322 BP_IDENTITY(zio->io_bp)->dva_word[1]); 5323 5324 arc_buf_hdr_t *found = buf_hash_find(hdr->b_spa, zio->io_bp, 5325 &hash_lock); 5326 5327 ASSERT((found == hdr && 5328 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || 5329 (found == hdr && HDR_L2_READING(hdr))); 5330 ASSERT3P(hash_lock, !=, NULL); 5331 } 5332 5333 if (BP_IS_PROTECTED(bp)) { 5334 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 5335 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 5336 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 5337 hdr->b_crypt_hdr.b_iv); 5338 5339 if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { 5340 void *tmpbuf; 5341 5342 tmpbuf = abd_borrow_buf_copy(zio->io_abd, 5343 sizeof (zil_chain_t)); 5344 zio_crypt_decode_mac_zil(tmpbuf, 5345 hdr->b_crypt_hdr.b_mac); 5346 abd_return_buf(zio->io_abd, tmpbuf, 5347 sizeof (zil_chain_t)); 5348 } else { 5349 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 5350 } 5351 } 5352 5353 if (zio->io_error == 0) { 5354 /* byteswap if necessary */ 5355 if (BP_SHOULD_BYTESWAP(zio->io_bp)) { 5356 if (BP_GET_LEVEL(zio->io_bp) > 0) { 5357 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 5358 } else { 5359 hdr->b_l1hdr.b_byteswap = 5360 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); 5361 } 5362 } else { 5363 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 5364 } 5365 } 5366 5367 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); 5368 if (l2arc_noprefetch && HDR_PREFETCH(hdr)) 5369 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); 5370 5371 callback_list = hdr->b_l1hdr.b_acb; 5372 ASSERT3P(callback_list, !=, NULL); 5373 5374 if (hash_lock && zio->io_error == 0 && 5375 hdr->b_l1hdr.b_state == arc_anon) { 5376 /* 5377 * Only call arc_access on anonymous buffers. This is because 5378 * if we've issued an I/O for an evicted buffer, we've already 5379 * called arc_access (to prevent any simultaneous readers from 5380 * getting confused). 5381 */ 5382 arc_access(hdr, hash_lock); 5383 } 5384 5385 /* 5386 * If a read request has a callback (i.e. acb_done is not NULL), then we 5387 * make a buf containing the data according to the parameters which were 5388 * passed in. The implementation of arc_buf_alloc_impl() ensures that we 5389 * aren't needlessly decompressing the data multiple times. 5390 */ 5391 int callback_cnt = 0; 5392 for (acb = callback_list; acb != NULL; acb = acb->acb_next) { 5393 if (!acb->acb_done) 5394 continue; 5395 5396 callback_cnt++; 5397 5398 if (zio->io_error != 0) 5399 continue; 5400 5401 int error = arc_buf_alloc_impl(hdr, zio->io_spa, 5402 &acb->acb_zb, acb->acb_private, acb->acb_encrypted, 5403 acb->acb_compressed, acb->acb_noauth, B_TRUE, 5404 &acb->acb_buf); 5405 5406 /* 5407 * Assert non-speculative zios didn't fail because an 5408 * encryption key wasn't loaded 5409 */ 5410 ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || 5411 error != EACCES); 5412 5413 /* 5414 * If we failed to decrypt, report an error now (as the zio 5415 * layer would have done if it had done the transforms). 5416 */ 5417 if (error == ECKSUM) { 5418 ASSERT(BP_IS_PROTECTED(bp)); 5419 error = SET_ERROR(EIO); 5420 if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5421 spa_log_error(zio->io_spa, &acb->acb_zb); 5422 (void) zfs_ereport_post( 5423 FM_EREPORT_ZFS_AUTHENTICATION, 5424 zio->io_spa, NULL, &acb->acb_zb, zio, 0, 0); 5425 } 5426 } 5427 5428 if (error != 0) { 5429 /* 5430 * Decompression failed. Set io_error 5431 * so that when we call acb_done (below), 5432 * we will indicate that the read failed. 5433 * Note that in the unusual case where one 5434 * callback is compressed and another 5435 * uncompressed, we will mark all of them 5436 * as failed, even though the uncompressed 5437 * one can't actually fail. In this case, 5438 * the hdr will not be anonymous, because 5439 * if there are multiple callbacks, it's 5440 * because multiple threads found the same 5441 * arc buf in the hash table. 5442 */ 5443 zio->io_error = error; 5444 } 5445 } 5446 5447 /* 5448 * If there are multiple callbacks, we must have the hash lock, 5449 * because the only way for multiple threads to find this hdr is 5450 * in the hash table. This ensures that if there are multiple 5451 * callbacks, the hdr is not anonymous. If it were anonymous, 5452 * we couldn't use arc_buf_destroy() in the error case below. 5453 */ 5454 ASSERT(callback_cnt < 2 || hash_lock != NULL); 5455 5456 hdr->b_l1hdr.b_acb = NULL; 5457 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5458 if (callback_cnt == 0) 5459 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 5460 5461 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt) || 5462 callback_list != NULL); 5463 5464 if (zio->io_error == 0) { 5465 arc_hdr_verify(hdr, zio->io_bp); 5466 } else { 5467 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 5468 if (hdr->b_l1hdr.b_state != arc_anon) 5469 arc_change_state(arc_anon, hdr, hash_lock); 5470 if (HDR_IN_HASH_TABLE(hdr)) 5471 buf_hash_remove(hdr); 5472 freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); 5473 } 5474 5475 /* 5476 * Broadcast before we drop the hash_lock to avoid the possibility 5477 * that the hdr (and hence the cv) might be freed before we get to 5478 * the cv_broadcast(). 5479 */ 5480 cv_broadcast(&hdr->b_l1hdr.b_cv); 5481 5482 if (hash_lock != NULL) { 5483 mutex_exit(hash_lock); 5484 } else { 5485 /* 5486 * This block was freed while we waited for the read to 5487 * complete. It has been removed from the hash table and 5488 * moved to the anonymous state (so that it won't show up 5489 * in the cache). 5490 */ 5491 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 5492 freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); 5493 } 5494 5495 /* execute each callback and free its structure */ 5496 while ((acb = callback_list) != NULL) { 5497 5498 if (acb->acb_done != NULL) { 5499 if (zio->io_error != 0 && acb->acb_buf != NULL) { 5500 /* 5501 * If arc_buf_alloc_impl() fails during 5502 * decompression, the buf will still be 5503 * allocated, and needs to be freed here. 5504 */ 5505 arc_buf_destroy(acb->acb_buf, acb->acb_private); 5506 acb->acb_buf = NULL; 5507 } 5508 acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, 5509 acb->acb_buf, acb->acb_private); 5510 } 5511 5512 if (acb->acb_zio_dummy != NULL) { 5513 acb->acb_zio_dummy->io_error = zio->io_error; 5514 zio_nowait(acb->acb_zio_dummy); 5515 } 5516 5517 callback_list = acb->acb_next; 5518 kmem_free(acb, sizeof (arc_callback_t)); 5519 } 5520 5521 if (freeable) 5522 arc_hdr_destroy(hdr); 5523 } 5524 5525 /* 5526 * "Read" the block at the specified DVA (in bp) via the 5527 * cache. If the block is found in the cache, invoke the provided 5528 * callback immediately and return. Note that the `zio' parameter 5529 * in the callback will be NULL in this case, since no IO was 5530 * required. If the block is not in the cache pass the read request 5531 * on to the spa with a substitute callback function, so that the 5532 * requested block will be added to the cache. 5533 * 5534 * If a read request arrives for a block that has a read in-progress, 5535 * either wait for the in-progress read to complete (and return the 5536 * results); or, if this is a read with a "done" func, add a record 5537 * to the read to invoke the "done" func when the read completes, 5538 * and return; or just return. 5539 * 5540 * arc_read_done() will invoke all the requested "done" functions 5541 * for readers of this block. 5542 */ 5543 int 5544 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, arc_read_done_func_t *done, 5545 void *private, zio_priority_t priority, int zio_flags, 5546 arc_flags_t *arc_flags, const zbookmark_phys_t *zb) 5547 { 5548 arc_buf_hdr_t *hdr = NULL; 5549 kmutex_t *hash_lock = NULL; 5550 zio_t *rzio; 5551 uint64_t guid = spa_load_guid(spa); 5552 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; 5553 boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && 5554 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5555 boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && 5556 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5557 int rc = 0; 5558 5559 ASSERT(!BP_IS_EMBEDDED(bp) || 5560 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); 5561 5562 top: 5563 if (!BP_IS_EMBEDDED(bp)) { 5564 /* 5565 * Embedded BP's have no DVA and require no I/O to "read". 5566 * Create an anonymous arc buf to back it. 5567 */ 5568 hdr = buf_hash_find(guid, bp, &hash_lock); 5569 } 5570 5571 /* 5572 * Determine if we have an L1 cache hit or a cache miss. For simplicity 5573 * we maintain encrypted data seperately from compressed / uncompressed 5574 * data. If the user is requesting raw encrypted data and we don't have 5575 * that in the header we will read from disk to guarantee that we can 5576 * get it even if the encryption keys aren't loaded. 5577 */ 5578 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || 5579 (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { 5580 arc_buf_t *buf = NULL; 5581 *arc_flags |= ARC_FLAG_CACHED; 5582 5583 if (HDR_IO_IN_PROGRESS(hdr)) { 5584 zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; 5585 5586 ASSERT3P(head_zio, !=, NULL); 5587 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && 5588 priority == ZIO_PRIORITY_SYNC_READ) { 5589 /* 5590 * This is a sync read that needs to wait for 5591 * an in-flight async read. Request that the 5592 * zio have its priority upgraded. 5593 */ 5594 zio_change_priority(head_zio, priority); 5595 DTRACE_PROBE1(arc__async__upgrade__sync, 5596 arc_buf_hdr_t *, hdr); 5597 ARCSTAT_BUMP(arcstat_async_upgrade_sync); 5598 } 5599 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { 5600 arc_hdr_clear_flags(hdr, 5601 ARC_FLAG_PREDICTIVE_PREFETCH); 5602 } 5603 5604 if (*arc_flags & ARC_FLAG_WAIT) { 5605 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 5606 mutex_exit(hash_lock); 5607 goto top; 5608 } 5609 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 5610 5611 if (done) { 5612 arc_callback_t *acb = NULL; 5613 5614 acb = kmem_zalloc(sizeof (arc_callback_t), 5615 KM_SLEEP); 5616 acb->acb_done = done; 5617 acb->acb_private = private; 5618 acb->acb_compressed = compressed_read; 5619 acb->acb_encrypted = encrypted_read; 5620 acb->acb_noauth = noauth_read; 5621 acb->acb_zb = *zb; 5622 if (pio != NULL) 5623 acb->acb_zio_dummy = zio_null(pio, 5624 spa, NULL, NULL, NULL, zio_flags); 5625 5626 ASSERT3P(acb->acb_done, !=, NULL); 5627 acb->acb_zio_head = head_zio; 5628 acb->acb_next = hdr->b_l1hdr.b_acb; 5629 hdr->b_l1hdr.b_acb = acb; 5630 mutex_exit(hash_lock); 5631 return (0); 5632 } 5633 mutex_exit(hash_lock); 5634 return (0); 5635 } 5636 5637 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5638 hdr->b_l1hdr.b_state == arc_mfu); 5639 5640 if (done) { 5641 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { 5642 /* 5643 * This is a demand read which does not have to 5644 * wait for i/o because we did a predictive 5645 * prefetch i/o for it, which has completed. 5646 */ 5647 DTRACE_PROBE1( 5648 arc__demand__hit__predictive__prefetch, 5649 arc_buf_hdr_t *, hdr); 5650 ARCSTAT_BUMP( 5651 arcstat_demand_hit_predictive_prefetch); 5652 arc_hdr_clear_flags(hdr, 5653 ARC_FLAG_PREDICTIVE_PREFETCH); 5654 } 5655 5656 if (hdr->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) { 5657 ARCSTAT_BUMP( 5658 arcstat_demand_hit_prescient_prefetch); 5659 arc_hdr_clear_flags(hdr, 5660 ARC_FLAG_PRESCIENT_PREFETCH); 5661 } 5662 5663 ASSERT(!BP_IS_EMBEDDED(bp) || !BP_IS_HOLE(bp)); 5664 5665 arc_hdr_verify_checksum(spa, hdr, bp); 5666 5667 /* Get a buf with the desired data in it. */ 5668 rc = arc_buf_alloc_impl(hdr, spa, zb, private, 5669 encrypted_read, compressed_read, noauth_read, 5670 B_TRUE, &buf); 5671 if (rc == ECKSUM) { 5672 /* 5673 * Convert authentication and decryption errors 5674 * to EIO (and generate an ereport if needed) 5675 * before leaving the ARC. 5676 */ 5677 rc = SET_ERROR(EIO); 5678 if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5679 spa_log_error(spa, zb); 5680 (void) zfs_ereport_post( 5681 FM_EREPORT_ZFS_AUTHENTICATION, 5682 spa, NULL, zb, NULL, 0, 0); 5683 } 5684 } 5685 if (rc != 0) { 5686 (void) remove_reference(hdr, hash_lock, 5687 private); 5688 arc_buf_destroy_impl(buf); 5689 buf = NULL; 5690 } 5691 /* assert any errors weren't due to unloaded keys */ 5692 ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || 5693 rc != EACCES); 5694 } else if (*arc_flags & ARC_FLAG_PREFETCH && 5695 zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) { 5696 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 5697 } 5698 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5699 arc_access(hdr, hash_lock); 5700 if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) 5701 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 5702 if (*arc_flags & ARC_FLAG_L2CACHE) 5703 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 5704 mutex_exit(hash_lock); 5705 ARCSTAT_BUMP(arcstat_hits); 5706 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), 5707 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), 5708 data, metadata, hits); 5709 5710 if (done) 5711 done(NULL, zb, bp, buf, private); 5712 } else { 5713 uint64_t lsize = BP_GET_LSIZE(bp); 5714 uint64_t psize = BP_GET_PSIZE(bp); 5715 arc_callback_t *acb; 5716 vdev_t *vd = NULL; 5717 uint64_t addr = 0; 5718 boolean_t devw = B_FALSE; 5719 uint64_t size; 5720 abd_t *hdr_abd; 5721 int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; 5722 5723 if (hdr == NULL) { 5724 /* this block is not in the cache */ 5725 arc_buf_hdr_t *exists = NULL; 5726 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); 5727 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 5728 BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), type, 5729 encrypted_read); 5730 5731 if (!BP_IS_EMBEDDED(bp)) { 5732 hdr->b_dva = *BP_IDENTITY(bp); 5733 hdr->b_birth = BP_PHYSICAL_BIRTH(bp); 5734 exists = buf_hash_insert(hdr, &hash_lock); 5735 } 5736 if (exists != NULL) { 5737 /* somebody beat us to the hash insert */ 5738 mutex_exit(hash_lock); 5739 buf_discard_identity(hdr); 5740 arc_hdr_destroy(hdr); 5741 goto top; /* restart the IO request */ 5742 } 5743 } else { 5744 /* 5745 * This block is in the ghost cache or encrypted data 5746 * was requested and we didn't have it. If it was 5747 * L2-only (and thus didn't have an L1 hdr), 5748 * we realloc the header to add an L1 hdr. 5749 */ 5750 if (!HDR_HAS_L1HDR(hdr)) { 5751 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, 5752 hdr_full_cache); 5753 } 5754 5755 if (GHOST_STATE(hdr->b_l1hdr.b_state)) { 5756 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 5757 ASSERT(!HDR_HAS_RABD(hdr)); 5758 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 5759 ASSERT0(zfs_refcount_count( 5760 &hdr->b_l1hdr.b_refcnt)); 5761 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 5762 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 5763 } else if (HDR_IO_IN_PROGRESS(hdr)) { 5764 /* 5765 * If this header already had an IO in progress 5766 * and we are performing another IO to fetch 5767 * encrypted data we must wait until the first 5768 * IO completes so as not to confuse 5769 * arc_read_done(). This should be very rare 5770 * and so the performance impact shouldn't 5771 * matter. 5772 */ 5773 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 5774 mutex_exit(hash_lock); 5775 goto top; 5776 } 5777 5778 /* 5779 * This is a delicate dance that we play here. 5780 * This hdr might be in the ghost list so we access 5781 * it to move it out of the ghost list before we 5782 * initiate the read. If it's a prefetch then 5783 * it won't have a callback so we'll remove the 5784 * reference that arc_buf_alloc_impl() created. We 5785 * do this after we've called arc_access() to 5786 * avoid hitting an assert in remove_reference(). 5787 */ 5788 arc_adapt(arc_hdr_size(hdr), hdr->b_l1hdr.b_state); 5789 arc_access(hdr, hash_lock); 5790 arc_hdr_alloc_pabd(hdr, alloc_flags); 5791 } 5792 5793 if (encrypted_read) { 5794 ASSERT(HDR_HAS_RABD(hdr)); 5795 size = HDR_GET_PSIZE(hdr); 5796 hdr_abd = hdr->b_crypt_hdr.b_rabd; 5797 zio_flags |= ZIO_FLAG_RAW; 5798 } else { 5799 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 5800 size = arc_hdr_size(hdr); 5801 hdr_abd = hdr->b_l1hdr.b_pabd; 5802 5803 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 5804 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 5805 } 5806 5807 /* 5808 * For authenticated bp's, we do not ask the ZIO layer 5809 * to authenticate them since this will cause the entire 5810 * IO to fail if the key isn't loaded. Instead, we 5811 * defer authentication until arc_buf_fill(), which will 5812 * verify the data when the key is available. 5813 */ 5814 if (BP_IS_AUTHENTICATED(bp)) 5815 zio_flags |= ZIO_FLAG_RAW_ENCRYPT; 5816 } 5817 5818 if (*arc_flags & ARC_FLAG_PREFETCH && 5819 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) 5820 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 5821 if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) 5822 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 5823 5824 if (*arc_flags & ARC_FLAG_L2CACHE) 5825 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 5826 if (BP_IS_AUTHENTICATED(bp)) 5827 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 5828 if (BP_GET_LEVEL(bp) > 0) 5829 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); 5830 if (*arc_flags & ARC_FLAG_PREDICTIVE_PREFETCH) 5831 arc_hdr_set_flags(hdr, ARC_FLAG_PREDICTIVE_PREFETCH); 5832 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); 5833 5834 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); 5835 acb->acb_done = done; 5836 acb->acb_private = private; 5837 acb->acb_compressed = compressed_read; 5838 acb->acb_encrypted = encrypted_read; 5839 acb->acb_noauth = noauth_read; 5840 acb->acb_zb = *zb; 5841 5842 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 5843 hdr->b_l1hdr.b_acb = acb; 5844 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5845 5846 if (HDR_HAS_L2HDR(hdr) && 5847 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { 5848 devw = hdr->b_l2hdr.b_dev->l2ad_writing; 5849 addr = hdr->b_l2hdr.b_daddr; 5850 /* 5851 * Lock out L2ARC device removal. 5852 */ 5853 if (vdev_is_dead(vd) || 5854 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) 5855 vd = NULL; 5856 } 5857 5858 /* 5859 * We count both async reads and scrub IOs as asynchronous so 5860 * that both can be upgraded in the event of a cache hit while 5861 * the read IO is still in-flight. 5862 */ 5863 if (priority == ZIO_PRIORITY_ASYNC_READ || 5864 priority == ZIO_PRIORITY_SCRUB) 5865 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 5866 else 5867 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 5868 5869 /* 5870 * At this point, we have a level 1 cache miss. Try again in 5871 * L2ARC if possible. 5872 */ 5873 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); 5874 5875 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, blkptr_t *, bp, 5876 uint64_t, lsize, zbookmark_phys_t *, zb); 5877 ARCSTAT_BUMP(arcstat_misses); 5878 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), 5879 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), 5880 data, metadata, misses); 5881 5882 if (vd != NULL && l2arc_ndev != 0 && !(l2arc_norw && devw)) { 5883 /* 5884 * Read from the L2ARC if the following are true: 5885 * 1. The L2ARC vdev was previously cached. 5886 * 2. This buffer still has L2ARC metadata. 5887 * 3. This buffer isn't currently writing to the L2ARC. 5888 * 4. The L2ARC entry wasn't evicted, which may 5889 * also have invalidated the vdev. 5890 * 5. This isn't prefetch and l2arc_noprefetch is set. 5891 */ 5892 if (HDR_HAS_L2HDR(hdr) && 5893 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) && 5894 !(l2arc_noprefetch && HDR_PREFETCH(hdr))) { 5895 l2arc_read_callback_t *cb; 5896 abd_t *abd; 5897 uint64_t asize; 5898 5899 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); 5900 ARCSTAT_BUMP(arcstat_l2_hits); 5901 5902 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), 5903 KM_SLEEP); 5904 cb->l2rcb_hdr = hdr; 5905 cb->l2rcb_bp = *bp; 5906 cb->l2rcb_zb = *zb; 5907 cb->l2rcb_flags = zio_flags; 5908 5909 asize = vdev_psize_to_asize(vd, size); 5910 if (asize != size) { 5911 abd = abd_alloc_for_io(asize, 5912 HDR_ISTYPE_METADATA(hdr)); 5913 cb->l2rcb_abd = abd; 5914 } else { 5915 abd = hdr_abd; 5916 } 5917 5918 ASSERT(addr >= VDEV_LABEL_START_SIZE && 5919 addr + asize <= vd->vdev_psize - 5920 VDEV_LABEL_END_SIZE); 5921 5922 /* 5923 * l2arc read. The SCL_L2ARC lock will be 5924 * released by l2arc_read_done(). 5925 * Issue a null zio if the underlying buffer 5926 * was squashed to zero size by compression. 5927 */ 5928 ASSERT3U(arc_hdr_get_compress(hdr), !=, 5929 ZIO_COMPRESS_EMPTY); 5930 rzio = zio_read_phys(pio, vd, addr, 5931 asize, abd, 5932 ZIO_CHECKSUM_OFF, 5933 l2arc_read_done, cb, priority, 5934 zio_flags | ZIO_FLAG_DONT_CACHE | 5935 ZIO_FLAG_CANFAIL | 5936 ZIO_FLAG_DONT_PROPAGATE | 5937 ZIO_FLAG_DONT_RETRY, B_FALSE); 5938 acb->acb_zio_head = rzio; 5939 5940 if (hash_lock != NULL) 5941 mutex_exit(hash_lock); 5942 5943 DTRACE_PROBE2(l2arc__read, vdev_t *, vd, 5944 zio_t *, rzio); 5945 ARCSTAT_INCR(arcstat_l2_read_bytes, 5946 HDR_GET_PSIZE(hdr)); 5947 5948 if (*arc_flags & ARC_FLAG_NOWAIT) { 5949 zio_nowait(rzio); 5950 return (0); 5951 } 5952 5953 ASSERT(*arc_flags & ARC_FLAG_WAIT); 5954 if (zio_wait(rzio) == 0) 5955 return (0); 5956 5957 /* l2arc read error; goto zio_read() */ 5958 if (hash_lock != NULL) 5959 mutex_enter(hash_lock); 5960 } else { 5961 DTRACE_PROBE1(l2arc__miss, 5962 arc_buf_hdr_t *, hdr); 5963 ARCSTAT_BUMP(arcstat_l2_misses); 5964 if (HDR_L2_WRITING(hdr)) 5965 ARCSTAT_BUMP(arcstat_l2_rw_clash); 5966 spa_config_exit(spa, SCL_L2ARC, vd); 5967 } 5968 } else { 5969 if (vd != NULL) 5970 spa_config_exit(spa, SCL_L2ARC, vd); 5971 if (l2arc_ndev != 0) { 5972 DTRACE_PROBE1(l2arc__miss, 5973 arc_buf_hdr_t *, hdr); 5974 ARCSTAT_BUMP(arcstat_l2_misses); 5975 } 5976 } 5977 5978 rzio = zio_read(pio, spa, bp, hdr_abd, size, 5979 arc_read_done, hdr, priority, zio_flags, zb); 5980 acb->acb_zio_head = rzio; 5981 5982 if (hash_lock != NULL) 5983 mutex_exit(hash_lock); 5984 5985 if (*arc_flags & ARC_FLAG_WAIT) 5986 return (zio_wait(rzio)); 5987 5988 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 5989 zio_nowait(rzio); 5990 } 5991 return (rc); 5992 } 5993 5994 /* 5995 * Notify the arc that a block was freed, and thus will never be used again. 5996 */ 5997 void 5998 arc_freed(spa_t *spa, const blkptr_t *bp) 5999 { 6000 arc_buf_hdr_t *hdr; 6001 kmutex_t *hash_lock; 6002 uint64_t guid = spa_load_guid(spa); 6003 6004 ASSERT(!BP_IS_EMBEDDED(bp)); 6005 6006 hdr = buf_hash_find(guid, bp, &hash_lock); 6007 if (hdr == NULL) 6008 return; 6009 6010 /* 6011 * We might be trying to free a block that is still doing I/O 6012 * (i.e. prefetch) or has a reference (i.e. a dedup-ed, 6013 * dmu_sync-ed block). If this block is being prefetched, then it 6014 * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr 6015 * until the I/O completes. A block may also have a reference if it is 6016 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would 6017 * have written the new block to its final resting place on disk but 6018 * without the dedup flag set. This would have left the hdr in the MRU 6019 * state and discoverable. When the txg finally syncs it detects that 6020 * the block was overridden in open context and issues an override I/O. 6021 * Since this is a dedup block, the override I/O will determine if the 6022 * block is already in the DDT. If so, then it will replace the io_bp 6023 * with the bp from the DDT and allow the I/O to finish. When the I/O 6024 * reaches the done callback, dbuf_write_override_done, it will 6025 * check to see if the io_bp and io_bp_override are identical. 6026 * If they are not, then it indicates that the bp was replaced with 6027 * the bp in the DDT and the override bp is freed. This allows 6028 * us to arrive here with a reference on a block that is being 6029 * freed. So if we have an I/O in progress, or a reference to 6030 * this hdr, then we don't destroy the hdr. 6031 */ 6032 if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) && 6033 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) { 6034 arc_change_state(arc_anon, hdr, hash_lock); 6035 arc_hdr_destroy(hdr); 6036 mutex_exit(hash_lock); 6037 } else { 6038 mutex_exit(hash_lock); 6039 } 6040 6041 } 6042 6043 /* 6044 * Release this buffer from the cache, making it an anonymous buffer. This 6045 * must be done after a read and prior to modifying the buffer contents. 6046 * If the buffer has more than one reference, we must make 6047 * a new hdr for the buffer. 6048 */ 6049 void 6050 arc_release(arc_buf_t *buf, void *tag) 6051 { 6052 arc_buf_hdr_t *hdr = buf->b_hdr; 6053 6054 /* 6055 * It would be nice to assert that if its DMU metadata (level > 6056 * 0 || it's the dnode file), then it must be syncing context. 6057 * But we don't know that information at this level. 6058 */ 6059 6060 mutex_enter(&buf->b_evict_lock); 6061 6062 ASSERT(HDR_HAS_L1HDR(hdr)); 6063 6064 /* 6065 * We don't grab the hash lock prior to this check, because if 6066 * the buffer's header is in the arc_anon state, it won't be 6067 * linked into the hash table. 6068 */ 6069 if (hdr->b_l1hdr.b_state == arc_anon) { 6070 mutex_exit(&buf->b_evict_lock); 6071 /* 6072 * If we are called from dmu_convert_mdn_block_to_raw(), 6073 * a write might be in progress. This is OK because 6074 * the caller won't change the content of this buffer, 6075 * only the flags (via arc_convert_to_raw()). 6076 */ 6077 /* ASSERT(!HDR_IO_IN_PROGRESS(hdr)); */ 6078 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 6079 ASSERT(!HDR_HAS_L2HDR(hdr)); 6080 ASSERT(HDR_EMPTY(hdr)); 6081 6082 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6083 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); 6084 ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node)); 6085 6086 hdr->b_l1hdr.b_arc_access = 0; 6087 6088 /* 6089 * If the buf is being overridden then it may already 6090 * have a hdr that is not empty. 6091 */ 6092 buf_discard_identity(hdr); 6093 arc_buf_thaw(buf); 6094 6095 return; 6096 } 6097 6098 kmutex_t *hash_lock = HDR_LOCK(hdr); 6099 mutex_enter(hash_lock); 6100 6101 /* 6102 * This assignment is only valid as long as the hash_lock is 6103 * held, we must be careful not to reference state or the 6104 * b_state field after dropping the lock. 6105 */ 6106 arc_state_t *state = hdr->b_l1hdr.b_state; 6107 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 6108 ASSERT3P(state, !=, arc_anon); 6109 6110 /* this buffer is not on any list */ 6111 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); 6112 6113 if (HDR_HAS_L2HDR(hdr)) { 6114 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6115 6116 /* 6117 * We have to recheck this conditional again now that 6118 * we're holding the l2ad_mtx to prevent a race with 6119 * another thread which might be concurrently calling 6120 * l2arc_evict(). In that case, l2arc_evict() might have 6121 * destroyed the header's L2 portion as we were waiting 6122 * to acquire the l2ad_mtx. 6123 */ 6124 if (HDR_HAS_L2HDR(hdr)) 6125 arc_hdr_l2hdr_destroy(hdr); 6126 6127 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6128 } 6129 6130 /* 6131 * Do we have more than one buf? 6132 */ 6133 if (hdr->b_l1hdr.b_bufcnt > 1) { 6134 arc_buf_hdr_t *nhdr; 6135 uint64_t spa = hdr->b_spa; 6136 uint64_t psize = HDR_GET_PSIZE(hdr); 6137 uint64_t lsize = HDR_GET_LSIZE(hdr); 6138 boolean_t protected = HDR_PROTECTED(hdr); 6139 enum zio_compress compress = arc_hdr_get_compress(hdr); 6140 arc_buf_contents_t type = arc_buf_type(hdr); 6141 VERIFY3U(hdr->b_type, ==, type); 6142 6143 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); 6144 (void) remove_reference(hdr, hash_lock, tag); 6145 6146 if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) { 6147 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6148 ASSERT(ARC_BUF_LAST(buf)); 6149 } 6150 6151 /* 6152 * Pull the data off of this hdr and attach it to 6153 * a new anonymous hdr. Also find the last buffer 6154 * in the hdr's buffer list. 6155 */ 6156 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 6157 ASSERT3P(lastbuf, !=, NULL); 6158 6159 /* 6160 * If the current arc_buf_t and the hdr are sharing their data 6161 * buffer, then we must stop sharing that block. 6162 */ 6163 if (arc_buf_is_shared(buf)) { 6164 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6165 VERIFY(!arc_buf_is_shared(lastbuf)); 6166 6167 /* 6168 * First, sever the block sharing relationship between 6169 * buf and the arc_buf_hdr_t. 6170 */ 6171 arc_unshare_buf(hdr, buf); 6172 6173 /* 6174 * Now we need to recreate the hdr's b_pabd. Since we 6175 * have lastbuf handy, we try to share with it, but if 6176 * we can't then we allocate a new b_pabd and copy the 6177 * data from buf into it. 6178 */ 6179 if (arc_can_share(hdr, lastbuf)) { 6180 arc_share_buf(hdr, lastbuf); 6181 } else { 6182 arc_hdr_alloc_pabd(hdr, ARC_HDR_DO_ADAPT); 6183 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, 6184 buf->b_data, psize); 6185 } 6186 VERIFY3P(lastbuf->b_data, !=, NULL); 6187 } else if (HDR_SHARED_DATA(hdr)) { 6188 /* 6189 * Uncompressed shared buffers are always at the end 6190 * of the list. Compressed buffers don't have the 6191 * same requirements. This makes it hard to 6192 * simply assert that the lastbuf is shared so 6193 * we rely on the hdr's compression flags to determine 6194 * if we have a compressed, shared buffer. 6195 */ 6196 ASSERT(arc_buf_is_shared(lastbuf) || 6197 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 6198 ASSERT(!ARC_BUF_SHARED(buf)); 6199 } 6200 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 6201 ASSERT3P(state, !=, arc_l2c_only); 6202 6203 (void) zfs_refcount_remove_many(&state->arcs_size, 6204 arc_buf_size(buf), buf); 6205 6206 if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6207 ASSERT3P(state, !=, arc_l2c_only); 6208 (void) zfs_refcount_remove_many( 6209 &state->arcs_esize[type], 6210 arc_buf_size(buf), buf); 6211 } 6212 6213 hdr->b_l1hdr.b_bufcnt -= 1; 6214 if (ARC_BUF_ENCRYPTED(buf)) 6215 hdr->b_crypt_hdr.b_ebufcnt -= 1; 6216 6217 arc_cksum_verify(buf); 6218 arc_buf_unwatch(buf); 6219 6220 /* if this is the last uncompressed buf free the checksum */ 6221 if (!arc_hdr_has_uncompressed_buf(hdr)) 6222 arc_cksum_free(hdr); 6223 6224 mutex_exit(hash_lock); 6225 6226 /* 6227 * Allocate a new hdr. The new hdr will contain a b_pabd 6228 * buffer which will be freed in arc_write(). 6229 */ 6230 nhdr = arc_hdr_alloc(spa, psize, lsize, protected, 6231 compress, type, HDR_HAS_RABD(hdr)); 6232 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); 6233 ASSERT0(nhdr->b_l1hdr.b_bufcnt); 6234 ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); 6235 VERIFY3U(nhdr->b_type, ==, type); 6236 ASSERT(!HDR_SHARED_DATA(nhdr)); 6237 6238 nhdr->b_l1hdr.b_buf = buf; 6239 nhdr->b_l1hdr.b_bufcnt = 1; 6240 if (ARC_BUF_ENCRYPTED(buf)) 6241 nhdr->b_crypt_hdr.b_ebufcnt = 1; 6242 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); 6243 buf->b_hdr = nhdr; 6244 6245 mutex_exit(&buf->b_evict_lock); 6246 (void) zfs_refcount_add_many(&arc_anon->arcs_size, 6247 arc_buf_size(buf), buf); 6248 } else { 6249 mutex_exit(&buf->b_evict_lock); 6250 ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); 6251 /* protected by hash lock, or hdr is on arc_anon */ 6252 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6253 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6254 arc_change_state(arc_anon, hdr, hash_lock); 6255 hdr->b_l1hdr.b_arc_access = 0; 6256 6257 mutex_exit(hash_lock); 6258 buf_discard_identity(hdr); 6259 arc_buf_thaw(buf); 6260 } 6261 } 6262 6263 int 6264 arc_released(arc_buf_t *buf) 6265 { 6266 int released; 6267 6268 mutex_enter(&buf->b_evict_lock); 6269 released = (buf->b_data != NULL && 6270 buf->b_hdr->b_l1hdr.b_state == arc_anon); 6271 mutex_exit(&buf->b_evict_lock); 6272 return (released); 6273 } 6274 6275 #ifdef ZFS_DEBUG 6276 int 6277 arc_referenced(arc_buf_t *buf) 6278 { 6279 int referenced; 6280 6281 mutex_enter(&buf->b_evict_lock); 6282 referenced = (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); 6283 mutex_exit(&buf->b_evict_lock); 6284 return (referenced); 6285 } 6286 #endif 6287 6288 static void 6289 arc_write_ready(zio_t *zio) 6290 { 6291 arc_write_callback_t *callback = zio->io_private; 6292 arc_buf_t *buf = callback->awcb_buf; 6293 arc_buf_hdr_t *hdr = buf->b_hdr; 6294 blkptr_t *bp = zio->io_bp; 6295 uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp); 6296 6297 ASSERT(HDR_HAS_L1HDR(hdr)); 6298 ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); 6299 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 6300 6301 /* 6302 * If we're reexecuting this zio because the pool suspended, then 6303 * cleanup any state that was previously set the first time the 6304 * callback was invoked. 6305 */ 6306 if (zio->io_flags & ZIO_FLAG_REEXECUTED) { 6307 arc_cksum_free(hdr); 6308 arc_buf_unwatch(buf); 6309 if (hdr->b_l1hdr.b_pabd != NULL) { 6310 if (arc_buf_is_shared(buf)) { 6311 arc_unshare_buf(hdr, buf); 6312 } else { 6313 arc_hdr_free_pabd(hdr, B_FALSE); 6314 } 6315 } 6316 6317 if (HDR_HAS_RABD(hdr)) 6318 arc_hdr_free_pabd(hdr, B_TRUE); 6319 } 6320 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6321 ASSERT(!HDR_HAS_RABD(hdr)); 6322 ASSERT(!HDR_SHARED_DATA(hdr)); 6323 ASSERT(!arc_buf_is_shared(buf)); 6324 6325 callback->awcb_ready(zio, buf, callback->awcb_private); 6326 6327 if (HDR_IO_IN_PROGRESS(hdr)) 6328 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); 6329 6330 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6331 6332 if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr)) 6333 hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp)); 6334 6335 if (BP_IS_PROTECTED(bp)) { 6336 /* ZIL blocks are written through zio_rewrite */ 6337 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 6338 ASSERT(HDR_PROTECTED(hdr)); 6339 6340 if (BP_SHOULD_BYTESWAP(bp)) { 6341 if (BP_GET_LEVEL(bp) > 0) { 6342 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 6343 } else { 6344 hdr->b_l1hdr.b_byteswap = 6345 DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); 6346 } 6347 } else { 6348 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 6349 } 6350 6351 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 6352 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 6353 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 6354 hdr->b_crypt_hdr.b_iv); 6355 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 6356 } 6357 6358 /* 6359 * If this block was written for raw encryption but the zio layer 6360 * ended up only authenticating it, adjust the buffer flags now. 6361 */ 6362 if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { 6363 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6364 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6365 if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) 6366 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6367 } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { 6368 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6369 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6370 } 6371 6372 /* this must be done after the buffer flags are adjusted */ 6373 arc_cksum_compute(buf); 6374 6375 enum zio_compress compress; 6376 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 6377 compress = ZIO_COMPRESS_OFF; 6378 } else { 6379 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 6380 compress = BP_GET_COMPRESS(bp); 6381 } 6382 HDR_SET_PSIZE(hdr, psize); 6383 arc_hdr_set_compress(hdr, compress); 6384 6385 if (zio->io_error != 0 || psize == 0) 6386 goto out; 6387 6388 /* 6389 * Fill the hdr with data. If the buffer is encrypted we have no choice 6390 * but to copy the data into b_rabd. If the hdr is compressed, the data 6391 * we want is available from the zio, otherwise we can take it from 6392 * the buf. 6393 * 6394 * We might be able to share the buf's data with the hdr here. However, 6395 * doing so would cause the ARC to be full of linear ABDs if we write a 6396 * lot of shareable data. As a compromise, we check whether scattered 6397 * ABDs are allowed, and assume that if they are then the user wants 6398 * the ARC to be primarily filled with them regardless of the data being 6399 * written. Therefore, if they're allowed then we allocate one and copy 6400 * the data into it; otherwise, we share the data directly if we can. 6401 */ 6402 if (ARC_BUF_ENCRYPTED(buf)) { 6403 ASSERT3U(psize, >, 0); 6404 ASSERT(ARC_BUF_COMPRESSED(buf)); 6405 arc_hdr_alloc_pabd(hdr, ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); 6406 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6407 } else if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) { 6408 /* 6409 * Ideally, we would always copy the io_abd into b_pabd, but the 6410 * user may have disabled compressed ARC, thus we must check the 6411 * hdr's compression setting rather than the io_bp's. 6412 */ 6413 if (BP_IS_ENCRYPTED(bp)) { 6414 ASSERT3U(psize, >, 0); 6415 arc_hdr_alloc_pabd(hdr, 6416 ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); 6417 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6418 } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 6419 !ARC_BUF_COMPRESSED(buf)) { 6420 ASSERT3U(psize, >, 0); 6421 arc_hdr_alloc_pabd(hdr, ARC_HDR_DO_ADAPT); 6422 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); 6423 } else { 6424 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); 6425 arc_hdr_alloc_pabd(hdr, ARC_HDR_DO_ADAPT); 6426 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, 6427 arc_buf_size(buf)); 6428 } 6429 } else { 6430 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); 6431 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); 6432 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6433 arc_share_buf(hdr, buf); 6434 } 6435 6436 out: 6437 arc_hdr_verify(hdr, bp); 6438 } 6439 6440 static void 6441 arc_write_children_ready(zio_t *zio) 6442 { 6443 arc_write_callback_t *callback = zio->io_private; 6444 arc_buf_t *buf = callback->awcb_buf; 6445 6446 callback->awcb_children_ready(zio, buf, callback->awcb_private); 6447 } 6448 6449 /* 6450 * The SPA calls this callback for each physical write that happens on behalf 6451 * of a logical write. See the comment in dbuf_write_physdone() for details. 6452 */ 6453 static void 6454 arc_write_physdone(zio_t *zio) 6455 { 6456 arc_write_callback_t *cb = zio->io_private; 6457 if (cb->awcb_physdone != NULL) 6458 cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private); 6459 } 6460 6461 static void 6462 arc_write_done(zio_t *zio) 6463 { 6464 arc_write_callback_t *callback = zio->io_private; 6465 arc_buf_t *buf = callback->awcb_buf; 6466 arc_buf_hdr_t *hdr = buf->b_hdr; 6467 6468 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6469 6470 if (zio->io_error == 0) { 6471 arc_hdr_verify(hdr, zio->io_bp); 6472 6473 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { 6474 buf_discard_identity(hdr); 6475 } else { 6476 hdr->b_dva = *BP_IDENTITY(zio->io_bp); 6477 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); 6478 } 6479 } else { 6480 ASSERT(HDR_EMPTY(hdr)); 6481 } 6482 6483 /* 6484 * If the block to be written was all-zero or compressed enough to be 6485 * embedded in the BP, no write was performed so there will be no 6486 * dva/birth/checksum. The buffer must therefore remain anonymous 6487 * (and uncached). 6488 */ 6489 if (!HDR_EMPTY(hdr)) { 6490 arc_buf_hdr_t *exists; 6491 kmutex_t *hash_lock; 6492 6493 ASSERT3U(zio->io_error, ==, 0); 6494 6495 arc_cksum_verify(buf); 6496 6497 exists = buf_hash_insert(hdr, &hash_lock); 6498 if (exists != NULL) { 6499 /* 6500 * This can only happen if we overwrite for 6501 * sync-to-convergence, because we remove 6502 * buffers from the hash table when we arc_free(). 6503 */ 6504 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { 6505 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6506 panic("bad overwrite, hdr=%p exists=%p", 6507 (void *)hdr, (void *)exists); 6508 ASSERT(zfs_refcount_is_zero( 6509 &exists->b_l1hdr.b_refcnt)); 6510 arc_change_state(arc_anon, exists, hash_lock); 6511 arc_hdr_destroy(exists); 6512 mutex_exit(hash_lock); 6513 exists = buf_hash_insert(hdr, &hash_lock); 6514 ASSERT3P(exists, ==, NULL); 6515 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { 6516 /* nopwrite */ 6517 ASSERT(zio->io_prop.zp_nopwrite); 6518 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6519 panic("bad nopwrite, hdr=%p exists=%p", 6520 (void *)hdr, (void *)exists); 6521 } else { 6522 /* Dedup */ 6523 ASSERT(hdr->b_l1hdr.b_bufcnt == 1); 6524 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 6525 ASSERT(BP_GET_DEDUP(zio->io_bp)); 6526 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); 6527 } 6528 } 6529 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6530 /* if it's not anon, we are doing a scrub */ 6531 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) 6532 arc_access(hdr, hash_lock); 6533 mutex_exit(hash_lock); 6534 } else { 6535 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6536 } 6537 6538 ASSERT(!zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 6539 callback->awcb_done(zio, buf, callback->awcb_private); 6540 6541 abd_put(zio->io_abd); 6542 kmem_free(callback, sizeof (arc_write_callback_t)); 6543 } 6544 6545 zio_t * 6546 arc_write(zio_t *pio, spa_t *spa, uint64_t txg, blkptr_t *bp, arc_buf_t *buf, 6547 boolean_t l2arc, const zio_prop_t *zp, arc_write_done_func_t *ready, 6548 arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone, 6549 arc_write_done_func_t *done, void *private, zio_priority_t priority, 6550 int zio_flags, const zbookmark_phys_t *zb) 6551 { 6552 arc_buf_hdr_t *hdr = buf->b_hdr; 6553 arc_write_callback_t *callback; 6554 zio_t *zio; 6555 zio_prop_t localprop = *zp; 6556 6557 ASSERT3P(ready, !=, NULL); 6558 ASSERT3P(done, !=, NULL); 6559 ASSERT(!HDR_IO_ERROR(hdr)); 6560 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6561 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6562 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 6563 if (l2arc) 6564 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6565 6566 if (ARC_BUF_ENCRYPTED(buf)) { 6567 ASSERT(ARC_BUF_COMPRESSED(buf)); 6568 localprop.zp_encrypt = B_TRUE; 6569 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6570 /* CONSTCOND */ 6571 localprop.zp_byteorder = 6572 (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 6573 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 6574 bcopy(hdr->b_crypt_hdr.b_salt, localprop.zp_salt, 6575 ZIO_DATA_SALT_LEN); 6576 bcopy(hdr->b_crypt_hdr.b_iv, localprop.zp_iv, 6577 ZIO_DATA_IV_LEN); 6578 bcopy(hdr->b_crypt_hdr.b_mac, localprop.zp_mac, 6579 ZIO_DATA_MAC_LEN); 6580 if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) { 6581 localprop.zp_nopwrite = B_FALSE; 6582 localprop.zp_copies = 6583 MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1); 6584 } 6585 zio_flags |= ZIO_FLAG_RAW; 6586 } else if (ARC_BUF_COMPRESSED(buf)) { 6587 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf)); 6588 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6589 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 6590 } 6591 6592 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP); 6593 callback->awcb_ready = ready; 6594 callback->awcb_children_ready = children_ready; 6595 callback->awcb_physdone = physdone; 6596 callback->awcb_done = done; 6597 callback->awcb_private = private; 6598 callback->awcb_buf = buf; 6599 6600 /* 6601 * The hdr's b_pabd is now stale, free it now. A new data block 6602 * will be allocated when the zio pipeline calls arc_write_ready(). 6603 */ 6604 if (hdr->b_l1hdr.b_pabd != NULL) { 6605 /* 6606 * If the buf is currently sharing the data block with 6607 * the hdr then we need to break that relationship here. 6608 * The hdr will remain with a NULL data pointer and the 6609 * buf will take sole ownership of the block. 6610 */ 6611 if (arc_buf_is_shared(buf)) { 6612 arc_unshare_buf(hdr, buf); 6613 } else { 6614 arc_hdr_free_pabd(hdr, B_FALSE); 6615 } 6616 VERIFY3P(buf->b_data, !=, NULL); 6617 } 6618 6619 if (HDR_HAS_RABD(hdr)) 6620 arc_hdr_free_pabd(hdr, B_TRUE); 6621 6622 if (!(zio_flags & ZIO_FLAG_RAW)) 6623 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF); 6624 6625 ASSERT(!arc_buf_is_shared(buf)); 6626 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6627 6628 zio = zio_write(pio, spa, txg, bp, 6629 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)), 6630 HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready, 6631 (children_ready != NULL) ? arc_write_children_ready : NULL, 6632 arc_write_physdone, arc_write_done, callback, 6633 priority, zio_flags, zb); 6634 6635 return (zio); 6636 } 6637 6638 static int 6639 arc_memory_throttle(spa_t *spa, uint64_t reserve, uint64_t txg) 6640 { 6641 #ifdef _KERNEL 6642 uint64_t available_memory = ptob(freemem); 6643 6644 #if defined(__i386) 6645 available_memory = 6646 MIN(available_memory, vmem_size(heap_arena, VMEM_FREE)); 6647 #endif 6648 6649 if (freemem > physmem * arc_lotsfree_percent / 100) 6650 return (0); 6651 6652 if (txg > spa->spa_lowmem_last_txg) { 6653 spa->spa_lowmem_last_txg = txg; 6654 spa->spa_lowmem_page_load = 0; 6655 } 6656 /* 6657 * If we are in pageout, we know that memory is already tight, 6658 * the arc is already going to be evicting, so we just want to 6659 * continue to let page writes occur as quickly as possible. 6660 */ 6661 if (curproc == proc_pageout) { 6662 if (spa->spa_lowmem_page_load > 6663 MAX(ptob(minfree), available_memory) / 4) 6664 return (SET_ERROR(ERESTART)); 6665 /* Note: reserve is inflated, so we deflate */ 6666 atomic_add_64(&spa->spa_lowmem_page_load, reserve / 8); 6667 return (0); 6668 } else if (spa->spa_lowmem_page_load > 0 && arc_reclaim_needed()) { 6669 /* memory is low, delay before restarting */ 6670 ARCSTAT_INCR(arcstat_memory_throttle_count, 1); 6671 return (SET_ERROR(EAGAIN)); 6672 } 6673 spa->spa_lowmem_page_load = 0; 6674 #endif /* _KERNEL */ 6675 return (0); 6676 } 6677 6678 void 6679 arc_tempreserve_clear(uint64_t reserve) 6680 { 6681 atomic_add_64(&arc_tempreserve, -reserve); 6682 ASSERT((int64_t)arc_tempreserve >= 0); 6683 } 6684 6685 int 6686 arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg) 6687 { 6688 int error; 6689 uint64_t anon_size; 6690 6691 if (reserve > arc_c/4 && !arc_no_grow) 6692 arc_c = MIN(arc_c_max, reserve * 4); 6693 if (reserve > arc_c) 6694 return (SET_ERROR(ENOMEM)); 6695 6696 /* 6697 * Don't count loaned bufs as in flight dirty data to prevent long 6698 * network delays from blocking transactions that are ready to be 6699 * assigned to a txg. 6700 */ 6701 6702 /* assert that it has not wrapped around */ 6703 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 6704 6705 anon_size = MAX((int64_t)(zfs_refcount_count(&arc_anon->arcs_size) - 6706 arc_loaned_bytes), 0); 6707 6708 /* 6709 * Writes will, almost always, require additional memory allocations 6710 * in order to compress/encrypt/etc the data. We therefore need to 6711 * make sure that there is sufficient available memory for this. 6712 */ 6713 error = arc_memory_throttle(spa, reserve, txg); 6714 if (error != 0) 6715 return (error); 6716 6717 /* 6718 * Throttle writes when the amount of dirty data in the cache 6719 * gets too large. We try to keep the cache less than half full 6720 * of dirty blocks so that our sync times don't grow too large. 6721 * 6722 * In the case of one pool being built on another pool, we want 6723 * to make sure we don't end up throttling the lower (backing) 6724 * pool when the upper pool is the majority contributor to dirty 6725 * data. To insure we make forward progress during throttling, we 6726 * also check the current pool's net dirty data and only throttle 6727 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty 6728 * data in the cache. 6729 * 6730 * Note: if two requests come in concurrently, we might let them 6731 * both succeed, when one of them should fail. Not a huge deal. 6732 */ 6733 uint64_t total_dirty = reserve + arc_tempreserve + anon_size; 6734 uint64_t spa_dirty_anon = spa_dirty_data(spa); 6735 6736 if (total_dirty > arc_c * zfs_arc_dirty_limit_percent / 100 && 6737 anon_size > arc_c * zfs_arc_anon_limit_percent / 100 && 6738 spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) { 6739 uint64_t meta_esize = 6740 zfs_refcount_count( 6741 &arc_anon->arcs_esize[ARC_BUFC_METADATA]); 6742 uint64_t data_esize = 6743 zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 6744 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK " 6745 "anon_data=%lluK tempreserve=%lluK arc_c=%lluK\n", 6746 arc_tempreserve >> 10, meta_esize >> 10, 6747 data_esize >> 10, reserve >> 10, arc_c >> 10); 6748 return (SET_ERROR(ERESTART)); 6749 } 6750 atomic_add_64(&arc_tempreserve, reserve); 6751 return (0); 6752 } 6753 6754 static void 6755 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size, 6756 kstat_named_t *evict_data, kstat_named_t *evict_metadata) 6757 { 6758 size->value.ui64 = zfs_refcount_count(&state->arcs_size); 6759 evict_data->value.ui64 = 6760 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]); 6761 evict_metadata->value.ui64 = 6762 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]); 6763 } 6764 6765 static int 6766 arc_kstat_update(kstat_t *ksp, int rw) 6767 { 6768 arc_stats_t *as = ksp->ks_data; 6769 6770 if (rw == KSTAT_WRITE) { 6771 return (EACCES); 6772 } else { 6773 arc_kstat_update_state(arc_anon, 6774 &as->arcstat_anon_size, 6775 &as->arcstat_anon_evictable_data, 6776 &as->arcstat_anon_evictable_metadata); 6777 arc_kstat_update_state(arc_mru, 6778 &as->arcstat_mru_size, 6779 &as->arcstat_mru_evictable_data, 6780 &as->arcstat_mru_evictable_metadata); 6781 arc_kstat_update_state(arc_mru_ghost, 6782 &as->arcstat_mru_ghost_size, 6783 &as->arcstat_mru_ghost_evictable_data, 6784 &as->arcstat_mru_ghost_evictable_metadata); 6785 arc_kstat_update_state(arc_mfu, 6786 &as->arcstat_mfu_size, 6787 &as->arcstat_mfu_evictable_data, 6788 &as->arcstat_mfu_evictable_metadata); 6789 arc_kstat_update_state(arc_mfu_ghost, 6790 &as->arcstat_mfu_ghost_size, 6791 &as->arcstat_mfu_ghost_evictable_data, 6792 &as->arcstat_mfu_ghost_evictable_metadata); 6793 6794 ARCSTAT(arcstat_size) = aggsum_value(&arc_size); 6795 ARCSTAT(arcstat_meta_used) = aggsum_value(&arc_meta_used); 6796 ARCSTAT(arcstat_data_size) = aggsum_value(&astat_data_size); 6797 ARCSTAT(arcstat_metadata_size) = 6798 aggsum_value(&astat_metadata_size); 6799 ARCSTAT(arcstat_hdr_size) = aggsum_value(&astat_hdr_size); 6800 ARCSTAT(arcstat_other_size) = aggsum_value(&astat_other_size); 6801 ARCSTAT(arcstat_l2_hdr_size) = aggsum_value(&astat_l2_hdr_size); 6802 } 6803 6804 return (0); 6805 } 6806 6807 /* 6808 * This function *must* return indices evenly distributed between all 6809 * sublists of the multilist. This is needed due to how the ARC eviction 6810 * code is laid out; arc_evict_state() assumes ARC buffers are evenly 6811 * distributed between all sublists and uses this assumption when 6812 * deciding which sublist to evict from and how much to evict from it. 6813 */ 6814 unsigned int 6815 arc_state_multilist_index_func(multilist_t *ml, void *obj) 6816 { 6817 arc_buf_hdr_t *hdr = obj; 6818 6819 /* 6820 * We rely on b_dva to generate evenly distributed index 6821 * numbers using buf_hash below. So, as an added precaution, 6822 * let's make sure we never add empty buffers to the arc lists. 6823 */ 6824 ASSERT(!HDR_EMPTY(hdr)); 6825 6826 /* 6827 * The assumption here, is the hash value for a given 6828 * arc_buf_hdr_t will remain constant throughout its lifetime 6829 * (i.e. its b_spa, b_dva, and b_birth fields don't change). 6830 * Thus, we don't need to store the header's sublist index 6831 * on insertion, as this index can be recalculated on removal. 6832 * 6833 * Also, the low order bits of the hash value are thought to be 6834 * distributed evenly. Otherwise, in the case that the multilist 6835 * has a power of two number of sublists, each sublists' usage 6836 * would not be evenly distributed. 6837 */ 6838 return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) % 6839 multilist_get_num_sublists(ml)); 6840 } 6841 6842 static void 6843 arc_state_init(void) 6844 { 6845 arc_anon = &ARC_anon; 6846 arc_mru = &ARC_mru; 6847 arc_mru_ghost = &ARC_mru_ghost; 6848 arc_mfu = &ARC_mfu; 6849 arc_mfu_ghost = &ARC_mfu_ghost; 6850 arc_l2c_only = &ARC_l2c_only; 6851 6852 arc_mru->arcs_list[ARC_BUFC_METADATA] = 6853 multilist_create(sizeof (arc_buf_hdr_t), 6854 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6855 arc_state_multilist_index_func); 6856 arc_mru->arcs_list[ARC_BUFC_DATA] = 6857 multilist_create(sizeof (arc_buf_hdr_t), 6858 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6859 arc_state_multilist_index_func); 6860 arc_mru_ghost->arcs_list[ARC_BUFC_METADATA] = 6861 multilist_create(sizeof (arc_buf_hdr_t), 6862 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6863 arc_state_multilist_index_func); 6864 arc_mru_ghost->arcs_list[ARC_BUFC_DATA] = 6865 multilist_create(sizeof (arc_buf_hdr_t), 6866 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6867 arc_state_multilist_index_func); 6868 arc_mfu->arcs_list[ARC_BUFC_METADATA] = 6869 multilist_create(sizeof (arc_buf_hdr_t), 6870 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6871 arc_state_multilist_index_func); 6872 arc_mfu->arcs_list[ARC_BUFC_DATA] = 6873 multilist_create(sizeof (arc_buf_hdr_t), 6874 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6875 arc_state_multilist_index_func); 6876 arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA] = 6877 multilist_create(sizeof (arc_buf_hdr_t), 6878 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6879 arc_state_multilist_index_func); 6880 arc_mfu_ghost->arcs_list[ARC_BUFC_DATA] = 6881 multilist_create(sizeof (arc_buf_hdr_t), 6882 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6883 arc_state_multilist_index_func); 6884 arc_l2c_only->arcs_list[ARC_BUFC_METADATA] = 6885 multilist_create(sizeof (arc_buf_hdr_t), 6886 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6887 arc_state_multilist_index_func); 6888 arc_l2c_only->arcs_list[ARC_BUFC_DATA] = 6889 multilist_create(sizeof (arc_buf_hdr_t), 6890 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 6891 arc_state_multilist_index_func); 6892 6893 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 6894 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 6895 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 6896 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 6897 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 6898 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 6899 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 6900 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 6901 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 6902 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 6903 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 6904 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 6905 6906 zfs_refcount_create(&arc_anon->arcs_size); 6907 zfs_refcount_create(&arc_mru->arcs_size); 6908 zfs_refcount_create(&arc_mru_ghost->arcs_size); 6909 zfs_refcount_create(&arc_mfu->arcs_size); 6910 zfs_refcount_create(&arc_mfu_ghost->arcs_size); 6911 zfs_refcount_create(&arc_l2c_only->arcs_size); 6912 6913 aggsum_init(&arc_meta_used, 0); 6914 aggsum_init(&arc_size, 0); 6915 aggsum_init(&astat_data_size, 0); 6916 aggsum_init(&astat_metadata_size, 0); 6917 aggsum_init(&astat_hdr_size, 0); 6918 aggsum_init(&astat_other_size, 0); 6919 aggsum_init(&astat_l2_hdr_size, 0); 6920 } 6921 6922 static void 6923 arc_state_fini(void) 6924 { 6925 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 6926 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 6927 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 6928 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 6929 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 6930 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 6931 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 6932 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 6933 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 6934 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 6935 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 6936 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 6937 6938 zfs_refcount_destroy(&arc_anon->arcs_size); 6939 zfs_refcount_destroy(&arc_mru->arcs_size); 6940 zfs_refcount_destroy(&arc_mru_ghost->arcs_size); 6941 zfs_refcount_destroy(&arc_mfu->arcs_size); 6942 zfs_refcount_destroy(&arc_mfu_ghost->arcs_size); 6943 zfs_refcount_destroy(&arc_l2c_only->arcs_size); 6944 6945 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_METADATA]); 6946 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); 6947 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_METADATA]); 6948 multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); 6949 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_DATA]); 6950 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); 6951 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_DATA]); 6952 multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); 6953 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); 6954 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_DATA]); 6955 6956 aggsum_fini(&arc_meta_used); 6957 aggsum_fini(&arc_size); 6958 aggsum_fini(&astat_data_size); 6959 aggsum_fini(&astat_metadata_size); 6960 aggsum_fini(&astat_hdr_size); 6961 aggsum_fini(&astat_other_size); 6962 aggsum_fini(&astat_l2_hdr_size); 6963 6964 } 6965 6966 uint64_t 6967 arc_max_bytes(void) 6968 { 6969 return (arc_c_max); 6970 } 6971 6972 void 6973 arc_init(void) 6974 { 6975 /* 6976 * allmem is "all memory that we could possibly use". 6977 */ 6978 #ifdef _KERNEL 6979 uint64_t allmem = ptob(physmem - swapfs_minfree); 6980 #else 6981 uint64_t allmem = (physmem * PAGESIZE) / 2; 6982 #endif 6983 mutex_init(&arc_adjust_lock, NULL, MUTEX_DEFAULT, NULL); 6984 cv_init(&arc_adjust_waiters_cv, NULL, CV_DEFAULT, NULL); 6985 6986 /* set min cache to 1/32 of all memory, or 64MB, whichever is more */ 6987 arc_c_min = MAX(allmem / 32, 64 << 20); 6988 /* set max to 3/4 of all memory, or all but 1GB, whichever is more */ 6989 if (allmem >= 1 << 30) 6990 arc_c_max = allmem - (1 << 30); 6991 else 6992 arc_c_max = arc_c_min; 6993 arc_c_max = MAX(allmem * 3 / 4, arc_c_max); 6994 6995 /* 6996 * In userland, there's only the memory pressure that we artificially 6997 * create (see arc_available_memory()). Don't let arc_c get too 6998 * small, because it can cause transactions to be larger than 6999 * arc_c, causing arc_tempreserve_space() to fail. 7000 */ 7001 #ifndef _KERNEL 7002 arc_c_min = arc_c_max / 2; 7003 #endif 7004 7005 /* 7006 * Allow the tunables to override our calculations if they are 7007 * reasonable (ie. over 64MB) 7008 */ 7009 if (zfs_arc_max > 64 << 20 && zfs_arc_max < allmem) { 7010 arc_c_max = zfs_arc_max; 7011 arc_c_min = MIN(arc_c_min, arc_c_max); 7012 } 7013 if (zfs_arc_min > 64 << 20 && zfs_arc_min <= arc_c_max) 7014 arc_c_min = zfs_arc_min; 7015 7016 arc_c = arc_c_max; 7017 arc_p = (arc_c >> 1); 7018 7019 /* limit meta-data to 1/4 of the arc capacity */ 7020 arc_meta_limit = arc_c_max / 4; 7021 7022 #ifdef _KERNEL 7023 /* 7024 * Metadata is stored in the kernel's heap. Don't let us 7025 * use more than half the heap for the ARC. 7026 */ 7027 arc_meta_limit = MIN(arc_meta_limit, 7028 vmem_size(heap_arena, VMEM_ALLOC | VMEM_FREE) / 2); 7029 #endif 7030 7031 /* Allow the tunable to override if it is reasonable */ 7032 if (zfs_arc_meta_limit > 0 && zfs_arc_meta_limit <= arc_c_max) 7033 arc_meta_limit = zfs_arc_meta_limit; 7034 7035 if (arc_c_min < arc_meta_limit / 2 && zfs_arc_min == 0) 7036 arc_c_min = arc_meta_limit / 2; 7037 7038 if (zfs_arc_meta_min > 0) { 7039 arc_meta_min = zfs_arc_meta_min; 7040 } else { 7041 arc_meta_min = arc_c_min / 2; 7042 } 7043 7044 if (zfs_arc_grow_retry > 0) 7045 arc_grow_retry = zfs_arc_grow_retry; 7046 7047 if (zfs_arc_shrink_shift > 0) 7048 arc_shrink_shift = zfs_arc_shrink_shift; 7049 7050 /* 7051 * Ensure that arc_no_grow_shift is less than arc_shrink_shift. 7052 */ 7053 if (arc_no_grow_shift >= arc_shrink_shift) 7054 arc_no_grow_shift = arc_shrink_shift - 1; 7055 7056 if (zfs_arc_p_min_shift > 0) 7057 arc_p_min_shift = zfs_arc_p_min_shift; 7058 7059 /* if kmem_flags are set, lets try to use less memory */ 7060 if (kmem_debugging()) 7061 arc_c = arc_c / 2; 7062 if (arc_c < arc_c_min) 7063 arc_c = arc_c_min; 7064 7065 arc_state_init(); 7066 7067 /* 7068 * The arc must be "uninitialized", so that hdr_recl() (which is 7069 * registered by buf_init()) will not access arc_reap_zthr before 7070 * it is created. 7071 */ 7072 ASSERT(!arc_initialized); 7073 buf_init(); 7074 7075 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED, 7076 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL); 7077 7078 if (arc_ksp != NULL) { 7079 arc_ksp->ks_data = &arc_stats; 7080 arc_ksp->ks_update = arc_kstat_update; 7081 kstat_install(arc_ksp); 7082 } 7083 7084 arc_adjust_zthr = zthr_create(arc_adjust_cb_check, 7085 arc_adjust_cb, NULL); 7086 arc_reap_zthr = zthr_create_timer(arc_reap_cb_check, 7087 arc_reap_cb, NULL, SEC2NSEC(1)); 7088 7089 arc_initialized = B_TRUE; 7090 arc_warm = B_FALSE; 7091 7092 /* 7093 * Calculate maximum amount of dirty data per pool. 7094 * 7095 * If it has been set by /etc/system, take that. 7096 * Otherwise, use a percentage of physical memory defined by 7097 * zfs_dirty_data_max_percent (default 10%) with a cap at 7098 * zfs_dirty_data_max_max (default 4GB). 7099 */ 7100 if (zfs_dirty_data_max == 0) { 7101 zfs_dirty_data_max = physmem * PAGESIZE * 7102 zfs_dirty_data_max_percent / 100; 7103 zfs_dirty_data_max = MIN(zfs_dirty_data_max, 7104 zfs_dirty_data_max_max); 7105 } 7106 } 7107 7108 void 7109 arc_fini(void) 7110 { 7111 /* Use B_TRUE to ensure *all* buffers are evicted */ 7112 arc_flush(NULL, B_TRUE); 7113 7114 arc_initialized = B_FALSE; 7115 7116 if (arc_ksp != NULL) { 7117 kstat_delete(arc_ksp); 7118 arc_ksp = NULL; 7119 } 7120 7121 (void) zthr_cancel(arc_adjust_zthr); 7122 zthr_destroy(arc_adjust_zthr); 7123 7124 (void) zthr_cancel(arc_reap_zthr); 7125 zthr_destroy(arc_reap_zthr); 7126 7127 mutex_destroy(&arc_adjust_lock); 7128 cv_destroy(&arc_adjust_waiters_cv); 7129 7130 /* 7131 * buf_fini() must proceed arc_state_fini() because buf_fin() may 7132 * trigger the release of kmem magazines, which can callback to 7133 * arc_space_return() which accesses aggsums freed in act_state_fini(). 7134 */ 7135 buf_fini(); 7136 arc_state_fini(); 7137 7138 ASSERT0(arc_loaned_bytes); 7139 } 7140 7141 /* 7142 * Level 2 ARC 7143 * 7144 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk. 7145 * It uses dedicated storage devices to hold cached data, which are populated 7146 * using large infrequent writes. The main role of this cache is to boost 7147 * the performance of random read workloads. The intended L2ARC devices 7148 * include short-stroked disks, solid state disks, and other media with 7149 * substantially faster read latency than disk. 7150 * 7151 * +-----------------------+ 7152 * | ARC | 7153 * +-----------------------+ 7154 * | ^ ^ 7155 * | | | 7156 * l2arc_feed_thread() arc_read() 7157 * | | | 7158 * | l2arc read | 7159 * V | | 7160 * +---------------+ | 7161 * | L2ARC | | 7162 * +---------------+ | 7163 * | ^ | 7164 * l2arc_write() | | 7165 * | | | 7166 * V | | 7167 * +-------+ +-------+ 7168 * | vdev | | vdev | 7169 * | cache | | cache | 7170 * +-------+ +-------+ 7171 * +=========+ .-----. 7172 * : L2ARC : |-_____-| 7173 * : devices : | Disks | 7174 * +=========+ `-_____-' 7175 * 7176 * Read requests are satisfied from the following sources, in order: 7177 * 7178 * 1) ARC 7179 * 2) vdev cache of L2ARC devices 7180 * 3) L2ARC devices 7181 * 4) vdev cache of disks 7182 * 5) disks 7183 * 7184 * Some L2ARC device types exhibit extremely slow write performance. 7185 * To accommodate for this there are some significant differences between 7186 * the L2ARC and traditional cache design: 7187 * 7188 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from 7189 * the ARC behave as usual, freeing buffers and placing headers on ghost 7190 * lists. The ARC does not send buffers to the L2ARC during eviction as 7191 * this would add inflated write latencies for all ARC memory pressure. 7192 * 7193 * 2. The L2ARC attempts to cache data from the ARC before it is evicted. 7194 * It does this by periodically scanning buffers from the eviction-end of 7195 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are 7196 * not already there. It scans until a headroom of buffers is satisfied, 7197 * which itself is a buffer for ARC eviction. If a compressible buffer is 7198 * found during scanning and selected for writing to an L2ARC device, we 7199 * temporarily boost scanning headroom during the next scan cycle to make 7200 * sure we adapt to compression effects (which might significantly reduce 7201 * the data volume we write to L2ARC). The thread that does this is 7202 * l2arc_feed_thread(), illustrated below; example sizes are included to 7203 * provide a better sense of ratio than this diagram: 7204 * 7205 * head --> tail 7206 * +---------------------+----------+ 7207 * ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->. # already on L2ARC 7208 * +---------------------+----------+ | o L2ARC eligible 7209 * ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer 7210 * +---------------------+----------+ | 7211 * 15.9 Gbytes ^ 32 Mbytes | 7212 * headroom | 7213 * l2arc_feed_thread() 7214 * | 7215 * l2arc write hand <--[oooo]--' 7216 * | 8 Mbyte 7217 * | write max 7218 * V 7219 * +==============================+ 7220 * L2ARC dev |####|#|###|###| |####| ... | 7221 * +==============================+ 7222 * 32 Gbytes 7223 * 7224 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of 7225 * evicted, then the L2ARC has cached a buffer much sooner than it probably 7226 * needed to, potentially wasting L2ARC device bandwidth and storage. It is 7227 * safe to say that this is an uncommon case, since buffers at the end of 7228 * the ARC lists have moved there due to inactivity. 7229 * 7230 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom, 7231 * then the L2ARC simply misses copying some buffers. This serves as a 7232 * pressure valve to prevent heavy read workloads from both stalling the ARC 7233 * with waits and clogging the L2ARC with writes. This also helps prevent 7234 * the potential for the L2ARC to churn if it attempts to cache content too 7235 * quickly, such as during backups of the entire pool. 7236 * 7237 * 5. After system boot and before the ARC has filled main memory, there are 7238 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru 7239 * lists can remain mostly static. Instead of searching from tail of these 7240 * lists as pictured, the l2arc_feed_thread() will search from the list heads 7241 * for eligible buffers, greatly increasing its chance of finding them. 7242 * 7243 * The L2ARC device write speed is also boosted during this time so that 7244 * the L2ARC warms up faster. Since there have been no ARC evictions yet, 7245 * there are no L2ARC reads, and no fear of degrading read performance 7246 * through increased writes. 7247 * 7248 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that 7249 * the vdev queue can aggregate them into larger and fewer writes. Each 7250 * device is written to in a rotor fashion, sweeping writes through 7251 * available space then repeating. 7252 * 7253 * 7. The L2ARC does not store dirty content. It never needs to flush 7254 * write buffers back to disk based storage. 7255 * 7256 * 8. If an ARC buffer is written (and dirtied) which also exists in the 7257 * L2ARC, the now stale L2ARC buffer is immediately dropped. 7258 * 7259 * The performance of the L2ARC can be tweaked by a number of tunables, which 7260 * may be necessary for different workloads: 7261 * 7262 * l2arc_write_max max write bytes per interval 7263 * l2arc_write_boost extra write bytes during device warmup 7264 * l2arc_noprefetch skip caching prefetched buffers 7265 * l2arc_headroom number of max device writes to precache 7266 * l2arc_headroom_boost when we find compressed buffers during ARC 7267 * scanning, we multiply headroom by this 7268 * percentage factor for the next scan cycle, 7269 * since more compressed buffers are likely to 7270 * be present 7271 * l2arc_feed_secs seconds between L2ARC writing 7272 * 7273 * Tunables may be removed or added as future performance improvements are 7274 * integrated, and also may become zpool properties. 7275 * 7276 * There are three key functions that control how the L2ARC warms up: 7277 * 7278 * l2arc_write_eligible() check if a buffer is eligible to cache 7279 * l2arc_write_size() calculate how much to write 7280 * l2arc_write_interval() calculate sleep delay between writes 7281 * 7282 * These three functions determine what to write, how much, and how quickly 7283 * to send writes. 7284 * 7285 * L2ARC persistence: 7286 * 7287 * When writing buffers to L2ARC, we periodically add some metadata to 7288 * make sure we can pick them up after reboot, thus dramatically reducing 7289 * the impact that any downtime has on the performance of storage systems 7290 * with large caches. 7291 * 7292 * The implementation works fairly simply by integrating the following two 7293 * modifications: 7294 * 7295 * *) When writing to the L2ARC, we occasionally write a "l2arc log block", 7296 * which is an additional piece of metadata which describes what's been 7297 * written. This allows us to rebuild the arc_buf_hdr_t structures of the 7298 * main ARC buffers. There are 2 linked-lists of log blocks headed by 7299 * dh_start_lbps[2]. We alternate which chain we append to, so they are 7300 * time-wise and offset-wise interleaved, but that is an optimization rather 7301 * than for correctness. The log block also includes a pointer to the 7302 * previous block in its chain. 7303 * 7304 * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device 7305 * for our header bookkeeping purposes. This contains a device header, 7306 * which contains our top-level reference structures. We update it each 7307 * time we write a new log block, so that we're able to locate it in the 7308 * L2ARC device. If this write results in an inconsistent device header 7309 * (e.g. due to power failure), we detect this by verifying the header's 7310 * checksum and simply fail to reconstruct the L2ARC after reboot. 7311 * 7312 * Implementation diagram: 7313 * 7314 * +=== L2ARC device (not to scale) ======================================+ 7315 * | ___two newest log block pointers__.__________ | 7316 * | / \dh_start_lbps[1] | 7317 * | / \ \dh_start_lbps[0]| 7318 * |.___/__. V V | 7319 * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---| 7320 * || hdr| ^ /^ /^ / / | 7321 * |+------+ ...--\-------/ \-----/--\------/ / | 7322 * | \--------------/ \--------------/ | 7323 * +======================================================================+ 7324 * 7325 * As can be seen on the diagram, rather than using a simple linked list, 7326 * we use a pair of linked lists with alternating elements. This is a 7327 * performance enhancement due to the fact that we only find out the 7328 * address of the next log block access once the current block has been 7329 * completely read in. Obviously, this hurts performance, because we'd be 7330 * keeping the device's I/O queue at only a 1 operation deep, thus 7331 * incurring a large amount of I/O round-trip latency. Having two lists 7332 * allows us to fetch two log blocks ahead of where we are currently 7333 * rebuilding L2ARC buffers. 7334 * 7335 * On-device data structures: 7336 * 7337 * L2ARC device header: l2arc_dev_hdr_phys_t 7338 * L2ARC log block: l2arc_log_blk_phys_t 7339 * 7340 * L2ARC reconstruction: 7341 * 7342 * When writing data, we simply write in the standard rotary fashion, 7343 * evicting buffers as we go and simply writing new data over them (writing 7344 * a new log block every now and then). This obviously means that once we 7345 * loop around the end of the device, we will start cutting into an already 7346 * committed log block (and its referenced data buffers), like so: 7347 * 7348 * current write head__ __old tail 7349 * \ / 7350 * V V 7351 * <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |--> 7352 * ^ ^^^^^^^^^___________________________________ 7353 * | \ 7354 * <<nextwrite>> may overwrite this blk and/or its bufs --' 7355 * 7356 * When importing the pool, we detect this situation and use it to stop 7357 * our scanning process (see l2arc_rebuild). 7358 * 7359 * There is one significant caveat to consider when rebuilding ARC contents 7360 * from an L2ARC device: what about invalidated buffers? Given the above 7361 * construction, we cannot update blocks which we've already written to amend 7362 * them to remove buffers which were invalidated. Thus, during reconstruction, 7363 * we might be populating the cache with buffers for data that's not on the 7364 * main pool anymore, or may have been overwritten! 7365 * 7366 * As it turns out, this isn't a problem. Every arc_read request includes 7367 * both the DVA and, crucially, the birth TXG of the BP the caller is 7368 * looking for. So even if the cache were populated by completely rotten 7369 * blocks for data that had been long deleted and/or overwritten, we'll 7370 * never actually return bad data from the cache, since the DVA with the 7371 * birth TXG uniquely identify a block in space and time - once created, 7372 * a block is immutable on disk. The worst thing we have done is wasted 7373 * some time and memory at l2arc rebuild to reconstruct outdated ARC 7374 * entries that will get dropped from the l2arc as it is being updated 7375 * with new blocks. 7376 * 7377 * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write 7378 * hand are not restored. This is done by saving the offset (in bytes) 7379 * l2arc_evict() has evicted to in the L2ARC device header and taking it 7380 * into account when restoring buffers. 7381 */ 7382 7383 static boolean_t 7384 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr) 7385 { 7386 /* 7387 * A buffer is *not* eligible for the L2ARC if it: 7388 * 1. belongs to a different spa. 7389 * 2. is already cached on the L2ARC. 7390 * 3. has an I/O in progress (it may be an incomplete read). 7391 * 4. is flagged not eligible (zfs property). 7392 */ 7393 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) || 7394 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr)) 7395 return (B_FALSE); 7396 7397 return (B_TRUE); 7398 } 7399 7400 static uint64_t 7401 l2arc_write_size(l2arc_dev_t *dev) 7402 { 7403 uint64_t size, dev_size; 7404 7405 /* 7406 * Make sure our globals have meaningful values in case the user 7407 * altered them. 7408 */ 7409 size = l2arc_write_max; 7410 if (size == 0) { 7411 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must " 7412 "be greater than zero, resetting it to the default (%d)", 7413 L2ARC_WRITE_SIZE); 7414 size = l2arc_write_max = L2ARC_WRITE_SIZE; 7415 } 7416 7417 if (arc_warm == B_FALSE) 7418 size += l2arc_write_boost; 7419 7420 /* 7421 * Make sure the write size does not exceed the size of the cache 7422 * device. This is important in l2arc_evict(), otherwise infinite 7423 * iteration can occur. 7424 */ 7425 dev_size = dev->l2ad_end - dev->l2ad_start; 7426 if ((size + l2arc_log_blk_overhead(size, dev)) >= dev_size) { 7427 cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost " 7428 "plus the overhead of log blocks (persistent L2ARC, " 7429 "%" PRIu64 " bytes) exceeds the size of the cache device " 7430 "(guid %" PRIu64 "), resetting them to the default (%d)", 7431 l2arc_log_blk_overhead(size, dev), 7432 dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE); 7433 size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE; 7434 7435 if (arc_warm == B_FALSE) 7436 size += l2arc_write_boost; 7437 } 7438 7439 return (size); 7440 7441 } 7442 7443 static clock_t 7444 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote) 7445 { 7446 clock_t interval, next, now; 7447 7448 /* 7449 * If the ARC lists are busy, increase our write rate; if the 7450 * lists are stale, idle back. This is achieved by checking 7451 * how much we previously wrote - if it was more than half of 7452 * what we wanted, schedule the next write much sooner. 7453 */ 7454 if (l2arc_feed_again && wrote > (wanted / 2)) 7455 interval = (hz * l2arc_feed_min_ms) / 1000; 7456 else 7457 interval = hz * l2arc_feed_secs; 7458 7459 now = ddi_get_lbolt(); 7460 next = MAX(now, MIN(now + interval, began + interval)); 7461 7462 return (next); 7463 } 7464 7465 /* 7466 * Cycle through L2ARC devices. This is how L2ARC load balances. 7467 * If a device is returned, this also returns holding the spa config lock. 7468 */ 7469 static l2arc_dev_t * 7470 l2arc_dev_get_next(void) 7471 { 7472 l2arc_dev_t *first, *next = NULL; 7473 7474 /* 7475 * Lock out the removal of spas (spa_namespace_lock), then removal 7476 * of cache devices (l2arc_dev_mtx). Once a device has been selected, 7477 * both locks will be dropped and a spa config lock held instead. 7478 */ 7479 mutex_enter(&spa_namespace_lock); 7480 mutex_enter(&l2arc_dev_mtx); 7481 7482 /* if there are no vdevs, there is nothing to do */ 7483 if (l2arc_ndev == 0) 7484 goto out; 7485 7486 first = NULL; 7487 next = l2arc_dev_last; 7488 do { 7489 /* loop around the list looking for a non-faulted vdev */ 7490 if (next == NULL) { 7491 next = list_head(l2arc_dev_list); 7492 } else { 7493 next = list_next(l2arc_dev_list, next); 7494 if (next == NULL) 7495 next = list_head(l2arc_dev_list); 7496 } 7497 7498 /* if we have come back to the start, bail out */ 7499 if (first == NULL) 7500 first = next; 7501 else if (next == first) 7502 break; 7503 7504 } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild); 7505 7506 /* if we were unable to find any usable vdevs, return NULL */ 7507 if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild) 7508 next = NULL; 7509 7510 l2arc_dev_last = next; 7511 7512 out: 7513 mutex_exit(&l2arc_dev_mtx); 7514 7515 /* 7516 * Grab the config lock to prevent the 'next' device from being 7517 * removed while we are writing to it. 7518 */ 7519 if (next != NULL) 7520 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); 7521 mutex_exit(&spa_namespace_lock); 7522 7523 return (next); 7524 } 7525 7526 /* 7527 * Free buffers that were tagged for destruction. 7528 */ 7529 static void 7530 l2arc_do_free_on_write() 7531 { 7532 list_t *buflist; 7533 l2arc_data_free_t *df, *df_prev; 7534 7535 mutex_enter(&l2arc_free_on_write_mtx); 7536 buflist = l2arc_free_on_write; 7537 7538 for (df = list_tail(buflist); df; df = df_prev) { 7539 df_prev = list_prev(buflist, df); 7540 ASSERT3P(df->l2df_abd, !=, NULL); 7541 abd_free(df->l2df_abd); 7542 list_remove(buflist, df); 7543 kmem_free(df, sizeof (l2arc_data_free_t)); 7544 } 7545 7546 mutex_exit(&l2arc_free_on_write_mtx); 7547 } 7548 7549 /* 7550 * A write to a cache device has completed. Update all headers to allow 7551 * reads from these buffers to begin. 7552 */ 7553 static void 7554 l2arc_write_done(zio_t *zio) 7555 { 7556 l2arc_write_callback_t *cb; 7557 l2arc_lb_abd_buf_t *abd_buf; 7558 l2arc_lb_ptr_buf_t *lb_ptr_buf; 7559 l2arc_dev_t *dev; 7560 l2arc_dev_hdr_phys_t *l2dhdr; 7561 list_t *buflist; 7562 arc_buf_hdr_t *head, *hdr, *hdr_prev; 7563 kmutex_t *hash_lock; 7564 int64_t bytes_dropped = 0; 7565 7566 cb = zio->io_private; 7567 ASSERT3P(cb, !=, NULL); 7568 dev = cb->l2wcb_dev; 7569 l2dhdr = dev->l2ad_dev_hdr; 7570 ASSERT3P(dev, !=, NULL); 7571 head = cb->l2wcb_head; 7572 ASSERT3P(head, !=, NULL); 7573 buflist = &dev->l2ad_buflist; 7574 ASSERT3P(buflist, !=, NULL); 7575 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio, 7576 l2arc_write_callback_t *, cb); 7577 7578 if (zio->io_error != 0) 7579 ARCSTAT_BUMP(arcstat_l2_writes_error); 7580 7581 /* 7582 * All writes completed, or an error was hit. 7583 */ 7584 top: 7585 mutex_enter(&dev->l2ad_mtx); 7586 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) { 7587 hdr_prev = list_prev(buflist, hdr); 7588 7589 hash_lock = HDR_LOCK(hdr); 7590 7591 /* 7592 * We cannot use mutex_enter or else we can deadlock 7593 * with l2arc_write_buffers (due to swapping the order 7594 * the hash lock and l2ad_mtx are taken). 7595 */ 7596 if (!mutex_tryenter(hash_lock)) { 7597 /* 7598 * Missed the hash lock. We must retry so we 7599 * don't leave the ARC_FLAG_L2_WRITING bit set. 7600 */ 7601 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry); 7602 7603 /* 7604 * We don't want to rescan the headers we've 7605 * already marked as having been written out, so 7606 * we reinsert the head node so we can pick up 7607 * where we left off. 7608 */ 7609 list_remove(buflist, head); 7610 list_insert_after(buflist, hdr, head); 7611 7612 mutex_exit(&dev->l2ad_mtx); 7613 7614 /* 7615 * We wait for the hash lock to become available 7616 * to try and prevent busy waiting, and increase 7617 * the chance we'll be able to acquire the lock 7618 * the next time around. 7619 */ 7620 mutex_enter(hash_lock); 7621 mutex_exit(hash_lock); 7622 goto top; 7623 } 7624 7625 /* 7626 * We could not have been moved into the arc_l2c_only 7627 * state while in-flight due to our ARC_FLAG_L2_WRITING 7628 * bit being set. Let's just ensure that's being enforced. 7629 */ 7630 ASSERT(HDR_HAS_L1HDR(hdr)); 7631 7632 if (zio->io_error != 0) { 7633 /* 7634 * Error - drop L2ARC entry. 7635 */ 7636 list_remove(buflist, hdr); 7637 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 7638 7639 uint64_t psize = HDR_GET_PSIZE(hdr); 7640 ARCSTAT_INCR(arcstat_l2_psize, -psize); 7641 ARCSTAT_INCR(arcstat_l2_lsize, -HDR_GET_LSIZE(hdr)); 7642 7643 bytes_dropped += 7644 vdev_psize_to_asize(dev->l2ad_vdev, psize); 7645 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 7646 arc_hdr_size(hdr), hdr); 7647 } 7648 7649 /* 7650 * Allow ARC to begin reads and ghost list evictions to 7651 * this L2ARC entry. 7652 */ 7653 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING); 7654 7655 mutex_exit(hash_lock); 7656 } 7657 7658 /* 7659 * Free the allocated abd buffers for writing the log blocks. 7660 * If the zio failed reclaim the allocated space and remove the 7661 * pointers to these log blocks from the log block pointer list 7662 * of the L2ARC device. 7663 */ 7664 while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) { 7665 abd_free(abd_buf->abd); 7666 zio_buf_free(abd_buf, sizeof (*abd_buf)); 7667 if (zio->io_error != 0) { 7668 lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list); 7669 /* 7670 * L2BLK_GET_PSIZE returns aligned size for log 7671 * blocks. 7672 */ 7673 uint64_t asize = 7674 L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop); 7675 bytes_dropped += asize; 7676 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 7677 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 7678 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 7679 lb_ptr_buf); 7680 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 7681 kmem_free(lb_ptr_buf->lb_ptr, 7682 sizeof (l2arc_log_blkptr_t)); 7683 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 7684 } 7685 } 7686 list_destroy(&cb->l2wcb_abd_list); 7687 7688 if (zio->io_error != 0) { 7689 /* 7690 * Restore the lbps array in the header to its previous state. 7691 * If the list of log block pointers is empty, zero out the 7692 * log block pointers in the device header. 7693 */ 7694 lb_ptr_buf = list_head(&dev->l2ad_lbptr_list); 7695 for (int i = 0; i < 2; i++) { 7696 if (lb_ptr_buf == NULL) { 7697 /* 7698 * If the list is empty zero out the device 7699 * header. Otherwise zero out the second log 7700 * block pointer in the header. 7701 */ 7702 if (i == 0) { 7703 bzero(l2dhdr, dev->l2ad_dev_hdr_asize); 7704 } else { 7705 bzero(&l2dhdr->dh_start_lbps[i], 7706 sizeof (l2arc_log_blkptr_t)); 7707 } 7708 break; 7709 } 7710 bcopy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[i], 7711 sizeof (l2arc_log_blkptr_t)); 7712 lb_ptr_buf = list_next(&dev->l2ad_lbptr_list, 7713 lb_ptr_buf); 7714 } 7715 } 7716 7717 atomic_inc_64(&l2arc_writes_done); 7718 list_remove(buflist, head); 7719 ASSERT(!HDR_HAS_L1HDR(head)); 7720 kmem_cache_free(hdr_l2only_cache, head); 7721 mutex_exit(&dev->l2ad_mtx); 7722 7723 ASSERT(dev->l2ad_vdev != NULL); 7724 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0); 7725 7726 l2arc_do_free_on_write(); 7727 7728 kmem_free(cb, sizeof (l2arc_write_callback_t)); 7729 } 7730 7731 static int 7732 l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb) 7733 { 7734 int ret; 7735 spa_t *spa = zio->io_spa; 7736 arc_buf_hdr_t *hdr = cb->l2rcb_hdr; 7737 blkptr_t *bp = zio->io_bp; 7738 uint8_t salt[ZIO_DATA_SALT_LEN]; 7739 uint8_t iv[ZIO_DATA_IV_LEN]; 7740 uint8_t mac[ZIO_DATA_MAC_LEN]; 7741 boolean_t no_crypt = B_FALSE; 7742 7743 /* 7744 * ZIL data is never be written to the L2ARC, so we don't need 7745 * special handling for its unique MAC storage. 7746 */ 7747 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 7748 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 7749 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 7750 7751 /* 7752 * If the data was encrypted, decrypt it now. Note that 7753 * we must check the bp here and not the hdr, since the 7754 * hdr does not have its encryption parameters updated 7755 * until arc_read_done(). 7756 */ 7757 if (BP_IS_ENCRYPTED(bp)) { 7758 abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 7759 B_TRUE); 7760 7761 zio_crypt_decode_params_bp(bp, salt, iv); 7762 zio_crypt_decode_mac_bp(bp, mac); 7763 7764 ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb, 7765 BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp), 7766 salt, iv, mac, HDR_GET_PSIZE(hdr), eabd, 7767 hdr->b_l1hdr.b_pabd, &no_crypt); 7768 if (ret != 0) { 7769 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 7770 goto error; 7771 } 7772 7773 /* 7774 * If we actually performed decryption, replace b_pabd 7775 * with the decrypted data. Otherwise we can just throw 7776 * our decryption buffer away. 7777 */ 7778 if (!no_crypt) { 7779 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 7780 arc_hdr_size(hdr), hdr); 7781 hdr->b_l1hdr.b_pabd = eabd; 7782 zio->io_abd = eabd; 7783 } else { 7784 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 7785 } 7786 } 7787 7788 /* 7789 * If the L2ARC block was compressed, but ARC compression 7790 * is disabled we decompress the data into a new buffer and 7791 * replace the existing data. 7792 */ 7793 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 7794 !HDR_COMPRESSION_ENABLED(hdr)) { 7795 abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 7796 B_TRUE); 7797 void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 7798 7799 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 7800 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 7801 HDR_GET_LSIZE(hdr)); 7802 if (ret != 0) { 7803 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 7804 arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); 7805 goto error; 7806 } 7807 7808 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 7809 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 7810 arc_hdr_size(hdr), hdr); 7811 hdr->b_l1hdr.b_pabd = cabd; 7812 zio->io_abd = cabd; 7813 zio->io_size = HDR_GET_LSIZE(hdr); 7814 } 7815 7816 return (0); 7817 7818 error: 7819 return (ret); 7820 } 7821 7822 7823 /* 7824 * A read to a cache device completed. Validate buffer contents before 7825 * handing over to the regular ARC routines. 7826 */ 7827 static void 7828 l2arc_read_done(zio_t *zio) 7829 { 7830 int tfm_error = 0; 7831 l2arc_read_callback_t *cb = zio->io_private; 7832 arc_buf_hdr_t *hdr; 7833 kmutex_t *hash_lock; 7834 boolean_t valid_cksum; 7835 boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && 7836 (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); 7837 7838 ASSERT3P(zio->io_vd, !=, NULL); 7839 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); 7840 7841 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); 7842 7843 ASSERT3P(cb, !=, NULL); 7844 hdr = cb->l2rcb_hdr; 7845 ASSERT3P(hdr, !=, NULL); 7846 7847 hash_lock = HDR_LOCK(hdr); 7848 mutex_enter(hash_lock); 7849 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 7850 7851 /* 7852 * If the data was read into a temporary buffer, 7853 * move it and free the buffer. 7854 */ 7855 if (cb->l2rcb_abd != NULL) { 7856 ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); 7857 if (zio->io_error == 0) { 7858 if (using_rdata) { 7859 abd_copy(hdr->b_crypt_hdr.b_rabd, 7860 cb->l2rcb_abd, arc_hdr_size(hdr)); 7861 } else { 7862 abd_copy(hdr->b_l1hdr.b_pabd, 7863 cb->l2rcb_abd, arc_hdr_size(hdr)); 7864 } 7865 } 7866 7867 /* 7868 * The following must be done regardless of whether 7869 * there was an error: 7870 * - free the temporary buffer 7871 * - point zio to the real ARC buffer 7872 * - set zio size accordingly 7873 * These are required because zio is either re-used for 7874 * an I/O of the block in the case of the error 7875 * or the zio is passed to arc_read_done() and it 7876 * needs real data. 7877 */ 7878 abd_free(cb->l2rcb_abd); 7879 zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); 7880 7881 if (using_rdata) { 7882 ASSERT(HDR_HAS_RABD(hdr)); 7883 zio->io_abd = zio->io_orig_abd = 7884 hdr->b_crypt_hdr.b_rabd; 7885 } else { 7886 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 7887 zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; 7888 } 7889 } 7890 7891 ASSERT3P(zio->io_abd, !=, NULL); 7892 7893 /* 7894 * Check this survived the L2ARC journey. 7895 */ 7896 ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd || 7897 (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd)); 7898 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */ 7899 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */ 7900 7901 valid_cksum = arc_cksum_is_equal(hdr, zio); 7902 7903 /* 7904 * b_rabd will always match the data as it exists on disk if it is 7905 * being used. Therefore if we are reading into b_rabd we do not 7906 * attempt to untransform the data. 7907 */ 7908 if (valid_cksum && !using_rdata) 7909 tfm_error = l2arc_untransform(zio, cb); 7910 7911 if (valid_cksum && tfm_error == 0 && zio->io_error == 0 && 7912 !HDR_L2_EVICTED(hdr)) { 7913 mutex_exit(hash_lock); 7914 zio->io_private = hdr; 7915 arc_read_done(zio); 7916 } else { 7917 /* 7918 * Buffer didn't survive caching. Increment stats and 7919 * reissue to the original storage device. 7920 */ 7921 if (zio->io_error != 0) { 7922 ARCSTAT_BUMP(arcstat_l2_io_error); 7923 } else { 7924 zio->io_error = SET_ERROR(EIO); 7925 } 7926 if (!valid_cksum || tfm_error != 0) 7927 ARCSTAT_BUMP(arcstat_l2_cksum_bad); 7928 7929 /* 7930 * If there's no waiter, issue an async i/o to the primary 7931 * storage now. If there *is* a waiter, the caller must 7932 * issue the i/o in a context where it's OK to block. 7933 */ 7934 if (zio->io_waiter == NULL) { 7935 zio_t *pio = zio_unique_parent(zio); 7936 void *abd = (using_rdata) ? 7937 hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd; 7938 7939 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL); 7940 7941 zio = zio_read(pio, zio->io_spa, zio->io_bp, 7942 abd, zio->io_size, arc_read_done, 7943 hdr, zio->io_priority, cb->l2rcb_flags, 7944 &cb->l2rcb_zb); 7945 7946 /* 7947 * Original ZIO will be freed, so we need to update 7948 * ARC header with the new ZIO pointer to be used 7949 * by zio_change_priority() in arc_read(). 7950 */ 7951 for (struct arc_callback *acb = hdr->b_l1hdr.b_acb; 7952 acb != NULL; acb = acb->acb_next) 7953 acb->acb_zio_head = zio; 7954 7955 mutex_exit(hash_lock); 7956 zio_nowait(zio); 7957 } else { 7958 mutex_exit(hash_lock); 7959 } 7960 } 7961 7962 kmem_free(cb, sizeof (l2arc_read_callback_t)); 7963 } 7964 7965 /* 7966 * This is the list priority from which the L2ARC will search for pages to 7967 * cache. This is used within loops (0..3) to cycle through lists in the 7968 * desired order. This order can have a significant effect on cache 7969 * performance. 7970 * 7971 * Currently the metadata lists are hit first, MFU then MRU, followed by 7972 * the data lists. This function returns a locked list, and also returns 7973 * the lock pointer. 7974 */ 7975 static multilist_sublist_t * 7976 l2arc_sublist_lock(int list_num) 7977 { 7978 multilist_t *ml = NULL; 7979 unsigned int idx; 7980 7981 ASSERT(list_num >= 0 && list_num <= 3); 7982 7983 switch (list_num) { 7984 case 0: 7985 ml = arc_mfu->arcs_list[ARC_BUFC_METADATA]; 7986 break; 7987 case 1: 7988 ml = arc_mru->arcs_list[ARC_BUFC_METADATA]; 7989 break; 7990 case 2: 7991 ml = arc_mfu->arcs_list[ARC_BUFC_DATA]; 7992 break; 7993 case 3: 7994 ml = arc_mru->arcs_list[ARC_BUFC_DATA]; 7995 break; 7996 } 7997 7998 /* 7999 * Return a randomly-selected sublist. This is acceptable 8000 * because the caller feeds only a little bit of data for each 8001 * call (8MB). Subsequent calls will result in different 8002 * sublists being selected. 8003 */ 8004 idx = multilist_get_random_index(ml); 8005 return (multilist_sublist_lock(ml, idx)); 8006 } 8007 8008 /* 8009 * Calculates the maximum overhead of L2ARC metadata log blocks for a given 8010 * L2ARC write size. l2arc_evict and l2arc_write_size need to include this 8011 * overhead in processing to make sure there is enough headroom available 8012 * when writing buffers. 8013 */ 8014 static inline uint64_t 8015 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev) 8016 { 8017 if (dev->l2ad_log_entries == 0) { 8018 return (0); 8019 } else { 8020 uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT; 8021 8022 uint64_t log_blocks = (log_entries + 8023 dev->l2ad_log_entries - 1) / 8024 dev->l2ad_log_entries; 8025 8026 return (vdev_psize_to_asize(dev->l2ad_vdev, 8027 sizeof (l2arc_log_blk_phys_t)) * log_blocks); 8028 } 8029 } 8030 8031 /* 8032 * Evict buffers from the device write hand to the distance specified in 8033 * bytes. This distance may span populated buffers, it may span nothing. 8034 * This is clearing a region on the L2ARC device ready for writing. 8035 * If the 'all' boolean is set, every buffer is evicted. 8036 */ 8037 static void 8038 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all) 8039 { 8040 list_t *buflist; 8041 arc_buf_hdr_t *hdr, *hdr_prev; 8042 kmutex_t *hash_lock; 8043 uint64_t taddr; 8044 l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev; 8045 boolean_t rerun; 8046 8047 buflist = &dev->l2ad_buflist; 8048 8049 /* 8050 * We need to add in the worst case scenario of log block overhead. 8051 */ 8052 distance += l2arc_log_blk_overhead(distance, dev); 8053 8054 top: 8055 rerun = B_FALSE; 8056 if (dev->l2ad_hand >= (dev->l2ad_end - distance)) { 8057 /* 8058 * When there is no space to accommodate upcoming writes, 8059 * evict to the end. Then bump the write and evict hands 8060 * to the start and iterate. This iteration does not 8061 * happen indefinitely as we make sure in 8062 * l2arc_write_size() that when the write hand is reset, 8063 * the write size does not exceed the end of the device. 8064 */ 8065 rerun = B_TRUE; 8066 taddr = dev->l2ad_end; 8067 } else { 8068 taddr = dev->l2ad_hand + distance; 8069 } 8070 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist, 8071 uint64_t, taddr, boolean_t, all); 8072 8073 /* 8074 * This check has to be placed after deciding whether to iterate 8075 * (rerun). 8076 */ 8077 if (!all && dev->l2ad_first) { 8078 /* 8079 * This is the first sweep through the device. There is 8080 * nothing to evict. 8081 */ 8082 goto out; 8083 } 8084 8085 /* 8086 * When rebuilding L2ARC we retrieve the evict hand from the header of 8087 * the device. Of note, l2arc_evict() does not actually delete buffers 8088 * from the cache device, but keeping track of the evict hand will be 8089 * useful when TRIM is implemented. 8090 */ 8091 dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); 8092 8093 retry: 8094 mutex_enter(&dev->l2ad_mtx); 8095 /* 8096 * We have to account for evicted log blocks. Run vdev_space_update() 8097 * on log blocks whose offset (in bytes) is before the evicted offset 8098 * (in bytes) by searching in the list of pointers to log blocks 8099 * present in the L2ARC device. 8100 */ 8101 for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; 8102 lb_ptr_buf = lb_ptr_buf_prev) { 8103 8104 lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); 8105 8106 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 8107 uint64_t asize = L2BLK_GET_PSIZE( 8108 (lb_ptr_buf->lb_ptr)->lbp_prop); 8109 8110 /* 8111 * We don't worry about log blocks left behind (ie 8112 * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() 8113 * will never write more than l2arc_evict() evicts. 8114 */ 8115 if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { 8116 break; 8117 } else { 8118 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 8119 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8120 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8121 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8122 lb_ptr_buf); 8123 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8124 list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); 8125 kmem_free(lb_ptr_buf->lb_ptr, 8126 sizeof (l2arc_log_blkptr_t)); 8127 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8128 } 8129 } 8130 8131 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { 8132 hdr_prev = list_prev(buflist, hdr); 8133 8134 ASSERT(!HDR_EMPTY(hdr)); 8135 hash_lock = HDR_LOCK(hdr); 8136 8137 /* 8138 * We cannot use mutex_enter or else we can deadlock 8139 * with l2arc_write_buffers (due to swapping the order 8140 * the hash lock and l2ad_mtx are taken). 8141 */ 8142 if (!mutex_tryenter(hash_lock)) { 8143 /* 8144 * Missed the hash lock. Retry. 8145 */ 8146 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); 8147 mutex_exit(&dev->l2ad_mtx); 8148 mutex_enter(hash_lock); 8149 mutex_exit(hash_lock); 8150 goto retry; 8151 } 8152 8153 /* 8154 * A header can't be on this list if it doesn't have L2 header. 8155 */ 8156 ASSERT(HDR_HAS_L2HDR(hdr)); 8157 8158 /* Ensure this header has finished being written. */ 8159 ASSERT(!HDR_L2_WRITING(hdr)); 8160 ASSERT(!HDR_L2_WRITE_HEAD(hdr)); 8161 8162 if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || 8163 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { 8164 /* 8165 * We've evicted to the target address, 8166 * or the end of the device. 8167 */ 8168 mutex_exit(hash_lock); 8169 break; 8170 } 8171 8172 if (!HDR_HAS_L1HDR(hdr)) { 8173 ASSERT(!HDR_L2_READING(hdr)); 8174 /* 8175 * This doesn't exist in the ARC. Destroy. 8176 * arc_hdr_destroy() will call list_remove() 8177 * and decrement arcstat_l2_lsize. 8178 */ 8179 arc_change_state(arc_anon, hdr, hash_lock); 8180 arc_hdr_destroy(hdr); 8181 } else { 8182 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only); 8183 ARCSTAT_BUMP(arcstat_l2_evict_l1cached); 8184 /* 8185 * Invalidate issued or about to be issued 8186 * reads, since we may be about to write 8187 * over this location. 8188 */ 8189 if (HDR_L2_READING(hdr)) { 8190 ARCSTAT_BUMP(arcstat_l2_evict_reading); 8191 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED); 8192 } 8193 8194 arc_hdr_l2hdr_destroy(hdr); 8195 } 8196 mutex_exit(hash_lock); 8197 } 8198 mutex_exit(&dev->l2ad_mtx); 8199 8200 out: 8201 /* 8202 * We need to check if we evict all buffers, otherwise we may iterate 8203 * unnecessarily. 8204 */ 8205 if (!all && rerun) { 8206 /* 8207 * Bump device hand to the device start if it is approaching the 8208 * end. l2arc_evict() has already evicted ahead for this case. 8209 */ 8210 dev->l2ad_hand = dev->l2ad_start; 8211 dev->l2ad_evict = dev->l2ad_start; 8212 dev->l2ad_first = B_FALSE; 8213 goto top; 8214 } 8215 8216 ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end); 8217 if (!dev->l2ad_first) 8218 ASSERT3U(dev->l2ad_hand, <, dev->l2ad_evict); 8219 } 8220 8221 /* 8222 * Handle any abd transforms that might be required for writing to the L2ARC. 8223 * If successful, this function will always return an abd with the data 8224 * transformed as it is on disk in a new abd of asize bytes. 8225 */ 8226 static int 8227 l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize, 8228 abd_t **abd_out) 8229 { 8230 int ret; 8231 void *tmp = NULL; 8232 abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd; 8233 enum zio_compress compress = HDR_GET_COMPRESS(hdr); 8234 uint64_t psize = HDR_GET_PSIZE(hdr); 8235 uint64_t size = arc_hdr_size(hdr); 8236 boolean_t ismd = HDR_ISTYPE_METADATA(hdr); 8237 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 8238 dsl_crypto_key_t *dck = NULL; 8239 uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 }; 8240 boolean_t no_crypt = B_FALSE; 8241 8242 ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8243 !HDR_COMPRESSION_ENABLED(hdr)) || 8244 HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize); 8245 ASSERT3U(psize, <=, asize); 8246 8247 /* 8248 * If this data simply needs its own buffer, we simply allocate it 8249 * and copy the data. This may be done to eliminate a dependency on a 8250 * shared buffer or to reallocate the buffer to match asize. 8251 */ 8252 if (HDR_HAS_RABD(hdr) && asize != psize) { 8253 ASSERT3U(asize, >=, psize); 8254 to_write = abd_alloc_for_io(asize, ismd); 8255 abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize); 8256 if (psize != asize) 8257 abd_zero_off(to_write, psize, asize - psize); 8258 goto out; 8259 } 8260 8261 if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) && 8262 !HDR_ENCRYPTED(hdr)) { 8263 ASSERT3U(size, ==, psize); 8264 to_write = abd_alloc_for_io(asize, ismd); 8265 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); 8266 if (size != asize) 8267 abd_zero_off(to_write, size, asize - size); 8268 goto out; 8269 } 8270 8271 if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) { 8272 cabd = abd_alloc_for_io(asize, ismd); 8273 tmp = abd_borrow_buf(cabd, asize); 8274 8275 psize = zio_compress_data(compress, to_write, tmp, size); 8276 ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr)); 8277 if (psize < asize) 8278 bzero((char *)tmp + psize, asize - psize); 8279 psize = HDR_GET_PSIZE(hdr); 8280 abd_return_buf_copy(cabd, tmp, asize); 8281 to_write = cabd; 8282 } 8283 8284 if (HDR_ENCRYPTED(hdr)) { 8285 eabd = abd_alloc_for_io(asize, ismd); 8286 8287 /* 8288 * If the dataset was disowned before the buffer 8289 * made it to this point, the key to re-encrypt 8290 * it won't be available. In this case we simply 8291 * won't write the buffer to the L2ARC. 8292 */ 8293 ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj, 8294 FTAG, &dck); 8295 if (ret != 0) 8296 goto error; 8297 8298 ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key, 8299 hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt, 8300 hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd, 8301 &no_crypt); 8302 if (ret != 0) 8303 goto error; 8304 8305 if (no_crypt) 8306 abd_copy(eabd, to_write, psize); 8307 8308 if (psize != asize) 8309 abd_zero_off(eabd, psize, asize - psize); 8310 8311 /* assert that the MAC we got here matches the one we saved */ 8312 ASSERT0(bcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN)); 8313 spa_keystore_dsl_key_rele(spa, dck, FTAG); 8314 8315 if (to_write == cabd) 8316 abd_free(cabd); 8317 8318 to_write = eabd; 8319 } 8320 8321 out: 8322 ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd); 8323 *abd_out = to_write; 8324 return (0); 8325 8326 error: 8327 if (dck != NULL) 8328 spa_keystore_dsl_key_rele(spa, dck, FTAG); 8329 if (cabd != NULL) 8330 abd_free(cabd); 8331 if (eabd != NULL) 8332 abd_free(eabd); 8333 8334 *abd_out = NULL; 8335 return (ret); 8336 } 8337 8338 static void 8339 l2arc_blk_fetch_done(zio_t *zio) 8340 { 8341 l2arc_read_callback_t *cb; 8342 8343 cb = zio->io_private; 8344 if (cb->l2rcb_abd != NULL) 8345 abd_put(cb->l2rcb_abd); 8346 kmem_free(cb, sizeof (l2arc_read_callback_t)); 8347 } 8348 8349 /* 8350 * Find and write ARC buffers to the L2ARC device. 8351 * 8352 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid 8353 * for reading until they have completed writing. 8354 * The headroom_boost is an in-out parameter used to maintain headroom boost 8355 * state between calls to this function. 8356 * 8357 * Returns the number of bytes actually written (which may be smaller than 8358 * the delta by which the device hand has changed due to alignment and the 8359 * writing of log blocks). 8360 */ 8361 static uint64_t 8362 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) 8363 { 8364 arc_buf_hdr_t *hdr, *hdr_prev, *head; 8365 uint64_t write_asize, write_psize, write_lsize, headroom; 8366 boolean_t full; 8367 l2arc_write_callback_t *cb = NULL; 8368 zio_t *pio, *wzio; 8369 uint64_t guid = spa_load_guid(spa); 8370 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 8371 8372 ASSERT3P(dev->l2ad_vdev, !=, NULL); 8373 8374 pio = NULL; 8375 write_lsize = write_asize = write_psize = 0; 8376 full = B_FALSE; 8377 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); 8378 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); 8379 8380 /* 8381 * Copy buffers for L2ARC writing. 8382 */ 8383 for (int try = 0; try <= 3; try++) { 8384 multilist_sublist_t *mls = l2arc_sublist_lock(try); 8385 uint64_t passed_sz = 0; 8386 8387 VERIFY3P(mls, !=, NULL); 8388 8389 /* 8390 * L2ARC fast warmup. 8391 * 8392 * Until the ARC is warm and starts to evict, read from the 8393 * head of the ARC lists rather than the tail. 8394 */ 8395 if (arc_warm == B_FALSE) 8396 hdr = multilist_sublist_head(mls); 8397 else 8398 hdr = multilist_sublist_tail(mls); 8399 8400 headroom = target_sz * l2arc_headroom; 8401 if (zfs_compressed_arc_enabled) 8402 headroom = (headroom * l2arc_headroom_boost) / 100; 8403 8404 for (; hdr; hdr = hdr_prev) { 8405 kmutex_t *hash_lock; 8406 abd_t *to_write = NULL; 8407 8408 if (arc_warm == B_FALSE) 8409 hdr_prev = multilist_sublist_next(mls, hdr); 8410 else 8411 hdr_prev = multilist_sublist_prev(mls, hdr); 8412 8413 hash_lock = HDR_LOCK(hdr); 8414 if (!mutex_tryenter(hash_lock)) { 8415 /* 8416 * Skip this buffer rather than waiting. 8417 */ 8418 continue; 8419 } 8420 8421 passed_sz += HDR_GET_LSIZE(hdr); 8422 if (l2arc_headroom != 0 && passed_sz > headroom) { 8423 /* 8424 * Searched too far. 8425 */ 8426 mutex_exit(hash_lock); 8427 break; 8428 } 8429 8430 if (!l2arc_write_eligible(guid, hdr)) { 8431 mutex_exit(hash_lock); 8432 continue; 8433 } 8434 8435 /* 8436 * We rely on the L1 portion of the header below, so 8437 * it's invalid for this header to have been evicted out 8438 * of the ghost cache, prior to being written out. The 8439 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 8440 */ 8441 ASSERT(HDR_HAS_L1HDR(hdr)); 8442 8443 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 8444 ASSERT3U(arc_hdr_size(hdr), >, 0); 8445 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 8446 HDR_HAS_RABD(hdr)); 8447 uint64_t psize = HDR_GET_PSIZE(hdr); 8448 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, 8449 psize); 8450 8451 if ((write_asize + asize) > target_sz) { 8452 full = B_TRUE; 8453 mutex_exit(hash_lock); 8454 break; 8455 } 8456 8457 /* 8458 * We rely on the L1 portion of the header below, so 8459 * it's invalid for this header to have been evicted out 8460 * of the ghost cache, prior to being written out. The 8461 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 8462 */ 8463 arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); 8464 ASSERT(HDR_HAS_L1HDR(hdr)); 8465 8466 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 8467 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 8468 HDR_HAS_RABD(hdr)); 8469 ASSERT3U(arc_hdr_size(hdr), >, 0); 8470 8471 /* 8472 * If this header has b_rabd, we can use this since it 8473 * must always match the data exactly as it exists on 8474 * disk. Otherwise, the L2ARC can normally use the 8475 * hdr's data, but if we're sharing data between the 8476 * hdr and one of its bufs, L2ARC needs its own copy of 8477 * the data so that the ZIO below can't race with the 8478 * buf consumer. To ensure that this copy will be 8479 * available for the lifetime of the ZIO and be cleaned 8480 * up afterwards, we add it to the l2arc_free_on_write 8481 * queue. If we need to apply any transforms to the 8482 * data (compression, encryption) we will also need the 8483 * extra buffer. 8484 */ 8485 if (HDR_HAS_RABD(hdr) && psize == asize) { 8486 to_write = hdr->b_crypt_hdr.b_rabd; 8487 } else if ((HDR_COMPRESSION_ENABLED(hdr) || 8488 HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && 8489 !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && 8490 psize == asize) { 8491 to_write = hdr->b_l1hdr.b_pabd; 8492 } else { 8493 int ret; 8494 arc_buf_contents_t type = arc_buf_type(hdr); 8495 8496 ret = l2arc_apply_transforms(spa, hdr, asize, 8497 &to_write); 8498 if (ret != 0) { 8499 arc_hdr_clear_flags(hdr, 8500 ARC_FLAG_L2_WRITING); 8501 mutex_exit(hash_lock); 8502 continue; 8503 } 8504 8505 l2arc_free_abd_on_write(to_write, asize, type); 8506 } 8507 8508 if (pio == NULL) { 8509 /* 8510 * Insert a dummy header on the buflist so 8511 * l2arc_write_done() can find where the 8512 * write buffers begin without searching. 8513 */ 8514 mutex_enter(&dev->l2ad_mtx); 8515 list_insert_head(&dev->l2ad_buflist, head); 8516 mutex_exit(&dev->l2ad_mtx); 8517 8518 cb = kmem_alloc( 8519 sizeof (l2arc_write_callback_t), KM_SLEEP); 8520 cb->l2wcb_dev = dev; 8521 cb->l2wcb_head = head; 8522 /* 8523 * Create a list to save allocated abd buffers 8524 * for l2arc_log_blk_commit(). 8525 */ 8526 list_create(&cb->l2wcb_abd_list, 8527 sizeof (l2arc_lb_abd_buf_t), 8528 offsetof(l2arc_lb_abd_buf_t, node)); 8529 pio = zio_root(spa, l2arc_write_done, cb, 8530 ZIO_FLAG_CANFAIL); 8531 } 8532 8533 hdr->b_l2hdr.b_dev = dev; 8534 hdr->b_l2hdr.b_daddr = dev->l2ad_hand; 8535 arc_hdr_set_flags(hdr, 8536 ARC_FLAG_L2_WRITING | ARC_FLAG_HAS_L2HDR); 8537 8538 mutex_enter(&dev->l2ad_mtx); 8539 list_insert_head(&dev->l2ad_buflist, hdr); 8540 mutex_exit(&dev->l2ad_mtx); 8541 8542 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 8543 arc_hdr_size(hdr), hdr); 8544 8545 wzio = zio_write_phys(pio, dev->l2ad_vdev, 8546 hdr->b_l2hdr.b_daddr, asize, to_write, 8547 ZIO_CHECKSUM_OFF, NULL, hdr, 8548 ZIO_PRIORITY_ASYNC_WRITE, 8549 ZIO_FLAG_CANFAIL, B_FALSE); 8550 8551 write_lsize += HDR_GET_LSIZE(hdr); 8552 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, 8553 zio_t *, wzio); 8554 8555 write_psize += psize; 8556 write_asize += asize; 8557 dev->l2ad_hand += asize; 8558 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 8559 8560 mutex_exit(hash_lock); 8561 8562 /* 8563 * Append buf info to current log and commit if full. 8564 * arcstat_l2_{size,asize} kstats are updated 8565 * internally. 8566 */ 8567 if (l2arc_log_blk_insert(dev, hdr)) 8568 l2arc_log_blk_commit(dev, pio, cb); 8569 8570 (void) zio_nowait(wzio); 8571 } 8572 8573 multilist_sublist_unlock(mls); 8574 8575 if (full == B_TRUE) 8576 break; 8577 } 8578 8579 /* No buffers selected for writing? */ 8580 if (pio == NULL) { 8581 ASSERT0(write_lsize); 8582 ASSERT(!HDR_HAS_L1HDR(head)); 8583 kmem_cache_free(hdr_l2only_cache, head); 8584 8585 /* 8586 * Although we did not write any buffers l2ad_evict may 8587 * have advanced. 8588 */ 8589 if (dev->l2ad_evict != l2dhdr->dh_evict) 8590 l2arc_dev_hdr_update(dev); 8591 8592 return (0); 8593 } 8594 8595 if (!dev->l2ad_first) 8596 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 8597 8598 ASSERT3U(write_asize, <=, target_sz); 8599 ARCSTAT_BUMP(arcstat_l2_writes_sent); 8600 ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); 8601 ARCSTAT_INCR(arcstat_l2_lsize, write_lsize); 8602 ARCSTAT_INCR(arcstat_l2_psize, write_psize); 8603 8604 dev->l2ad_writing = B_TRUE; 8605 (void) zio_wait(pio); 8606 dev->l2ad_writing = B_FALSE; 8607 8608 /* 8609 * Update the device header after the zio completes as 8610 * l2arc_write_done() may have updated the memory holding the log block 8611 * pointers in the device header. 8612 */ 8613 l2arc_dev_hdr_update(dev); 8614 8615 return (write_asize); 8616 } 8617 8618 /* 8619 * This thread feeds the L2ARC at regular intervals. This is the beating 8620 * heart of the L2ARC. 8621 */ 8622 /* ARGSUSED */ 8623 static void 8624 l2arc_feed_thread(void *unused) 8625 { 8626 callb_cpr_t cpr; 8627 l2arc_dev_t *dev; 8628 spa_t *spa; 8629 uint64_t size, wrote; 8630 clock_t begin, next = ddi_get_lbolt(); 8631 8632 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); 8633 8634 mutex_enter(&l2arc_feed_thr_lock); 8635 8636 while (l2arc_thread_exit == 0) { 8637 CALLB_CPR_SAFE_BEGIN(&cpr); 8638 (void) cv_timedwait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock, 8639 next); 8640 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); 8641 next = ddi_get_lbolt() + hz; 8642 8643 /* 8644 * Quick check for L2ARC devices. 8645 */ 8646 mutex_enter(&l2arc_dev_mtx); 8647 if (l2arc_ndev == 0) { 8648 mutex_exit(&l2arc_dev_mtx); 8649 continue; 8650 } 8651 mutex_exit(&l2arc_dev_mtx); 8652 begin = ddi_get_lbolt(); 8653 8654 /* 8655 * This selects the next l2arc device to write to, and in 8656 * doing so the next spa to feed from: dev->l2ad_spa. This 8657 * will return NULL if there are now no l2arc devices or if 8658 * they are all faulted. 8659 * 8660 * If a device is returned, its spa's config lock is also 8661 * held to prevent device removal. l2arc_dev_get_next() 8662 * will grab and release l2arc_dev_mtx. 8663 */ 8664 if ((dev = l2arc_dev_get_next()) == NULL) 8665 continue; 8666 8667 spa = dev->l2ad_spa; 8668 ASSERT3P(spa, !=, NULL); 8669 8670 /* 8671 * If the pool is read-only then force the feed thread to 8672 * sleep a little longer. 8673 */ 8674 if (!spa_writeable(spa)) { 8675 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; 8676 spa_config_exit(spa, SCL_L2ARC, dev); 8677 continue; 8678 } 8679 8680 /* 8681 * Avoid contributing to memory pressure. 8682 */ 8683 if (arc_reclaim_needed()) { 8684 ARCSTAT_BUMP(arcstat_l2_abort_lowmem); 8685 spa_config_exit(spa, SCL_L2ARC, dev); 8686 continue; 8687 } 8688 8689 ARCSTAT_BUMP(arcstat_l2_feeds); 8690 8691 size = l2arc_write_size(dev); 8692 8693 /* 8694 * Evict L2ARC buffers that will be overwritten. 8695 */ 8696 l2arc_evict(dev, size, B_FALSE); 8697 8698 /* 8699 * Write ARC buffers. 8700 */ 8701 wrote = l2arc_write_buffers(spa, dev, size); 8702 8703 /* 8704 * Calculate interval between writes. 8705 */ 8706 next = l2arc_write_interval(begin, size, wrote); 8707 spa_config_exit(spa, SCL_L2ARC, dev); 8708 } 8709 8710 l2arc_thread_exit = 0; 8711 cv_broadcast(&l2arc_feed_thr_cv); 8712 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */ 8713 thread_exit(); 8714 } 8715 8716 boolean_t 8717 l2arc_vdev_present(vdev_t *vd) 8718 { 8719 return (l2arc_vdev_get(vd) != NULL); 8720 } 8721 8722 /* 8723 * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if 8724 * the vdev_t isn't an L2ARC device. 8725 */ 8726 static l2arc_dev_t * 8727 l2arc_vdev_get(vdev_t *vd) 8728 { 8729 l2arc_dev_t *dev; 8730 8731 mutex_enter(&l2arc_dev_mtx); 8732 for (dev = list_head(l2arc_dev_list); dev != NULL; 8733 dev = list_next(l2arc_dev_list, dev)) { 8734 if (dev->l2ad_vdev == vd) 8735 break; 8736 } 8737 mutex_exit(&l2arc_dev_mtx); 8738 8739 return (dev); 8740 } 8741 8742 /* 8743 * Add a vdev for use by the L2ARC. By this point the spa has already 8744 * validated the vdev and opened it. 8745 */ 8746 void 8747 l2arc_add_vdev(spa_t *spa, vdev_t *vd) 8748 { 8749 l2arc_dev_t *adddev; 8750 uint64_t l2dhdr_asize; 8751 8752 ASSERT(!l2arc_vdev_present(vd)); 8753 8754 /* 8755 * Create a new l2arc device entry. 8756 */ 8757 adddev = kmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); 8758 adddev->l2ad_spa = spa; 8759 adddev->l2ad_vdev = vd; 8760 /* leave extra size for an l2arc device header */ 8761 l2dhdr_asize = adddev->l2ad_dev_hdr_asize = 8762 MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); 8763 adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; 8764 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); 8765 ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); 8766 adddev->l2ad_hand = adddev->l2ad_start; 8767 adddev->l2ad_evict = adddev->l2ad_start; 8768 adddev->l2ad_first = B_TRUE; 8769 adddev->l2ad_writing = B_FALSE; 8770 adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); 8771 8772 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); 8773 /* 8774 * This is a list of all ARC buffers that are still valid on the 8775 * device. 8776 */ 8777 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), 8778 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); 8779 8780 /* 8781 * This is a list of pointers to log blocks that are still present 8782 * on the device. 8783 */ 8784 list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), 8785 offsetof(l2arc_lb_ptr_buf_t, node)); 8786 8787 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); 8788 zfs_refcount_create(&adddev->l2ad_alloc); 8789 zfs_refcount_create(&adddev->l2ad_lb_asize); 8790 zfs_refcount_create(&adddev->l2ad_lb_count); 8791 8792 /* 8793 * Add device to global list 8794 */ 8795 mutex_enter(&l2arc_dev_mtx); 8796 list_insert_head(l2arc_dev_list, adddev); 8797 atomic_inc_64(&l2arc_ndev); 8798 mutex_exit(&l2arc_dev_mtx); 8799 8800 /* 8801 * Decide if vdev is eligible for L2ARC rebuild 8802 */ 8803 l2arc_rebuild_vdev(adddev->l2ad_vdev, B_FALSE); 8804 } 8805 8806 void 8807 l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) 8808 { 8809 l2arc_dev_t *dev = NULL; 8810 l2arc_dev_hdr_phys_t *l2dhdr; 8811 uint64_t l2dhdr_asize; 8812 spa_t *spa; 8813 int err; 8814 boolean_t l2dhdr_valid = B_TRUE; 8815 8816 dev = l2arc_vdev_get(vd); 8817 ASSERT3P(dev, !=, NULL); 8818 spa = dev->l2ad_spa; 8819 l2dhdr = dev->l2ad_dev_hdr; 8820 l2dhdr_asize = dev->l2ad_dev_hdr_asize; 8821 8822 /* 8823 * The L2ARC has to hold at least the payload of one log block for 8824 * them to be restored (persistent L2ARC). The payload of a log block 8825 * depends on the amount of its log entries. We always write log blocks 8826 * with 1022 entries. How many of them are committed or restored depends 8827 * on the size of the L2ARC device. Thus the maximum payload of 8828 * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device 8829 * is less than that, we reduce the amount of committed and restored 8830 * log entries per block so as to enable persistence. 8831 */ 8832 if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) { 8833 dev->l2ad_log_entries = 0; 8834 } else { 8835 dev->l2ad_log_entries = MIN((dev->l2ad_end - 8836 dev->l2ad_start) >> SPA_MAXBLOCKSHIFT, 8837 L2ARC_LOG_BLK_MAX_ENTRIES); 8838 } 8839 8840 /* 8841 * Read the device header, if an error is returned do not rebuild L2ARC. 8842 */ 8843 if ((err = l2arc_dev_hdr_read(dev)) != 0) 8844 l2dhdr_valid = B_FALSE; 8845 8846 if (l2dhdr_valid && dev->l2ad_log_entries > 0) { 8847 /* 8848 * If we are onlining a cache device (vdev_reopen) that was 8849 * still present (l2arc_vdev_present()) and rebuild is enabled, 8850 * we should evict all ARC buffers and pointers to log blocks 8851 * and reclaim their space before restoring its contents to 8852 * L2ARC. 8853 */ 8854 if (reopen) { 8855 if (!l2arc_rebuild_enabled) { 8856 return; 8857 } else { 8858 l2arc_evict(dev, 0, B_TRUE); 8859 /* start a new log block */ 8860 dev->l2ad_log_ent_idx = 0; 8861 dev->l2ad_log_blk_payload_asize = 0; 8862 dev->l2ad_log_blk_payload_start = 0; 8863 } 8864 } 8865 /* 8866 * Just mark the device as pending for a rebuild. We won't 8867 * be starting a rebuild in line here as it would block pool 8868 * import. Instead spa_load_impl will hand that off to an 8869 * async task which will call l2arc_spa_rebuild_start. 8870 */ 8871 dev->l2ad_rebuild = B_TRUE; 8872 } else if (spa_writeable(spa)) { 8873 /* 8874 * In this case create a new header. We zero out the memory 8875 * holding the header to reset dh_start_lbps. 8876 */ 8877 bzero(l2dhdr, l2dhdr_asize); 8878 l2arc_dev_hdr_update(dev); 8879 } 8880 } 8881 8882 /* 8883 * Remove a vdev from the L2ARC. 8884 */ 8885 void 8886 l2arc_remove_vdev(vdev_t *vd) 8887 { 8888 l2arc_dev_t *remdev = NULL; 8889 8890 /* 8891 * Find the device by vdev 8892 */ 8893 remdev = l2arc_vdev_get(vd); 8894 ASSERT3P(remdev, !=, NULL); 8895 8896 /* 8897 * Cancel any ongoing or scheduled rebuild. 8898 */ 8899 mutex_enter(&l2arc_rebuild_thr_lock); 8900 if (remdev->l2ad_rebuild_began == B_TRUE) { 8901 remdev->l2ad_rebuild_cancel = B_TRUE; 8902 while (remdev->l2ad_rebuild == B_TRUE) 8903 cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); 8904 } 8905 mutex_exit(&l2arc_rebuild_thr_lock); 8906 8907 /* 8908 * Remove device from global list 8909 */ 8910 mutex_enter(&l2arc_dev_mtx); 8911 list_remove(l2arc_dev_list, remdev); 8912 l2arc_dev_last = NULL; /* may have been invalidated */ 8913 atomic_dec_64(&l2arc_ndev); 8914 mutex_exit(&l2arc_dev_mtx); 8915 8916 /* 8917 * Clear all buflists and ARC references. L2ARC device flush. 8918 */ 8919 l2arc_evict(remdev, 0, B_TRUE); 8920 list_destroy(&remdev->l2ad_buflist); 8921 ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); 8922 list_destroy(&remdev->l2ad_lbptr_list); 8923 mutex_destroy(&remdev->l2ad_mtx); 8924 zfs_refcount_destroy(&remdev->l2ad_alloc); 8925 zfs_refcount_destroy(&remdev->l2ad_lb_asize); 8926 zfs_refcount_destroy(&remdev->l2ad_lb_count); 8927 kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); 8928 kmem_free(remdev, sizeof (l2arc_dev_t)); 8929 } 8930 8931 void 8932 l2arc_init(void) 8933 { 8934 l2arc_thread_exit = 0; 8935 l2arc_ndev = 0; 8936 l2arc_writes_sent = 0; 8937 l2arc_writes_done = 0; 8938 8939 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); 8940 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); 8941 mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); 8942 cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); 8943 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); 8944 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); 8945 8946 l2arc_dev_list = &L2ARC_dev_list; 8947 l2arc_free_on_write = &L2ARC_free_on_write; 8948 list_create(l2arc_dev_list, sizeof (l2arc_dev_t), 8949 offsetof(l2arc_dev_t, l2ad_node)); 8950 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), 8951 offsetof(l2arc_data_free_t, l2df_list_node)); 8952 } 8953 8954 void 8955 l2arc_fini(void) 8956 { 8957 /* 8958 * This is called from dmu_fini(), which is called from spa_fini(); 8959 * Because of this, we can assume that all l2arc devices have 8960 * already been removed when the pools themselves were removed. 8961 */ 8962 8963 l2arc_do_free_on_write(); 8964 8965 mutex_destroy(&l2arc_feed_thr_lock); 8966 cv_destroy(&l2arc_feed_thr_cv); 8967 mutex_destroy(&l2arc_rebuild_thr_lock); 8968 cv_destroy(&l2arc_rebuild_thr_cv); 8969 mutex_destroy(&l2arc_dev_mtx); 8970 mutex_destroy(&l2arc_free_on_write_mtx); 8971 8972 list_destroy(l2arc_dev_list); 8973 list_destroy(l2arc_free_on_write); 8974 } 8975 8976 void 8977 l2arc_start(void) 8978 { 8979 if (!(spa_mode_global & FWRITE)) 8980 return; 8981 8982 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, 8983 TS_RUN, minclsyspri); 8984 } 8985 8986 void 8987 l2arc_stop(void) 8988 { 8989 if (!(spa_mode_global & FWRITE)) 8990 return; 8991 8992 mutex_enter(&l2arc_feed_thr_lock); 8993 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ 8994 l2arc_thread_exit = 1; 8995 while (l2arc_thread_exit != 0) 8996 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock); 8997 mutex_exit(&l2arc_feed_thr_lock); 8998 } 8999 9000 /* 9001 * Punches out rebuild threads for the L2ARC devices in a spa. This should 9002 * be called after pool import from the spa async thread, since starting 9003 * these threads directly from spa_import() will make them part of the 9004 * "zpool import" context and delay process exit (and thus pool import). 9005 */ 9006 void 9007 l2arc_spa_rebuild_start(spa_t *spa) 9008 { 9009 ASSERT(MUTEX_HELD(&spa_namespace_lock)); 9010 9011 /* 9012 * Locate the spa's l2arc devices and kick off rebuild threads. 9013 */ 9014 for (int i = 0; i < spa->spa_l2cache.sav_count; i++) { 9015 l2arc_dev_t *dev = 9016 l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]); 9017 if (dev == NULL) { 9018 /* Don't attempt a rebuild if the vdev is UNAVAIL */ 9019 continue; 9020 } 9021 mutex_enter(&l2arc_rebuild_thr_lock); 9022 if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) { 9023 dev->l2ad_rebuild_began = B_TRUE; 9024 (void) thread_create(NULL, 0, 9025 (void (*)(void *))l2arc_dev_rebuild_start, 9026 dev, 0, &p0, TS_RUN, minclsyspri); 9027 } 9028 mutex_exit(&l2arc_rebuild_thr_lock); 9029 } 9030 } 9031 9032 /* 9033 * Main entry point for L2ARC rebuilding. 9034 */ 9035 static void 9036 l2arc_dev_rebuild_start(l2arc_dev_t *dev) 9037 { 9038 VERIFY(!dev->l2ad_rebuild_cancel); 9039 VERIFY(dev->l2ad_rebuild); 9040 (void) l2arc_rebuild(dev); 9041 mutex_enter(&l2arc_rebuild_thr_lock); 9042 dev->l2ad_rebuild_began = B_FALSE; 9043 dev->l2ad_rebuild = B_FALSE; 9044 mutex_exit(&l2arc_rebuild_thr_lock); 9045 9046 thread_exit(); 9047 } 9048 9049 /* 9050 * This function implements the actual L2ARC metadata rebuild. It: 9051 * starts reading the log block chain and restores each block's contents 9052 * to memory (reconstructing arc_buf_hdr_t's). 9053 * 9054 * Operation stops under any of the following conditions: 9055 * 9056 * 1) We reach the end of the log block chain. 9057 * 2) We encounter *any* error condition (cksum errors, io errors) 9058 */ 9059 static int 9060 l2arc_rebuild(l2arc_dev_t *dev) 9061 { 9062 vdev_t *vd = dev->l2ad_vdev; 9063 spa_t *spa = vd->vdev_spa; 9064 int err = 0; 9065 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9066 l2arc_log_blk_phys_t *this_lb, *next_lb; 9067 zio_t *this_io = NULL, *next_io = NULL; 9068 l2arc_log_blkptr_t lbps[2]; 9069 l2arc_lb_ptr_buf_t *lb_ptr_buf; 9070 boolean_t lock_held; 9071 9072 this_lb = kmem_zalloc(sizeof (*this_lb), KM_SLEEP); 9073 next_lb = kmem_zalloc(sizeof (*next_lb), KM_SLEEP); 9074 9075 /* 9076 * We prevent device removal while issuing reads to the device, 9077 * then during the rebuilding phases we drop this lock again so 9078 * that a spa_unload or device remove can be initiated - this is 9079 * safe, because the spa will signal us to stop before removing 9080 * our device and wait for us to stop. 9081 */ 9082 spa_config_enter(spa, SCL_L2ARC, vd, RW_READER); 9083 lock_held = B_TRUE; 9084 9085 /* 9086 * Retrieve the persistent L2ARC device state. 9087 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9088 */ 9089 dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); 9090 dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + 9091 L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), 9092 dev->l2ad_start); 9093 dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); 9094 9095 /* 9096 * In case the zfs module parameter l2arc_rebuild_enabled is false 9097 * we do not start the rebuild process. 9098 */ 9099 if (!l2arc_rebuild_enabled) 9100 goto out; 9101 9102 /* Prepare the rebuild process */ 9103 bcopy(l2dhdr->dh_start_lbps, lbps, sizeof (lbps)); 9104 9105 /* Start the rebuild process */ 9106 for (;;) { 9107 if (!l2arc_log_blkptr_valid(dev, &lbps[0])) 9108 break; 9109 9110 if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], 9111 this_lb, next_lb, this_io, &next_io)) != 0) 9112 goto out; 9113 9114 /* 9115 * Our memory pressure valve. If the system is running low 9116 * on memory, rather than swamping memory with new ARC buf 9117 * hdrs, we opt not to rebuild the L2ARC. At this point, 9118 * however, we have already set up our L2ARC dev to chain in 9119 * new metadata log blocks, so the user may choose to offline/ 9120 * online the L2ARC dev at a later time (or re-import the pool) 9121 * to reconstruct it (when there's less memory pressure). 9122 */ 9123 if (arc_reclaim_needed()) { 9124 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); 9125 cmn_err(CE_NOTE, "System running low on memory, " 9126 "aborting L2ARC rebuild."); 9127 err = SET_ERROR(ENOMEM); 9128 goto out; 9129 } 9130 9131 spa_config_exit(spa, SCL_L2ARC, vd); 9132 lock_held = B_FALSE; 9133 9134 /* 9135 * Now that we know that the next_lb checks out alright, we 9136 * can start reconstruction from this log block. 9137 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9138 */ 9139 uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); 9140 l2arc_log_blk_restore(dev, this_lb, asize, lbps[0].lbp_daddr); 9141 9142 /* 9143 * log block restored, include its pointer in the list of 9144 * pointers to log blocks present in the L2ARC device. 9145 */ 9146 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 9147 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), 9148 KM_SLEEP); 9149 bcopy(&lbps[0], lb_ptr_buf->lb_ptr, 9150 sizeof (l2arc_log_blkptr_t)); 9151 mutex_enter(&dev->l2ad_mtx); 9152 list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); 9153 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 9154 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 9155 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 9156 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 9157 mutex_exit(&dev->l2ad_mtx); 9158 vdev_space_update(vd, asize, 0, 0); 9159 9160 /* BEGIN CSTYLED */ 9161 /* 9162 * Protection against loops of log blocks: 9163 * 9164 * l2ad_hand l2ad_evict 9165 * V V 9166 * l2ad_start |=======================================| l2ad_end 9167 * -----|||----|||---|||----||| 9168 * (3) (2) (1) (0) 9169 * ---|||---|||----|||---||| 9170 * (7) (6) (5) (4) 9171 * 9172 * In this situation the pointer of log block (4) passes 9173 * l2arc_log_blkptr_valid() but the log block should not be 9174 * restored as it is overwritten by the payload of log block 9175 * (0). Only log blocks (0)-(3) should be restored. We check 9176 * whether l2ad_evict lies in between the payload starting 9177 * offset of the next log block (lbps[1].lbp_payload_start) 9178 * and the payload starting offset of the present log block 9179 * (lbps[0].lbp_payload_start). If true and this isn't the 9180 * first pass, we are looping from the beginning and we should 9181 * stop. 9182 */ 9183 /* END CSTYLED */ 9184 if (l2arc_range_check_overlap(lbps[1].lbp_payload_start, 9185 lbps[0].lbp_payload_start, dev->l2ad_evict) && 9186 !dev->l2ad_first) 9187 goto out; 9188 9189 for (;;) { 9190 mutex_enter(&l2arc_rebuild_thr_lock); 9191 if (dev->l2ad_rebuild_cancel) { 9192 dev->l2ad_rebuild = B_FALSE; 9193 cv_signal(&l2arc_rebuild_thr_cv); 9194 mutex_exit(&l2arc_rebuild_thr_lock); 9195 err = SET_ERROR(ECANCELED); 9196 goto out; 9197 } 9198 mutex_exit(&l2arc_rebuild_thr_lock); 9199 if (spa_config_tryenter(spa, SCL_L2ARC, vd, 9200 RW_READER)) { 9201 lock_held = B_TRUE; 9202 break; 9203 } 9204 /* 9205 * L2ARC config lock held by somebody in writer, 9206 * possibly due to them trying to remove us. They'll 9207 * likely to want us to shut down, so after a little 9208 * delay, we check l2ad_rebuild_cancel and retry 9209 * the lock again. 9210 */ 9211 delay(1); 9212 } 9213 9214 /* 9215 * Continue with the next log block. 9216 */ 9217 lbps[0] = lbps[1]; 9218 lbps[1] = this_lb->lb_prev_lbp; 9219 PTR_SWAP(this_lb, next_lb); 9220 this_io = next_io; 9221 next_io = NULL; 9222 } 9223 9224 if (this_io != NULL) 9225 l2arc_log_blk_fetch_abort(this_io); 9226 out: 9227 if (next_io != NULL) 9228 l2arc_log_blk_fetch_abort(next_io); 9229 kmem_free(this_lb, sizeof (*this_lb)); 9230 kmem_free(next_lb, sizeof (*next_lb)); 9231 9232 if (!l2arc_rebuild_enabled) { 9233 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9234 "disabled"); 9235 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) { 9236 ARCSTAT_BUMP(arcstat_l2_rebuild_success); 9237 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9238 "successful, restored %llu blocks", 9239 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 9240 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) { 9241 /* 9242 * No error but also nothing restored, meaning the lbps array 9243 * in the device header points to invalid/non-present log 9244 * blocks. Reset the header. 9245 */ 9246 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9247 "no valid log blocks"); 9248 bzero(l2dhdr, dev->l2ad_dev_hdr_asize); 9249 l2arc_dev_hdr_update(dev); 9250 } else if (err != 0) { 9251 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9252 "aborted, restored %llu blocks", 9253 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 9254 } 9255 9256 if (lock_held) 9257 spa_config_exit(spa, SCL_L2ARC, vd); 9258 9259 return (err); 9260 } 9261 9262 /* 9263 * Attempts to read the device header on the provided L2ARC device and writes 9264 * it to `hdr'. On success, this function returns 0, otherwise the appropriate 9265 * error code is returned. 9266 */ 9267 static int 9268 l2arc_dev_hdr_read(l2arc_dev_t *dev) 9269 { 9270 int err; 9271 uint64_t guid; 9272 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9273 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9274 abd_t *abd; 9275 9276 guid = spa_guid(dev->l2ad_vdev->vdev_spa); 9277 9278 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 9279 9280 err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev, 9281 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, 9282 ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_ASYNC_READ, 9283 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 9284 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY | 9285 ZIO_FLAG_SPECULATIVE, B_FALSE)); 9286 9287 abd_put(abd); 9288 9289 if (err != 0) { 9290 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors); 9291 zfs_dbgmsg("L2ARC IO error (%d) while reading device header, " 9292 "vdev guid: %llu", err, dev->l2ad_vdev->vdev_guid); 9293 return (err); 9294 } 9295 9296 if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC)) 9297 byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr)); 9298 9299 if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC || 9300 l2dhdr->dh_spa_guid != guid || 9301 l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid || 9302 l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION || 9303 l2dhdr->dh_log_entries != dev->l2ad_log_entries || 9304 l2dhdr->dh_end != dev->l2ad_end || 9305 !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end, 9306 l2dhdr->dh_evict)) { 9307 /* 9308 * Attempt to rebuild a device containing no actual dev hdr 9309 * or containing a header from some other pool or from another 9310 * version of persistent L2ARC. 9311 */ 9312 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported); 9313 return (SET_ERROR(ENOTSUP)); 9314 } 9315 9316 return (0); 9317 } 9318 9319 /* 9320 * Reads L2ARC log blocks from storage and validates their contents. 9321 * 9322 * This function implements a simple fetcher to make sure that while 9323 * we're processing one buffer the L2ARC is already fetching the next 9324 * one in the chain. 9325 * 9326 * The arguments this_lp and next_lp point to the current and next log block 9327 * address in the block chain. Similarly, this_lb and next_lb hold the 9328 * l2arc_log_blk_phys_t's of the current and next L2ARC blk. 9329 * 9330 * The `this_io' and `next_io' arguments are used for block fetching. 9331 * When issuing the first blk IO during rebuild, you should pass NULL for 9332 * `this_io'. This function will then issue a sync IO to read the block and 9333 * also issue an async IO to fetch the next block in the block chain. The 9334 * fetched IO is returned in `next_io'. On subsequent calls to this 9335 * function, pass the value returned in `next_io' from the previous call 9336 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO. 9337 * Prior to the call, you should initialize your `next_io' pointer to be 9338 * NULL. If no fetch IO was issued, the pointer is left set at NULL. 9339 * 9340 * On success, this function returns 0, otherwise it returns an appropriate 9341 * error code. On error the fetching IO is aborted and cleared before 9342 * returning from this function. Therefore, if we return `success', the 9343 * caller can assume that we have taken care of cleanup of fetch IOs. 9344 */ 9345 static int 9346 l2arc_log_blk_read(l2arc_dev_t *dev, 9347 const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp, 9348 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 9349 zio_t *this_io, zio_t **next_io) 9350 { 9351 int err = 0; 9352 zio_cksum_t cksum; 9353 abd_t *abd = NULL; 9354 uint64_t asize; 9355 9356 ASSERT(this_lbp != NULL && next_lbp != NULL); 9357 ASSERT(this_lb != NULL && next_lb != NULL); 9358 ASSERT(next_io != NULL && *next_io == NULL); 9359 ASSERT(l2arc_log_blkptr_valid(dev, this_lbp)); 9360 9361 /* 9362 * Check to see if we have issued the IO for this log block in a 9363 * previous run. If not, this is the first call, so issue it now. 9364 */ 9365 if (this_io == NULL) { 9366 this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp, 9367 this_lb); 9368 } 9369 9370 /* 9371 * Peek to see if we can start issuing the next IO immediately. 9372 */ 9373 if (l2arc_log_blkptr_valid(dev, next_lbp)) { 9374 /* 9375 * Start issuing IO for the next log block early - this 9376 * should help keep the L2ARC device busy while we 9377 * decompress and restore this log block. 9378 */ 9379 *next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp, 9380 next_lb); 9381 } 9382 9383 /* Wait for the IO to read this log block to complete */ 9384 if ((err = zio_wait(this_io)) != 0) { 9385 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors); 9386 zfs_dbgmsg("L2ARC IO error (%d) while reading log block, " 9387 "offset: %llu, vdev guid: %llu", err, this_lbp->lbp_daddr, 9388 dev->l2ad_vdev->vdev_guid); 9389 goto cleanup; 9390 } 9391 9392 /* 9393 * Make sure the buffer checks out. 9394 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9395 */ 9396 asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop); 9397 fletcher_4_native(this_lb, asize, NULL, &cksum); 9398 if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) { 9399 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors); 9400 zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, " 9401 "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu", 9402 this_lbp->lbp_daddr, dev->l2ad_vdev->vdev_guid, 9403 dev->l2ad_hand, dev->l2ad_evict); 9404 err = SET_ERROR(ECKSUM); 9405 goto cleanup; 9406 } 9407 9408 /* Now we can take our time decoding this buffer */ 9409 switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) { 9410 case ZIO_COMPRESS_OFF: 9411 break; 9412 case ZIO_COMPRESS_LZ4: 9413 abd = abd_alloc_for_io(asize, B_TRUE); 9414 abd_copy_from_buf_off(abd, this_lb, 0, asize); 9415 if ((err = zio_decompress_data( 9416 L2BLK_GET_COMPRESS((this_lbp)->lbp_prop), 9417 abd, this_lb, asize, sizeof (*this_lb))) != 0) { 9418 err = SET_ERROR(EINVAL); 9419 goto cleanup; 9420 } 9421 break; 9422 default: 9423 err = SET_ERROR(EINVAL); 9424 goto cleanup; 9425 } 9426 if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC)) 9427 byteswap_uint64_array(this_lb, sizeof (*this_lb)); 9428 if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) { 9429 err = SET_ERROR(EINVAL); 9430 goto cleanup; 9431 } 9432 cleanup: 9433 /* Abort an in-flight fetch I/O in case of error */ 9434 if (err != 0 && *next_io != NULL) { 9435 l2arc_log_blk_fetch_abort(*next_io); 9436 *next_io = NULL; 9437 } 9438 if (abd != NULL) 9439 abd_free(abd); 9440 return (err); 9441 } 9442 9443 /* 9444 * Restores the payload of a log block to ARC. This creates empty ARC hdr 9445 * entries which only contain an l2arc hdr, essentially restoring the 9446 * buffers to their L2ARC evicted state. This function also updates space 9447 * usage on the L2ARC vdev to make sure it tracks restored buffers. 9448 */ 9449 static void 9450 l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb, 9451 uint64_t lb_asize, uint64_t lb_daddr) 9452 { 9453 uint64_t size = 0, asize = 0; 9454 uint64_t log_entries = dev->l2ad_log_entries; 9455 9456 for (int i = log_entries - 1; i >= 0; i--) { 9457 /* 9458 * Restore goes in the reverse temporal direction to preserve 9459 * correct temporal ordering of buffers in the l2ad_buflist. 9460 * l2arc_hdr_restore also does a list_insert_tail instead of 9461 * list_insert_head on the l2ad_buflist: 9462 * 9463 * LIST l2ad_buflist LIST 9464 * HEAD <------ (time) ------ TAIL 9465 * direction +-----+-----+-----+-----+-----+ direction 9466 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild 9467 * fill +-----+-----+-----+-----+-----+ 9468 * ^ ^ 9469 * | | 9470 * | | 9471 * l2arc_feed_thread l2arc_rebuild 9472 * will place new bufs here restores bufs here 9473 * 9474 * During l2arc_rebuild() the device is not used by 9475 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true. 9476 */ 9477 size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop); 9478 asize += vdev_psize_to_asize(dev->l2ad_vdev, 9479 L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop)); 9480 l2arc_hdr_restore(&lb->lb_entries[i], dev); 9481 } 9482 9483 /* 9484 * Record rebuild stats: 9485 * size Logical size of restored buffers in the L2ARC 9486 * asize Aligned size of restored buffers in the L2ARC 9487 */ 9488 ARCSTAT_INCR(arcstat_l2_rebuild_size, size); 9489 ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize); 9490 ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries); 9491 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize); 9492 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize); 9493 ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks); 9494 } 9495 9496 /* 9497 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put 9498 * into a state indicating that it has been evicted to L2ARC. 9499 */ 9500 static void 9501 l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev) 9502 { 9503 arc_buf_hdr_t *hdr, *exists; 9504 kmutex_t *hash_lock; 9505 arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop); 9506 uint64_t asize; 9507 9508 /* 9509 * Do all the allocation before grabbing any locks, this lets us 9510 * sleep if memory is full and we don't have to deal with failed 9511 * allocations. 9512 */ 9513 hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type, 9514 dev, le->le_dva, le->le_daddr, 9515 L2BLK_GET_PSIZE((le)->le_prop), le->le_birth, 9516 L2BLK_GET_COMPRESS((le)->le_prop), 9517 L2BLK_GET_PROTECTED((le)->le_prop), 9518 L2BLK_GET_PREFETCH((le)->le_prop)); 9519 asize = vdev_psize_to_asize(dev->l2ad_vdev, 9520 L2BLK_GET_PSIZE((le)->le_prop)); 9521 9522 /* 9523 * vdev_space_update() has to be called before arc_hdr_destroy() to 9524 * avoid underflow since the latter also calls the former. 9525 */ 9526 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9527 9528 ARCSTAT_INCR(arcstat_l2_lsize, HDR_GET_LSIZE(hdr)); 9529 ARCSTAT_INCR(arcstat_l2_psize, HDR_GET_PSIZE(hdr)); 9530 9531 mutex_enter(&dev->l2ad_mtx); 9532 list_insert_tail(&dev->l2ad_buflist, hdr); 9533 (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr); 9534 mutex_exit(&dev->l2ad_mtx); 9535 9536 exists = buf_hash_insert(hdr, &hash_lock); 9537 if (exists) { 9538 /* Buffer was already cached, no need to restore it. */ 9539 arc_hdr_destroy(hdr); 9540 /* 9541 * If the buffer is already cached, check whether it has 9542 * L2ARC metadata. If not, enter them and update the flag. 9543 * This is important is case of onlining a cache device, since 9544 * we previously evicted all L2ARC metadata from ARC. 9545 */ 9546 if (!HDR_HAS_L2HDR(exists)) { 9547 arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR); 9548 exists->b_l2hdr.b_dev = dev; 9549 exists->b_l2hdr.b_daddr = le->le_daddr; 9550 mutex_enter(&dev->l2ad_mtx); 9551 list_insert_tail(&dev->l2ad_buflist, exists); 9552 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 9553 arc_hdr_size(exists), exists); 9554 mutex_exit(&dev->l2ad_mtx); 9555 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9556 ARCSTAT_INCR(arcstat_l2_lsize, HDR_GET_LSIZE(exists)); 9557 ARCSTAT_INCR(arcstat_l2_psize, HDR_GET_PSIZE(exists)); 9558 } 9559 ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached); 9560 } 9561 9562 mutex_exit(hash_lock); 9563 } 9564 9565 /* 9566 * Starts an asynchronous read IO to read a log block. This is used in log 9567 * block reconstruction to start reading the next block before we are done 9568 * decoding and reconstructing the current block, to keep the l2arc device 9569 * nice and hot with read IO to process. 9570 * The returned zio will contain newly allocated memory buffers for the IO 9571 * data which should then be freed by the caller once the zio is no longer 9572 * needed (i.e. due to it having completed). If you wish to abort this 9573 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes 9574 * care of disposing of the allocated buffers correctly. 9575 */ 9576 static zio_t * 9577 l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp, 9578 l2arc_log_blk_phys_t *lb) 9579 { 9580 uint32_t asize; 9581 zio_t *pio; 9582 l2arc_read_callback_t *cb; 9583 9584 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 9585 asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 9586 ASSERT(asize <= sizeof (l2arc_log_blk_phys_t)); 9587 9588 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP); 9589 cb->l2rcb_abd = abd_get_from_buf(lb, asize); 9590 pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb, 9591 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | 9592 ZIO_FLAG_DONT_RETRY); 9593 (void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize, 9594 cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL, 9595 ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 9596 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE)); 9597 9598 return (pio); 9599 } 9600 9601 /* 9602 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data 9603 * buffers allocated for it. 9604 */ 9605 static void 9606 l2arc_log_blk_fetch_abort(zio_t *zio) 9607 { 9608 (void) zio_wait(zio); 9609 } 9610 9611 /* 9612 * Creates a zio to update the device header on an l2arc device. 9613 */ 9614 static void 9615 l2arc_dev_hdr_update(l2arc_dev_t *dev) 9616 { 9617 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9618 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9619 abd_t *abd; 9620 int err; 9621 9622 VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER)); 9623 9624 l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC; 9625 l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION; 9626 l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa); 9627 l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid; 9628 l2dhdr->dh_log_entries = dev->l2ad_log_entries; 9629 l2dhdr->dh_evict = dev->l2ad_evict; 9630 l2dhdr->dh_start = dev->l2ad_start; 9631 l2dhdr->dh_end = dev->l2ad_end; 9632 l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize); 9633 l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count); 9634 l2dhdr->dh_flags = 0; 9635 if (dev->l2ad_first) 9636 l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST; 9637 9638 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 9639 9640 err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev, 9641 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL, 9642 NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE)); 9643 9644 abd_put(abd); 9645 9646 if (err != 0) { 9647 zfs_dbgmsg("L2ARC IO error (%d) while writing device header, " 9648 "vdev guid: %llu", err, dev->l2ad_vdev->vdev_guid); 9649 } 9650 } 9651 9652 /* 9653 * Commits a log block to the L2ARC device. This routine is invoked from 9654 * l2arc_write_buffers when the log block fills up. 9655 * This function allocates some memory to temporarily hold the serialized 9656 * buffer to be written. This is then released in l2arc_write_done. 9657 */ 9658 static void 9659 l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb) 9660 { 9661 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 9662 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9663 uint64_t psize, asize; 9664 zio_t *wzio; 9665 l2arc_lb_abd_buf_t *abd_buf; 9666 uint8_t *tmpbuf; 9667 l2arc_lb_ptr_buf_t *lb_ptr_buf; 9668 9669 VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries); 9670 9671 tmpbuf = zio_buf_alloc(sizeof (*lb)); 9672 abd_buf = zio_buf_alloc(sizeof (*abd_buf)); 9673 abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb)); 9674 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 9675 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP); 9676 9677 /* link the buffer into the block chain */ 9678 lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1]; 9679 lb->lb_magic = L2ARC_LOG_BLK_MAGIC; 9680 9681 /* 9682 * l2arc_log_blk_commit() may be called multiple times during a single 9683 * l2arc_write_buffers() call. Save the allocated abd buffers in a list 9684 * so we can free them in l2arc_write_done() later on. 9685 */ 9686 list_insert_tail(&cb->l2wcb_abd_list, abd_buf); 9687 9688 /* try to compress the buffer */ 9689 psize = zio_compress_data(ZIO_COMPRESS_LZ4, 9690 abd_buf->abd, tmpbuf, sizeof (*lb)); 9691 9692 /* a log block is never entirely zero */ 9693 ASSERT(psize != 0); 9694 asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 9695 ASSERT(asize <= sizeof (*lb)); 9696 9697 /* 9698 * Update the start log block pointer in the device header to point 9699 * to the log block we're about to write. 9700 */ 9701 l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0]; 9702 l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand; 9703 l2dhdr->dh_start_lbps[0].lbp_payload_asize = 9704 dev->l2ad_log_blk_payload_asize; 9705 l2dhdr->dh_start_lbps[0].lbp_payload_start = 9706 dev->l2ad_log_blk_payload_start; 9707 _NOTE(CONSTCOND) 9708 L2BLK_SET_LSIZE( 9709 (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb)); 9710 L2BLK_SET_PSIZE( 9711 (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize); 9712 L2BLK_SET_CHECKSUM( 9713 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 9714 ZIO_CHECKSUM_FLETCHER_4); 9715 if (asize < sizeof (*lb)) { 9716 /* compression succeeded */ 9717 bzero(tmpbuf + psize, asize - psize); 9718 L2BLK_SET_COMPRESS( 9719 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 9720 ZIO_COMPRESS_LZ4); 9721 } else { 9722 /* compression failed */ 9723 bcopy(lb, tmpbuf, sizeof (*lb)); 9724 L2BLK_SET_COMPRESS( 9725 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 9726 ZIO_COMPRESS_OFF); 9727 } 9728 9729 /* checksum what we're about to write */ 9730 fletcher_4_native(tmpbuf, asize, NULL, 9731 &l2dhdr->dh_start_lbps[0].lbp_cksum); 9732 9733 abd_put(abd_buf->abd); 9734 9735 /* perform the write itself */ 9736 abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb)); 9737 abd_take_ownership_of_buf(abd_buf->abd, B_TRUE); 9738 wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand, 9739 asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL, 9740 ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE); 9741 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio); 9742 (void) zio_nowait(wzio); 9743 9744 dev->l2ad_hand += asize; 9745 /* 9746 * Include the committed log block's pointer in the list of pointers 9747 * to log blocks present in the L2ARC device. 9748 */ 9749 bcopy(&l2dhdr->dh_start_lbps[0], lb_ptr_buf->lb_ptr, 9750 sizeof (l2arc_log_blkptr_t)); 9751 mutex_enter(&dev->l2ad_mtx); 9752 list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf); 9753 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 9754 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 9755 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 9756 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 9757 mutex_exit(&dev->l2ad_mtx); 9758 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9759 9760 /* bump the kstats */ 9761 ARCSTAT_INCR(arcstat_l2_write_bytes, asize); 9762 ARCSTAT_BUMP(arcstat_l2_log_blk_writes); 9763 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize); 9764 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, 9765 dev->l2ad_log_blk_payload_asize / asize); 9766 9767 /* start a new log block */ 9768 dev->l2ad_log_ent_idx = 0; 9769 dev->l2ad_log_blk_payload_asize = 0; 9770 dev->l2ad_log_blk_payload_start = 0; 9771 } 9772 9773 /* 9774 * Validates an L2ARC log block address to make sure that it can be read 9775 * from the provided L2ARC device. 9776 */ 9777 boolean_t 9778 l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp) 9779 { 9780 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 9781 uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 9782 uint64_t end = lbp->lbp_daddr + asize - 1; 9783 uint64_t start = lbp->lbp_payload_start; 9784 boolean_t evicted = B_FALSE; 9785 9786 /* BEGIN CSTYLED */ 9787 /* 9788 * A log block is valid if all of the following conditions are true: 9789 * - it fits entirely (including its payload) between l2ad_start and 9790 * l2ad_end 9791 * - it has a valid size 9792 * - neither the log block itself nor part of its payload was evicted 9793 * by l2arc_evict(): 9794 * 9795 * l2ad_hand l2ad_evict 9796 * | | lbp_daddr 9797 * | start | | end 9798 * | | | | | 9799 * V V V V V 9800 * l2ad_start ============================================ l2ad_end 9801 * --------------------------|||| 9802 * ^ ^ 9803 * | log block 9804 * payload 9805 */ 9806 /* END CSTYLED */ 9807 evicted = 9808 l2arc_range_check_overlap(start, end, dev->l2ad_hand) || 9809 l2arc_range_check_overlap(start, end, dev->l2ad_evict) || 9810 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) || 9811 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end); 9812 9813 return (start >= dev->l2ad_start && end <= dev->l2ad_end && 9814 asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) && 9815 (!evicted || dev->l2ad_first)); 9816 } 9817 9818 /* 9819 * Inserts ARC buffer header `hdr' into the current L2ARC log block on 9820 * the device. The buffer being inserted must be present in L2ARC. 9821 * Returns B_TRUE if the L2ARC log block is full and needs to be committed 9822 * to L2ARC, or B_FALSE if it still has room for more ARC buffers. 9823 */ 9824 static boolean_t 9825 l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr) 9826 { 9827 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 9828 l2arc_log_ent_phys_t *le; 9829 9830 if (dev->l2ad_log_entries == 0) 9831 return (B_FALSE); 9832 9833 int index = dev->l2ad_log_ent_idx++; 9834 9835 ASSERT3S(index, <, dev->l2ad_log_entries); 9836 ASSERT(HDR_HAS_L2HDR(hdr)); 9837 9838 le = &lb->lb_entries[index]; 9839 bzero(le, sizeof (*le)); 9840 le->le_dva = hdr->b_dva; 9841 le->le_birth = hdr->b_birth; 9842 le->le_daddr = hdr->b_l2hdr.b_daddr; 9843 if (index == 0) 9844 dev->l2ad_log_blk_payload_start = le->le_daddr; 9845 L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr)); 9846 L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr)); 9847 L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr)); 9848 L2BLK_SET_TYPE((le)->le_prop, hdr->b_type); 9849 L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr))); 9850 L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr))); 9851 9852 dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev, 9853 HDR_GET_PSIZE(hdr)); 9854 9855 return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries); 9856 } 9857 9858 /* 9859 * Checks whether a given L2ARC device address sits in a time-sequential 9860 * range. The trick here is that the L2ARC is a rotary buffer, so we can't 9861 * just do a range comparison, we need to handle the situation in which the 9862 * range wraps around the end of the L2ARC device. Arguments: 9863 * bottom -- Lower end of the range to check (written to earlier). 9864 * top -- Upper end of the range to check (written to later). 9865 * check -- The address for which we want to determine if it sits in 9866 * between the top and bottom. 9867 * 9868 * The 3-way conditional below represents the following cases: 9869 * 9870 * bottom < top : Sequentially ordered case: 9871 * <check>--------+-------------------+ 9872 * | (overlap here?) | 9873 * L2ARC dev V V 9874 * |---------------<bottom>============<top>--------------| 9875 * 9876 * bottom > top: Looped-around case: 9877 * <check>--------+------------------+ 9878 * | (overlap here?) | 9879 * L2ARC dev V V 9880 * |===============<top>---------------<bottom>===========| 9881 * ^ ^ 9882 * | (or here?) | 9883 * +---------------+---------<check> 9884 * 9885 * top == bottom : Just a single address comparison. 9886 */ 9887 boolean_t 9888 l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check) 9889 { 9890 if (bottom < top) 9891 return (bottom <= check && check <= top); 9892 else if (bottom > top) 9893 return (check <= top || bottom <= check); 9894 else 9895 return (check == top); 9896 } 9897