# SPDX-License-Identifier: GPL-2.0-only
#
# Block device driver configuration
#

menuconfig MD
	bool "Multiple devices driver support (RAID and LVM)"
	depends on BLOCK
	select SRCU
	help
	  Support multiple physical spindles through a single logical device.
	  Required for RAID and logical volume management.

if MD

config BLK_DEV_MD
	tristate "RAID support"
	select BLOCK_HOLDER_DEPRECATED if SYSFS
	# BLOCK_LEGACY_AUTOLOAD requirement should be removed
	# after relevant mdadm enhancements - to make "names=yes"
	# the default - are widely available.
	select BLOCK_LEGACY_AUTOLOAD
	help
	  This driver lets you combine several hard disk partitions into one
	  logical block device. This can be used to simply append one
	  partition to another one or to combine several redundant hard disks
	  into a RAID1/4/5 device so as to provide protection against hard
	  disk failures. This is called "Software RAID" since the combining of
	  the partitions is done by the kernel. "Hardware RAID" means that the
	  combining is done by a dedicated controller; if you have such a
	  controller, you do not need to say Y here.

	  More information about Software RAID on Linux is contained in the
	  Software RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also learn
	  where to get the supporting user space utilities raidtools.

	  If unsure, say N.

config MD_AUTODETECT
	bool "Autodetect RAID arrays during kernel boot"
	depends on BLK_DEV_MD=y
	default y
	help
	  If you say Y here, then the kernel will try to autodetect raid
	  arrays as part of its boot process.

	  If you don't use raid and say Y, this autodetection can add
	  several seconds to the boot time due to the various
	  synchronisation steps involved.

	  If unsure, say Y.
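# Illustrative example (a comment only, not part of the kernel build):
# with BLK_DEV_MD enabled and the userspace mdadm tool installed, a
# two-disk RAID-1 array can typically be created like this:
#
#   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
#
# The device names here are hypothetical; see mdadm(8) for details.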
config MD_LINEAR
	tristate "Linear (append) mode (deprecated)"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called linear mode, i.e. it will combine the hard disk
	  partitions by simply appending one to the other.

	  To compile this as a module, choose M here: the module
	  will be called linear.

	  If unsure, say Y.

config MD_RAID0
	tristate "RAID-0 (striping) mode"
	depends on BLK_DEV_MD
	help
	  If you say Y here, then your multiple devices driver will be able to
	  use the so-called raid0 mode, i.e. it will combine the hard disk
	  partitions into one logical device in such a fashion as to fill them
	  up evenly, one chunk here and one chunk there. This will increase
	  the throughput rate if the partitions reside on distinct disks.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  To compile this as a module, choose M here: the module
	  will be called raid0.

	  If unsure, say Y.

config MD_RAID1
	tristate "RAID-1 (mirroring) mode"
	depends on BLK_DEV_MD
	help
	  A RAID-1 set consists of several disk drives which are exact copies
	  of each other. In the event of a mirror failure, the RAID driver
	  will continue to use the operational mirrors in the set, providing
	  an error free MD (multiple device) to the higher levels of the
	  kernel. In a set with N drives, the available space is the capacity
	  of a single drive, and the set protects against a failure of (N - 1)
	  drives.

	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.
	  If you want to use such a RAID-1 set, say Y. To compile this code
	  as a module, choose M here: the module will be called raid1.

	  If unsure, say Y.

config MD_RAID10
	tristate "RAID-10 (mirrored striping) mode"
	depends on BLK_DEV_MD
	help
	  RAID-10 provides a combination of striping (RAID-0) and
	  mirroring (RAID-1) with easier configuration and more flexible
	  layout.
	  Unlike RAID-0, but like RAID-1, RAID-10 requires all devices to
	  be the same size (or at least, only as much as the smallest device
	  will be used).
	  RAID-10 offers a variety of layouts that provide different levels
	  of redundancy and performance.

	  RAID-10 requires mdadm-1.7.0 or later, available at:

	  https://www.kernel.org/pub/linux/utils/raid/mdadm/

	  If unsure, say Y.

config MD_RAID456
	tristate "RAID-4/RAID-5/RAID-6 mode"
	depends on BLK_DEV_MD
	select RAID6_PQ
	select LIBCRC32C
	select ASYNC_MEMCPY
	select ASYNC_XOR
	select ASYNC_PQ
	select ASYNC_RAID6_RECOV
	help
	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.
	  Information about Software RAID on Linux is contained in the
	  Software-RAID mini-HOWTO, available from
	  <https://www.tldp.org/docs.html#howto>. There you will also
	  learn where to get the supporting user space utilities raidtools.

	  If you want to use such a RAID-4/RAID-5/RAID-6 set, say Y. To
	  compile this code as a module, choose M here: the module
	  will be called raid456.

	  If unsure, say Y.

config MD_MULTIPATH
	tristate "Multipath I/O support (deprecated)"
	depends on BLK_DEV_MD
	help
	  MD_MULTIPATH provides a simple multi-path personality for use
	  within the MD framework. It is not under active development. New
	  projects should consider using DM_MULTIPATH, which has more
	  features and more testing.

	  If unsure, say N.

config MD_FAULTY
	tristate "Faulty test module for MD (deprecated)"
	depends on BLK_DEV_MD
	help
	  The "faulty" module allows for a block device that occasionally
	  returns read or write errors. It is useful for testing.

	  If unsure, say N.

config MD_CLUSTER
	tristate "Cluster Support for MD"
	depends on BLK_DEV_MD
	depends on DLM
	default n
	help
	  Clustering support for MD devices. This enables locking and
	  synchronization across multiple systems on the cluster, so all
	  nodes in the cluster can access the MD devices simultaneously.

	  This brings the redundancy (and uptime) of RAID levels across the
	  nodes of the cluster. Currently, it can work with raid1 and raid10
	  (limited support).

	  If unsure, say N.

source "drivers/md/bcache/Kconfig"

config BLK_DEV_DM_BUILTIN
	bool

config BLK_DEV_DM
	tristate "Device mapper support"
	select BLOCK_HOLDER_DEPRECATED if SYSFS
	select BLK_DEV_DM_BUILTIN
	select BLK_MQ_STACKING
	depends on DAX || DAX=n
	help
	  Device-mapper is a low-level volume manager.
	  It works by allowing people to specify mappings for ranges of
	  logical sectors. Various mapping types are available; in addition,
	  people may write their own modules containing custom mappings if
	  they wish.

	  Higher level volume managers such as LVM2 use this driver.

	  To compile this as a module, choose M here: the module will be
	  called dm-mod.

	  If unsure, say N.

config DM_DEBUG
	bool "Device mapper debugging support"
	depends on BLK_DEV_DM
	help
	  Enable this for messages that may help debug device-mapper problems.

	  If unsure, say N.

config DM_BUFIO
	tristate
	depends on BLK_DEV_DM
	help
	  This interface allows you to do buffered I/O on a device and acts
	  as a cache, holding recently-read blocks in memory and performing
	  delayed writes.

config DM_DEBUG_BLOCK_MANAGER_LOCKING
	bool "Block manager locking"
	depends on DM_BUFIO
	help
	  Block manager locking can catch various metadata corruption issues.

	  If unsure, say N.

config DM_DEBUG_BLOCK_STACK_TRACING
	bool "Keep stack trace of persistent data block lock holders"
	depends on STACKTRACE_SUPPORT && DM_DEBUG_BLOCK_MANAGER_LOCKING
	select STACKTRACE
	help
	  Enable this for messages that may help debug problems with the
	  block manager locking used by thin provisioning and caching.

	  If unsure, say N.

config DM_BIO_PRISON
	tristate
	depends on BLK_DEV_DM
	help
	  Some bio locking schemes used by other device-mapper targets
	  including thin provisioning.

source "drivers/md/persistent-data/Kconfig"

config DM_UNSTRIPED
	tristate "Unstriped target"
	depends on BLK_DEV_DM
	help
	  Unstripes I/O so it is issued solely on a single drive in a HW
	  RAID0 or dm-striped target.
config DM_CRYPT
	tristate "Crypt target support"
	depends on BLK_DEV_DM
	depends on (ENCRYPTED_KEYS || ENCRYPTED_KEYS=n)
	depends on (TRUSTED_KEYS || TRUSTED_KEYS=n)
	select CRYPTO
	select CRYPTO_CBC
	select CRYPTO_ESSIV
	help
	  This device-mapper target allows you to create a device that
	  transparently encrypts the data on it. You'll need to activate
	  the ciphers you're going to use in the cryptoapi configuration.

	  For further information on dm-crypt and userspace tools see:
	  <https://gitlab.com/cryptsetup/cryptsetup/wikis/DMCrypt>

	  To compile this code as a module, choose M here: the module will
	  be called dm-crypt.

	  If unsure, say N.

config DM_SNAPSHOT
	tristate "Snapshot target"
	depends on BLK_DEV_DM
	select DM_BUFIO
	help
	  Allow volume managers to take writable snapshots of a device.

config DM_THIN_PROVISIONING
	tristate "Thin provisioning target"
	depends on BLK_DEV_DM
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  Provides thin provisioning and snapshots that share a data store.

config DM_CACHE
	tristate "Cache target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  dm-cache attempts to improve performance of a block device by
	  moving frequently used data to a smaller, higher performance
	  device. Different 'policy' plugins can be used to change the
	  algorithms used to select which blocks are promoted, demoted,
	  cleaned etc. It supports writeback and writethrough modes.

config DM_CACHE_SMQ
	tristate "Stochastic MQ Cache Policy (EXPERIMENTAL)"
	depends on DM_CACHE
	default y
	help
	  A cache policy that uses a multiqueue ordered by recent hits
	  to select which blocks should be promoted and demoted.
	  This is meant to be a general purpose policy. It prioritises
	  reads over writes.
	  This SMQ policy (vs MQ) offers the promise of less memory
	  utilization, improved performance and increased adaptability in
	  the face of changing workloads.

config DM_WRITECACHE
	tristate "Writecache target"
	depends on BLK_DEV_DM
	help
	  The writecache target caches writes on persistent memory or SSD.
	  It is intended for databases or other programs that need extremely
	  low commit latency.

	  The writecache target doesn't cache reads because reads are supposed
	  to be cached in standard RAM.

config DM_EBS
	tristate "Emulated block size target (EXPERIMENTAL)"
	depends on BLK_DEV_DM && !HIGHMEM
	select DM_BUFIO
	help
	  dm-ebs emulates a smaller logical block size on backing devices
	  with larger ones (e.g. 512 byte sectors on 4K native disks).

config DM_ERA
	tristate "Era target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	select DM_BIO_PRISON
	help
	  dm-era tracks which parts of a block device are written to
	  over time. Useful for maintaining cache coherency when using
	  vendor snapshots.

config DM_CLONE
	tristate "Clone target (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	default n
	select DM_PERSISTENT_DATA
	help
	  dm-clone produces a one-to-one copy of an existing, read-only source
	  device into a writable destination device. The cloned device is
	  visible/mountable immediately and the copy of the source device to the
	  destination device happens in the background, in parallel with user
	  I/O.

	  If unsure, say N.

config DM_MIRROR
	tristate "Mirror target"
	depends on BLK_DEV_DM
	help
	  Allow volume managers to mirror logical volumes, also
	  needed for live data migration tools such as 'pvmove'.
config DM_LOG_USERSPACE
	tristate "Mirror userspace logging"
	depends on DM_MIRROR && NET
	select CONNECTOR
	help
	  The userspace logging module provides a mechanism for
	  relaying the dm-dirty-log API to userspace. Log designs
	  which are more suited to userspace implementation (e.g.
	  shared storage logs) or experimental logs can be implemented
	  by leveraging this framework.

config DM_RAID
	tristate "RAID 1/4/5/6/10 target"
	depends on BLK_DEV_DM
	select MD_RAID0
	select MD_RAID1
	select MD_RAID10
	select MD_RAID456
	select BLK_DEV_MD
	help
	  A dm target that supports RAID1, RAID10, RAID4, RAID5 and RAID6
	  mappings.

	  A RAID-5 set of N drives with a capacity of C MB per drive provides
	  the capacity of C * (N - 1) MB, and protects against a failure
	  of a single drive. For a given sector (row) number, (N - 1) drives
	  contain data sectors, and one drive contains the parity protection.
	  For a RAID-4 set, the parity blocks are present on a single drive,
	  while a RAID-5 set distributes the parity across the drives in one
	  of the available parity distribution methods.

	  A RAID-6 set of N drives with a capacity of C MB per drive
	  provides the capacity of C * (N - 2) MB, and protects
	  against a failure of any two drives. For a given sector
	  (row) number, (N - 2) drives contain data sectors, and two
	  drives contain two independent redundancy syndromes. Like
	  RAID-5, RAID-6 distributes the syndromes across the drives
	  in one of the available parity distribution methods.

config DM_ZERO
	tristate "Zero target"
	depends on BLK_DEV_DM
	help
	  A target that discards writes, and returns all zeroes for
	  reads. Useful in some recovery situations.
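# Worked example of the capacity formulas above (illustrative only):
# four drives of C = 2000 MB each give 3 * 2000 = 6000 MB of usable
# capacity in RAID-5 (one drive's worth of parity, one failure
# tolerated) and 2 * 2000 = 4000 MB in RAID-6 (two syndromes, any two
# failures tolerated).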
config DM_MULTIPATH
	tristate "Multipath target"
	depends on BLK_DEV_DM
	# nasty syntax but means make DM_MULTIPATH independent
	# of SCSI_DH if the latter isn't defined but if
	# it is, DM_MULTIPATH must depend on it. We get a build
	# error if SCSI_DH=m and DM_MULTIPATH=y
	depends on !SCSI_DH || SCSI
	help
	  Allow volume managers to support multipath hardware.

config DM_MULTIPATH_QL
	tristate "I/O Path Selector based on the number of in-flight I/Os"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path with the least number of in-flight I/Os.

	  If unsure, say N.

config DM_MULTIPATH_ST
	tristate "I/O Path Selector based on the service time"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time.

	  If unsure, say N.

config DM_MULTIPATH_HST
	tristate "I/O Path Selector based on historical service time"
	depends on DM_MULTIPATH
	help
	  This path selector is a dynamic load balancer which selects
	  the path expected to complete the incoming I/O in the shortest
	  time by comparing estimated service time (based on historical
	  service time).

	  If unsure, say N.

config DM_MULTIPATH_IOA
	tristate "I/O Path Selector based on CPU submission"
	depends on DM_MULTIPATH
	help
	  This path selector selects the path based on the CPU the I/O is
	  executed on and the CPU-to-path mapping set up at path addition
	  time.

	  If unsure, say N.

config DM_DELAY
	tristate "I/O delaying target"
	depends on BLK_DEV_DM
	help
	  A target that delays reads and/or writes and can send
	  them to different devices. Useful for testing.

	  If unsure, say N.
config DM_DUST
	tristate "Bad sector simulation target"
	depends on BLK_DEV_DM
	help
	  A target that simulates bad sector behavior.
	  Useful for testing.

	  If unsure, say N.

config DM_INIT
	bool "DM \"dm-mod.create=\" parameter support"
	depends on BLK_DEV_DM=y
	help
	  Enable the "dm-mod.create=" parameter to create mapped devices at
	  init time. This option is useful to allow mounting rootfs without
	  requiring an initramfs.
	  See Documentation/admin-guide/device-mapper/dm-init.rst for the
	  dm-mod.create="..." format.

	  If unsure, say N.

config DM_UEVENT
	bool "DM uevents"
	depends on BLK_DEV_DM
	help
	  Generate udev events for DM events.

config DM_FLAKEY
	tristate "Flakey target"
	depends on BLK_DEV_DM
	help
	  A target that intermittently fails I/O for debugging purposes.

config DM_VERITY
	tristate "Verity target support"
	depends on BLK_DEV_DM
	select CRYPTO
	select CRYPTO_HASH
	select DM_BUFIO
	help
	  This device-mapper target creates a read-only device that
	  transparently validates the data on one underlying device against
	  a pre-generated tree of cryptographic checksums stored on a second
	  device.

	  You'll need to activate the digests you're going to use in the
	  cryptoapi configuration.

	  To compile this code as a module, choose M here: the module will
	  be called dm-verity.

	  If unsure, say N.

config DM_VERITY_VERIFY_ROOTHASH_SIG
	def_bool n
	bool "Verity data device root hash signature verification support"
	depends on DM_VERITY
	select SYSTEM_DATA_VERIFICATION
	help
	  Add the ability for a dm-verity device to be validated if the
	  pre-generated tree of cryptographic checksums passed has a pkcs#7
	  signature file that can validate the roothash of the tree.

	  By default, rely on the builtin trusted keyring.

	  If unsure, say N.
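# Illustrative dm-verity usage (a comment only, not part of the kernel
# build), assuming the userspace veritysetup tool is installed:
#
#   veritysetup format /dev/sdX /dev/sdY
#   veritysetup open /dev/sdX vroot /dev/sdY <root hash>
#
# "format" writes the hash tree to the second device and prints the
# root hash; "open" creates the verified device. The device names and
# the "vroot" name are hypothetical.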
config DM_VERITY_VERIFY_ROOTHASH_SIG_SECONDARY_KEYRING
	bool "Verity data device root hash signature verification with secondary keyring"
	depends on DM_VERITY_VERIFY_ROOTHASH_SIG
	depends on SECONDARY_TRUSTED_KEYRING
	help
	  Rely on the secondary trusted keyring to verify dm-verity signatures.

	  If unsure, say N.

config DM_VERITY_FEC
	bool "Verity forward error correction support"
	depends on DM_VERITY
	select REED_SOLOMON
	select REED_SOLOMON_DEC8
	help
	  Add forward error correction support to dm-verity. This option
	  makes it possible to use pre-generated error correction data to
	  recover from corrupted blocks.

	  If unsure, say N.

config DM_SWITCH
	tristate "Switch target support (EXPERIMENTAL)"
	depends on BLK_DEV_DM
	help
	  This device-mapper target creates a device that supports an arbitrary
	  mapping of fixed-size regions of I/O across a fixed set of paths.
	  The path used for any specific region can be switched dynamically
	  by sending the target a message.

	  To compile this code as a module, choose M here: the module will
	  be called dm-switch.

	  If unsure, say N.

config DM_LOG_WRITES
	tristate "Log writes target support"
	depends on BLK_DEV_DM
	help
	  This device-mapper target takes two devices, one device to use
	  normally, one to log all write operations done to the first device.
	  This is for use by file system developers wishing to verify that
	  their fs is writing a consistent file system at all times by allowing
	  them to replay the log in a variety of ways and to check the
	  contents.

	  To compile this code as a module, choose M here: the module will
	  be called dm-log-writes.

	  If unsure, say N.
config DM_INTEGRITY
	tristate "Integrity target support"
	depends on BLK_DEV_DM
	select BLK_DEV_INTEGRITY
	select DM_BUFIO
	select CRYPTO
	select CRYPTO_SKCIPHER
	select ASYNC_XOR
	select DM_AUDIT if AUDIT
	help
	  This device-mapper target emulates a block device that has
	  additional per-sector tags that can be used for storing
	  integrity information.

	  This integrity target is used with the dm-crypt target to
	  provide authenticated disk encryption or it can be used
	  standalone.

	  To compile this code as a module, choose M here: the module will
	  be called dm-integrity.

config DM_ZONED
	tristate "Drive-managed zoned block device target support"
	depends on BLK_DEV_DM
	depends on BLK_DEV_ZONED
	select CRC32
	help
	  This device-mapper target takes a host-managed or host-aware zoned
	  block device and exposes most of its capacity as a regular block
	  device (drive-managed zoned block device) without any write
	  constraints. This is mainly intended for use with file systems that
	  do not natively support zoned block devices but still want to
	  benefit from the increased capacity offered by SMR disks. Other uses
	  by applications using raw block devices (for example object stores)
	  are also possible.

	  To compile this code as a module, choose M here: the module will
	  be called dm-zoned.

	  If unsure, say N.

config DM_AUDIT
	bool "DM audit events"
	depends on AUDIT
	help
	  Generate audit events for device-mapper.

	  Enables audit logging of several security relevant events in
	  particular device-mapper targets, especially the integrity target.

endif # MD