# drivers/nvme/host/Kconfig
config NVME_CORE
	tristate

config BLK_DEV_NVME
	tristate "NVM Express block device"
	depends on PCI && BLOCK
	select NVME_CORE
	---help---
	  The NVM Express driver is for solid state drives directly
	  connected to the PCI or PCI Express bus. If you know you
	  don't have one of these, it is safe to answer N.

	  To compile this driver as a module, choose M here: the
	  module will be called nvme.

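# Illustrative only, not part of the upstream Kconfig text: a minimal sketch
# of a kernel .config fragment for a machine with a directly attached PCIe
# NVMe drive, assuming the driver is wanted as the loadable module "nvme" and
# that no fabrics transports are needed.
#
#   CONFIG_NVME_CORE=m
#   CONFIG_BLK_DEV_NVME=m
#
# Setting BLK_DEV_NVME=m pulls in NVME_CORE automatically via "select".
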
config NVME_MULTIPATH
	bool "NVMe multipath support"
	depends on NVME_CORE
	---help---
	  This option enables support for multipath access to NVMe
	  subsystems. If this option is enabled, only a single
	  /dev/nvmeXnY device will show up for each NVMe namespace,
	  even if it is accessible through multiple controllers.

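# Illustrative only, not upstream text: a sketch of the effect of this option,
# assuming one namespace that is reachable through two controllers of the same
# NVMe subsystem.
#
#   CONFIG_NVME_MULTIPATH=y        (bool option: y or n, no module form)
#
# With the option disabled, the namespace appears once per controller (for
# example /dev/nvme0n1 and /dev/nvme1n1); with it enabled, the kernel exposes
# a single block device per namespace and handles path selection internally.
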
config NVME_FABRICS
	tristate

config NVME_RDMA
	tristate "NVM Express over Fabrics RDMA host driver"
	depends on INFINIBAND && BLOCK
	select NVME_CORE
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the RDMA (InfiniBand, RoCE, iWARP) transport. This allows you
	  to use remote block devices exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli tool
	  from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.

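# Illustrative only, not upstream text: a sketch of connecting to a remote
# NVMe over Fabrics target with nvme-cli once this driver is loaded. The
# address 192.168.1.10, port 4420 and the subsystem NQN are placeholders.
#
#   nvme discover -t rdma -a 192.168.1.10 -s 4420
#   nvme connect  -t rdma -a 192.168.1.10 -s 4420 -n nqn.2016-06.io.example:testsubsys
#
# On success, a new /dev/nvmeXnY block device appears for each namespace
# exported by the target.
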
config NVME_FC
	tristate "NVM Express over Fabrics FC host driver"
	depends on BLOCK
	depends on HAS_DMA
	select NVME_CORE
	select NVME_FABRICS
	select SG_POOL
	help
	  This provides support for the NVMe over Fabrics protocol using
	  the FC transport. This allows you to use remote block devices
	  exported using the NVMe protocol set.

	  To configure an NVMe over Fabrics controller, use the nvme-cli tool
	  from https://github.com/linux-nvme/nvme-cli.

	  If unsure, say N.
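
# Illustrative only, not upstream text: a sketch of an nvme-cli connect call
# over the FC transport. FC addresses are WWNN/WWPN pairs; every value below
# (the target and host port names and the subsystem NQN) is a placeholder that
# must be replaced with the ones reported by the local HBA and the target.
#
#   nvme connect -t fc \
#        -a nn-0x20000090fa000001:pn-0x10000090fa000001 \
#        -w nn-0x20000090fa000002:pn-0x10000090fa000002 \
#        -n nqn.2016-06.io.example:fcsubsys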