# vhost Getting Started Guide {#vhost_getting_started}

The Storage Performance Development Kit vhost application is named "vhost".
This application extends SPDK to present virtio-scsi controllers to QEMU-based
VMs and process I/O submitted to devices attached to those controllers.

# Prerequisites {#vhost_prereqs}

The base SPDK build instructions are located in README.md in the SPDK main directory.
This guide assumes familiarity with building SPDK using the default options.

## Supported Guest Operating Systems
The guest OS must contain virtio drivers. The SPDK vhost target has been tested
with Ubuntu 16.04, Fedora 25, and Windows 2012 R2.

# Building

## SPDK
The vhost target is built by default. To enable or disable building the vhost
target, either modify the following line in the CONFIG file in the root directory:

~~~
CONFIG_VHOST?=y
~~~

or specify it on the command line:

~~~
make CONFIG_VHOST=y
~~~

Once built, the binary will be at `app/vhost/vhost`.

## QEMU

Vhost functionality depends on QEMU patches that enable vhost-scsi in
userspace. Those patches are currently working their way through the QEMU
mailing list, but temporary patches to enable this functionality are available
in the spdk branch at https://github.com/spdk/qemu.

# Configuration {#vhost_config}

## SPDK
A `vhost`-specific configuration file is used to configure the SPDK vhost
target. A fully documented example configuration file is located at
`etc/spdk/vhost.conf.in`. This file defines the following:

### Storage Backends
Storage backends are block devices which will be exposed as SCSI LUNs on
devices attached to the vhost-scsi controller. SPDK supports several different
types of storage backends, including NVMe, Linux AIO, malloc ramdisk, and Ceph
RBD. Refer to @ref bdev_getting_started for additional information on
specifying storage backends in the configuration file.
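
For example, a malloc ramdisk backend could be declared as follows. This is a
minimal sketch using SPDK's legacy INI-style section names; see
`etc/spdk/vhost.conf.in` for the authoritative syntax. The resulting block
devices are named `Malloc0`, `Malloc1`, and so on:

~~~
[Malloc]
NumberOfLuns 1   # create one malloc ramdisk, exposed as block device "Malloc0"
LunSizeInMB 64   # each ramdisk is 64 MB
~~~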

### Mappings Between SCSI Controllers and Storage Backends
The vhost target exposes SCSI controllers to the virtual machines.
Each device in the vhost controller is associated with an SPDK block device,
and the configuration file defines those associations. The block device to
Dev mappings are specified in the configuration file as:

~~~
[VhostScsiX]
Name vhost.X    # Name of the vhost socket
Dev 0 BackendX  # "BackendX" is a block device name from an earlier
                # section of the config file
Dev 1 BackendY
...
Dev n BackendN
#Cpumask 0x1    # Optional parameter defining which core the controller uses
~~~
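
For instance, a controller exposing a single malloc ramdisk (assuming a
`Malloc0` block device declared as in the storage backend sketch above) could
look like:

~~~
[VhostScsi0]
Name vhost.0    # socket will be created as ./vhost.0
Dev 0 Malloc0   # attach block device Malloc0 as SCSI Dev 0
~~~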

### Vhost Sockets
Userspace vhost uses UNIX domain sockets for communication between QEMU
and the vhost target. Each vhost controller is associated with a UNIX domain
socket file whose filename is equal to the Name argument in the configuration
file. Sockets are created in the current directory when starting the SPDK
vhost target.
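
For example, with `Name vhost.0`, the socket can be checked after the target
starts (run from the directory in which vhost was launched):

~~~
ls -l ./vhost.0
~~~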

### Core Affinity Configuration
The vhost target can be restricted to run on certain cores by specifying a
ReactorMask. The default is to run the vhost target on core 0 only. For NUMA
systems it is essential to run vhost with cores on each socket to achieve
optimal performance.

The core each controller uses can be set with the optional Cpumask parameter
in the configuration file. For NUMA systems, the Cpumask should specify cores
on the same CPU socket as its associated VM.
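
A minimal sketch of a NUMA-aware layout, assuming ReactorMask is set in the
`[Global]` section as in the example configuration file, and assuming core 0
sits on socket 0 and core 4 on socket 1:

~~~
[Global]
ReactorMask 0x11   # run reactors on core 0 (socket 0) and core 4 (socket 1)

[VhostScsi0]
Name vhost.0
Cpumask 0x1        # handled by core 0, same socket as this controller's VM
Dev 0 Malloc0

[VhostScsi1]
Name vhost.1
Cpumask 0x10       # handled by core 4, same socket as this controller's VM
Dev 0 Malloc1
~~~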

## QEMU

Userspace vhost-scsi adds the following command line option for QEMU:
~~~
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
~~~

To start QEMU with vhost, the following options also need to be specified:

- A socket, which QEMU will use for vhost communication with SPDK:
~~~
-chardev socket,id=char0,path=/path/to/vhost/socket
~~~

- Hugepages, to share memory between the VM and the vhost target:
~~~
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
~~~
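
The `mem-path` must point to a mounted hugetlbfs with enough pages reserved.
A typical Linux setup looks like the following (the page count is an example;
adjust it to the VM memory size):

~~~
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages   # reserve 1024 x 2MB hugepages
sudo mkdir -p /dev/hugepages
sudo mount -t hugetlbfs nodev /dev/hugepages     # mount if not already mounted
~~~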

# Running Vhost Target
To get started, the following example is usually sufficient:
~~~
app/vhost/vhost -c /path/to/vhost.conf
~~~

A full list of command line arguments to vhost can be obtained by:
~~~
app/vhost/vhost -h
~~~
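
Note that the vhost target runs on top of DPDK and therefore needs hugepages
reserved before startup. A minimal sketch, assuming SPDK's standard
`scripts/setup.sh` helper is used to reserve hugepages and rebind devices:

~~~
sudo scripts/setup.sh
app/vhost/vhost -c /path/to/vhost.conf
~~~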

## Example
Assume that QEMU and SPDK are located in the `qemu` and `spdk` directories,
respectively.
~~~
./qemu/build/x86_64-softmmu/qemu-system-x86_64 \
  -m 1024 \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem \
  -drive file=$PROJECTS/os.qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=char0,path=./spdk/vhost.0 \
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
  --enable-kvm
~~~

# Experimental features {#vhost_experimental}

## Multi-Queue Block Layer (blk_mq)
It is possible to use the multi-queue feature in vhost. To enable it on Linux,
the kernel boot options inside the virtual machine must be modified.

Instructions below are for Ubuntu; a verification sketch follows the list:
1. `vi /etc/default/grub`
2. Make sure mq is enabled:
   `GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"`
3. `sudo update-grub`
4. Reboot the virtual machine
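
After the reboot, it can be verified that blk-mq is active for SCSI devices.
One minimal check, assuming a guest kernel that exposes the `scsi_mod` module
parameter via sysfs:

~~~
cat /sys/module/scsi_mod/parameters/use_blk_mq   # prints Y when blk-mq is enabled
~~~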

To achieve better performance, make sure to increase the number of cores
assigned to the VM.

# Known bugs and limitations {#vhost_bugs}

## VFIO driver is not supported
Currently, the VFIO driver is not supported by the vhost library. Please use
the UIO driver when running the SPDK vhost app; otherwise any I/O to a physical
device (e.g. IOAT or NVMe) will fail. Support for vhost with VFIO is in
progress.
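
A minimal sketch of rebinding an NVMe device from its kernel driver to
`uio_pci_generic` via sysfs (the PCI address `0000:01:00.0` is an example;
SPDK's `scripts/setup.sh` can automate this step):

~~~
sudo modprobe uio_pci_generic
# detach the device from its current driver (e.g. nvme or vfio-pci)
echo 0000:01:00.0 | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver/unbind
# force the device to bind to uio_pci_generic, then reprobe
echo uio_pci_generic | sudo tee /sys/bus/pci/devices/0000:01:00.0/driver_override
echo 0000:01:00.0 | sudo tee /sys/bus/pci/drivers_probe
~~~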

## Hot plug is not supported
Hot plug is not supported in vhost yet, as the event queue path does not handle
that case. While a hot plug will simply be ignored, a hot removal might cause a
segmentation fault.