# vhost Getting Started Guide {#vhost_getting_started}

The Storage Performance Development Kit vhost application is named "vhost".
This application extends SPDK to present virtio-scsi controllers to QEMU-based
VMs and process I/O submitted to devices attached to those controllers.

# Prerequisites {#vhost_prereqs}

The base SPDK build instructions are located in README.md in the SPDK main
directory. This guide assumes familiarity with building SPDK using the default
options.

## Supported Guest Operating Systems
The guest OS must contain virtio drivers. The SPDK vhost target has been tested
with Ubuntu 16.04, Fedora 25, and Windows 2012 R2.
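
If in doubt, you can check from inside a Linux guest whether the virtio-scsi
driver is available. This is only a minimal sketch and assumes a typical
distribution layout (loadable modules, kernel config under `/boot`):

~~~
# Inside the guest: look for the virtio SCSI driver loaded as a module...
lsmod | grep virtio_scsi
# ...or built into the kernel (path assumes a Debian/Ubuntu-style config file)
grep CONFIG_SCSI_VIRTIO /boot/config-$(uname -r)
~~~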

# Building

## SPDK
The vhost target is built by default. To enable/disable building the vhost
target, either modify the following line in the CONFIG file in the root directory:

~~~
CONFIG_VHOST?=y
~~~

Or specify on the command line:

~~~
make CONFIG_VHOST=y
~~~

Once built, the binary will be at `app/vhost/vhost`.

## QEMU

Vhost functionality is dependent on QEMU patches to enable vhost-scsi in
userspace - those patches are currently working their way through the QEMU
mailing list, but temporary patches to enable this functionality are available
in the spdk branch at https://github.com/spdk/qemu.
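
A minimal sketch of fetching and building that branch follows; the build
directory layout and configure options are only an example (chosen to match the
QEMU invocation shown later in this guide), so adjust them to your environment:

~~~
git clone -b spdk https://github.com/spdk/qemu
cd qemu && mkdir build && cd build
../configure --target-list=x86_64-softmmu --enable-kvm
make -j$(nproc)
~~~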

# Configuration {#vhost_config}

## SPDK
A vhost-specific configuration file is used to configure the SPDK vhost
target. A fully documented example configuration file is located at
`etc/spdk/vhost.conf.in`. This file defines the following:

### Storage Backends
Storage backends are block devices which will be exposed as SCSI LUNs on
devices attached to the vhost-scsi controller. SPDK supports several different
types of storage backends, including NVMe, Linux AIO, malloc ramdisk, and Ceph
RBD. Refer to @ref bdev_getting_started for additional information on
specifying storage backends in the configuration file.
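
For illustration only, a malloc ramdisk backend might be declared as sketched
below. The exact section names and parameters are documented in
@ref bdev_getting_started and in `etc/spdk/vhost.conf.in`, so treat this as a
sketch rather than a reference:

~~~
[Malloc]
  # Create one 64 MB ramdisk block device (typically exposed as Malloc0)
  NumberOfLuns 1
  LunSizeInMB 64
~~~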

### Mappings Between SCSI Controllers and Storage Backends
The vhost target exposes SCSI controllers to the virtual machines. Each device
on a vhost controller is associated with an SPDK block device, and the
configuration file defines those associations. The block device to Dev mappings
are specified in the configuration file as:

~~~
[VhostScsiX]
  Name vhost.X    # Name of vhost socket
  Dev 0 BackendX  # "BackendX" is a block device name from previous
                  # sections in the config file
  Dev 1 BackendY
  ...
  Dev n BackendN
  #Cpumask 0x1    # Optional parameter defining which core the controller uses
~~~
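
As a concrete, purely illustrative example, a controller named `vhost.0` that
exposes the `Malloc0` ramdisk sketched above as LUN 0 could look like:

~~~
[VhostScsi0]
  Name vhost.0
  Dev 0 Malloc0
~~~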

### Vhost Sockets
Userspace vhost uses UNIX domain sockets for communication between QEMU
and the vhost target. Each vhost controller is associated with a UNIX domain
socket file whose filename is equal to the Name argument in the configuration
file. Sockets are created in the current directory when the SPDK vhost target
is started.

### Core Affinity Configuration
The vhost target can be restricted to run on certain cores by specifying a
ReactorMask. The default is to run the vhost target on core 0 only. For NUMA
systems, it is essential to run the vhost target with cores on each CPU socket
to achieve optimal performance.

The core each controller uses can be set with the optional Cpumask parameter
in the configuration file. For NUMA systems, the Cpumask should specify cores
on the same CPU socket as the controller's associated VM.
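
The snippet below is a sketch for a hypothetical two-socket system where cores
0-3 sit on socket 0 and cores 4-7 on socket 1. Where exactly ReactorMask is set
(a [Global] config section or a command line option) can differ between SPDK
versions, so check `etc/spdk/vhost.conf.in` for your tree; the backend names
`Malloc0`/`Malloc1` are likewise just placeholders:

~~~
[Global]
  # Run vhost reactors on cores 0 and 4 (one core per CPU socket)
  ReactorMask 0x11

[VhostScsi0]
  Name vhost.0
  Dev 0 Malloc0
  Cpumask 0x1   # pin to core 0, same socket as this controller's VM

[VhostScsi1]
  Name vhost.1
  Dev 0 Malloc1
  Cpumask 0x10  # pin to core 4 on the other socket
~~~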

## QEMU

Userspace vhost-scsi adds the following command line option for QEMU:
~~~
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
~~~

In order to start QEMU with vhost you need to specify the following options:

 - The socket which QEMU will use for vhost communication with SPDK:
~~~
-chardev socket,id=char0,path=/path/to/vhost/socket
~~~

 - Hugepages to share memory between the VM and the vhost target:
~~~
-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
~~~
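
Note that the memory-backend-file option requires hugepages to actually be
reserved on the host. SPDK's `scripts/setup.sh` reserves hugepages when it is
run; alternatively, a common manual sketch (assuming 2 MB hugepages mounted at
`/dev/hugepages`) is:

~~~
# Reserve 1024 x 2 MB hugepages (2 GB), enough for the 1G memory backend above
echo 1024 | sudo tee /proc/sys/vm/nr_hugepages
~~~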

# Running Vhost Target
To get started, the following example is usually sufficient:
~~~
app/vhost/vhost -c /path/to/vhost.conf
~~~

A full list of command line arguments to vhost can be obtained by:
~~~
app/vhost/vhost -h
~~~

## Example
Assume that QEMU and SPDK are located in the `qemu` and `spdk` directories,
respectively.
~~~
./qemu/build/x86_64-softmmu/qemu-system-x86_64 \
  -m 1024 \
  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on \
  -numa node,memdev=mem \
  -drive file=$PROJECTS/os.qcow2,if=none,id=disk \
  -device ide-hd,drive=disk,bootindex=0 \
  -chardev socket,id=char0,path=./spdk/vhost.0 \
  -device vhost-user-scsi-pci,id=scsi0,chardev=char0 \
  --enable-kvm
~~~
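
Once the VM boots, the SPDK-backed LUNs should show up as ordinary SCSI disks
inside the guest. A quick sanity check, assuming standard Linux tools in the
guest:

~~~
# Inside the guest
lspci | grep -i virtio   # the vhost-user-scsi-pci device should appear as a Virtio SCSI controller
lsblk                    # each Dev entry from the vhost config should appear as a disk (exact names vary)
~~~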

# Experimental features {#vhost_experimental}

## Multi-Queue Block Layer (blk_mq)
It is possible to use the multiqueue feature in vhost. To enable it on Linux,
the kernel options inside the virtual machine must be modified.

Instructions below are for Ubuntu OS:
1. `vi /etc/default/grub`
2. Make sure mq is enabled:
   `GRUB_CMDLINE_LINUX="scsi_mod.use_blk_mq=1"`
3. `sudo update-grub`
4. Reboot the virtual machine; the setting can be verified as shown below.
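
After the guest reboots, one way to verify that the multi-queue block layer is
actually in use is to read the corresponding sysfs parameter (the path assumes
a kernel where scsi-mq is a module parameter):

~~~
cat /sys/module/scsi_mod/parameters/use_blk_mq
# Prints "Y" when scsi-mq is enabled
~~~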

To achieve better performance, make sure to increase the number of cores
assigned to the VM, for example as shown below.
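
For example, adding `-smp 4` to the QEMU command line from the example above
gives the guest four virtual CPUs (the core count here is only an
illustration):

~~~
-smp 4
~~~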

# Known bugs and limitations {#vhost_bugs}

## VFIO driver is not supported
Currently, the VFIO driver is not supported by the vhost library. Please use a
UIO driver when running the SPDK vhost app; otherwise, any I/O to a physical
device (e.g. IOAT or NVMe) will fail. Support for vhost with VFIO is in
progress.
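
As a rough sketch, SPDK's `scripts/setup.sh` helper can be used to unbind NVMe
devices from the kernel driver before starting vhost; whether it binds them to
a UIO or VFIO driver depends on your system and SPDK version, so verify the
resulting driver binding before use:

~~~
sudo modprobe uio_pci_generic   # make sure a UIO driver is available
sudo scripts/setup.sh           # unbind NVMe devices from the kernel nvme driver
~~~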

## Hot plug is not supported
Hot plug is not supported in vhost yet; the event queue path does not handle
that case. While hot plug will simply be ignored, hot removal might cause a
segmentation fault.