# NVMe Multi Process {#nvme_multi_process}

This capability enables the SPDK NVMe driver to support multiple processes accessing the
same NVMe device. The NVMe driver allocates critical structures from shared memory, so
that each process can map that memory and create its own queue pairs or share the admin
queue. There is a limited number of I/O queue pairs per NVMe controller.

The primary motivation for this feature is to support management tools that can attach
to long-running applications, perform some maintenance work or gather information, and
then detach.

# Configuration {#nvme_multi_process_configuration}

DPDK EAL allows different types of processes to be spawned, each with different permissions
on the hugepage memory used by the applications.

There are two types of processes:
1. a primary process, which initializes the shared memory and has full privileges, and
2. a secondary process, which can attach to the primary process by mapping its shared memory
regions and perform NVMe operations, including creating queue pairs.

This feature is enabled by default and is controlled by selecting a value for the shared
memory group ID. This ID is a positive integer, and two applications with the same shared
memory group ID will share memory. The first application with a given shared memory group
ID is considered the primary; all others are secondary.
Example: identical shm_id and non-overlapping core masks
~~~{.sh}
./perf options [AIO device(s)]...
	[-c core mask for I/O submission/completion]
	[-i shared memory group ID]

./perf -q 1 -s 4096 -w randread -c 0x1 -t 60 -i 1
./perf -q 8 -s 131072 -w write -c 0x10 -t 60 -i 1
~~~

# Scalability and Performance {#nvme_multi_process_scalability_performance}

To maximize the I/O bandwidth of an NVMe device, ensure that each application has its own
queue pairs.

The optimal threading model for SPDK is one thread per core, regardless of which process
that thread belongs to in a multi-process environment. To achieve maximum
performance, each thread should also have its own I/O queue pair. Applications that share
memory should be given core masks that do not overlap.

However, admin commands may have some performance impact, because there is only one admin queue
pair per NVMe SSD. The NVMe driver automatically takes a cross-process capable lock
to enable sharing of the admin queue pair. Further, when each process polls the admin
queue for completions, it will only see completions for commands that it originated.

# Limitations {#nvme_multi_process_limitations}

1. Two processes sharing memory may not share any cores in their core masks.
2. If a primary process exits while secondary processes are still running, those processes
will continue to run. However, a new primary process cannot be created.
3. Applications are responsible for coordinating access to logical blocks.

@sa spdk_nvme_probe, spdk_nvme_ctrlr_process_admin_completions