# NVMe Multi Process {#nvme_multi_process}

This capability enables the SPDK NVMe driver to support multiple processes accessing the
same NVMe device. The NVMe driver allocates critical structures from shared memory, so
that each process can map that memory and create its own queue pairs or share the admin
queue. There is a limited number of I/O queue pairs per NVMe controller.

The primary motivation for this feature is to support management tools that can attach
to long running applications, perform some maintenance work or gather information, and
then detach.
11 | ||
12 | # Configuration {#nvme_multi_process_configuration} | |
13 | ||
14 | DPDK EAL allows different types of processes to be spawned, each with different permissions | |
15 | on the hugepage memory used by the applications. | |
16 | ||
17 | There are two types of processes: | |
18 | 1. a primary process which initializes the shared memory and has full privileges and | |
19 | 2. a secondary process which can attach to the primary process by mapping its shared memory | |
20 | regions and perform NVMe operations including creating queue pairs. | |
21 | ||
22 | This feature is enabled by default and is controlled by selecting a value for the shared | |
23 | memory group ID. This ID is a positive integer and two applications with the same shared | |
24 | memory group ID will share memory. The first application with a given shared memory group | |
25 | ID will be considered the primary and all others secondary. | |
26 | ||
27 | Example: identical shm_id and non-overlapping core masks | |
28 | ~~~{.sh} | |
29 | ./perf options [AIO device(s)]... | |
30 | [-c core mask for I/O submission/completion] | |
31 | [-i shared memory group ID] | |
32 | ||
33 | ./perf -q 1 -s 4096 -w randread -c 0x1 -t 60 -i 1 | |
34 | ./perf -q 8 -s 131072 -w write -c 0x10 -t 60 -i 1 | |
35 | ~~~ | |
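
In C code, the shared memory group ID can be supplied through `spdk_env_opts` before the
environment is initialized. A minimal sketch (the application name is a placeholder and
error handling is abbreviated):

~~~{.c}
#include <stdio.h>

#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "multi_process_example";	/* placeholder application name */
	opts.shm_id = 1;	/* same value in every cooperating process */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}

	/* The first process started with shm_id 1 initializes the shared memory
	 * (primary); any process started later with the same shm_id attaches to
	 * it (secondary). NVMe controllers can then be probed as usual, e.g.
	 * with spdk_nvme_probe(). */
	return 0;
}
~~~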
36 | ||
37 | # Scalability and Performance {#nvme_multi_process_scalability_performance} | |
38 | ||
39 | To maximize the I/O bandwidth of an NVMe device, ensure that each application has its own | |
40 | queue pairs. | |
41 | ||
42 | The optimal threading model for SPDK is one thread per core, regardless of which processes | |
43 | that thread belongs to in the case of multi-process environment. To achieve maximum | |
44 | performance, each thread should also have its own I/O queue pair. Applications that share | |
45 | memory should be given core masks that do not overlap. | |
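
For example, each I/O thread can allocate a dedicated queue pair on the shared controller,
so submissions and completions need no cross-thread locking. A sketch, assuming `ctrlr`
was obtained from the attach callback passed to `spdk_nvme_probe()`:

~~~{.c}
#include <stddef.h>
#include <stdio.h>

#include "spdk/nvme.h"

/* Called once on each I/O thread. The controller is shared across processes,
 * but the returned queue pair belongs to this thread alone. */
static struct spdk_nvme_qpair *
alloc_thread_qpair(struct spdk_nvme_ctrlr *ctrlr)
{
	struct spdk_nvme_qpair *qpair;

	/* NULL/0 requests the default I/O queue pair options. */
	qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
	if (qpair == NULL) {
		fprintf(stderr, "spdk_nvme_ctrlr_alloc_io_qpair() failed\n");
	}
	return qpair;
}
~~~

Each thread then submits I/O on its own queue pair and polls it with
`spdk_nvme_qpair_process_completions()`.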
46 | ||
47 | However, admin commands may have some performance impact as there is only one admin queue | |
48 | pair per NVMe SSD. The NVMe driver will automatically take a cross-process capable lock | |
49 | to enable the sharing of admin queue pair. Further, when each process polls the admin | |
50 | queue for completions, it will only see completions for commands that it originated. | |
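
As an illustration, a management process might read the health log page over the shared
admin queue and poll for its own completion. A sketch, assuming `ctrlr` is an attached
controller handle (error handling is abbreviated):

~~~{.c}
#include <stdbool.h>
#include <stdio.h>

#include "spdk/nvme.h"

static bool g_done;

static void
health_log_done(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "get log page failed\n");
	}
	g_done = true;
}

static int
read_health_log(struct spdk_nvme_ctrlr *ctrlr)
{
	static struct spdk_nvme_health_information_page page;
	int rc;

	g_done = false;
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_HEALTH_INFORMATION,
					      SPDK_NVME_GLOBAL_NS_TAG, &page, sizeof(page),
					      0, health_log_done, NULL);
	if (rc != 0) {
		return rc;
	}

	/* Polling the shared admin queue is safe from any process: the driver
	 * serializes access internally, and this loop only observes completions
	 * for commands submitted by this process. */
	while (!g_done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}
~~~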
51 | ||
52 | # Limitations {#nvme_multi_process_limitations} | |
53 | ||
54 | 1. Two processes sharing memory may not share any cores in their core mask. | |
55 | 2. If a primary process exits while secondary processes are still running, those processes | |
56 | will continue to run. However, a new primary process cannot be created. | |
57 | 3. Applications are responsible for coordinating access to logical blocks. | |
58 | ||
59 | @sa spdk_nvme_probe, spdk_nvme_ctrlr_process_admin_completions |