# SPDK nvmf_tgt test plan

## Objective
The purpose of these tests is to verify correct behavior of the SPDK NVMe-oF
target feature.
These tests are run either per-commit or as nightly tests.

## Configuration
All tests share the same basic configuration file for SPDK nvmf_tgt to run.
The static configuration from the config file consists of setting the number of
per-session queues and enabling RPC for further configuration via RPC calls.
RPC calls used for dynamic configuration consist of (see the sketch after this list):
- creating Malloc backend devices
- creating Null Block backend devices
- constructing NVMe-oF subsystems
- deleting NVMe-oF subsystems
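
A minimal sketch of such calls using the `scripts/rpc.py` helper is shown below; the RPC
method names, sizes, and the example NQN are assumptions and differ between SPDK releases:

```bash
# Sketch only: RPC method names and argument order vary across SPDK releases.
scripts/rpc.py construct_malloc_bdev -b Malloc0 64 512          # 64 MiB malloc bdev, 512 B blocks
scripts/rpc.py construct_null_bdev Null0 64 512                 # 64 MiB null bdev, 512 B blocks
scripts/rpc.py nvmf_subsystem_create nqn.2016-06.io.spdk:cnode1 -a -s SPDK001
scripts/rpc.py delete_nvmf_subsystem nqn.2016-06.io.spdk:cnode1
```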

### Tests

#### Test 1: NVMe-oF namespace on a Logical Volume device
This test configures an SPDK NVMe-oF subsystem backed by logical volume
devices and uses FIO to generate I/Os that target those subsystems.
The logical volume bdevs are backed by malloc bdevs.
Test steps (an end-to-end sketch follows this list):
- Step 1: Assign IP addresses to RDMA NICs.
- Step 2: Start SPDK nvmf_tgt application.
- Step 3: Create malloc bdevs.
- Step 4: Create logical volume stores on malloc bdevs.
- Step 5: Create 10 logical volume bdevs on each logical volume store.
- Step 6: Create NVMe-oF subsystems with logical volume bdev namespaces.
- Step 7: Connect to NVMe-oF subsystems with kernel initiator.
- Step 8: Run FIO with workload parameters: blocksize=256k, iodepth=64,
workload=randwrite; the verify flag is enabled so that FIO reads and verifies
the data written to the logical device. The run time is 10 seconds for a
quick test and 10 minutes for the longer nightly test.
- Step 9: Disconnect kernel initiator from NVMe-oF subsystems.
- Step 10: Delete NVMe-oF subsystems from configuration.
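
A hedged end-to-end sketch of these steps follows. Interface names, addresses, NQNs,
device paths and RPC method names are placeholders, and the exact RPC names depend on
the SPDK release:

```bash
# Step 1: address the RDMA-capable NIC (interface name is a placeholder).
ip addr add 192.168.100.8/24 dev ens1f0

# Step 2: start the target with the shared config file (path is an example).
app/nvmf_tgt/nvmf_tgt -c nvmf.conf &

# Steps 3-5: malloc bdev -> logical volume store -> 10 logical volume bdevs.
scripts/rpc.py construct_malloc_bdev -b Malloc0 512 512
scripts/rpc.py construct_lvol_store Malloc0 lvs0
for i in $(seq 0 9); do
    scripts/rpc.py construct_lvol_bdev -l lvs0 lbd_$i 16
done

# Step 6: subsystem with a logical volume bdev namespace and an RDMA listener.
scripts/rpc.py nvmf_subsystem_create nqn.2016-06.io.spdk:cnode1 -a -s SPDK001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 lvs0/lbd_0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t rdma -a 192.168.100.8 -s 4420

# Step 7: connect from the kernel initiator.
nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420

# Step 8: verified random-write workload (10 s quick run; 600 s nightly).
fio --name=verify --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
    --bs=256k --iodepth=64 --rw=randwrite --verify=md5 --time_based --runtime=10

# Steps 9-10: disconnect the initiator and remove the subsystem.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
scripts/rpc.py delete_nvmf_subsystem nqn.2016-06.io.spdk:cnode1
```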

### Compatibility testing

- Verify functionality of SPDK `nvmf_tgt` with Linux kernel NVMe-oF host
- Exercise various kernel NVMe host parameters (see the example after this list)
  - `nr_io_queues`
  - `queue_size`
- Test discovery subsystem with `nvme` CLI tool
  - Verify that discovery service works correctly with `nvme discover`
  - Verify that large responses work (many subsystems)
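
For example (addresses, NQN and parameter values below are placeholders), the kernel
host parameters map to `nvme connect` options and discovery to `nvme discover`:

```bash
# Connect with explicit queue parameters to exercise the target.
nvme connect -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --nr-io-queues=4 --queue-size=32

# Query the discovery service; with many subsystems configured the discovery
# log page is large and spans multiple Get Log Page transfers.
nvme discover -t rdma -a 192.168.100.8 -s 4420
```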

### Specification compliance

- NVMe base spec compliance (an nvme-cli sketch follows this section)
  - Verify all mandatory admin commands are implemented
    - Get Log Page
    - Identify (including all mandatory CNS values)
      - Identify Namespace
      - Identify Controller
      - Active Namespace List
      - Allocated Namespace List
      - Identify Allocated Namespace
      - Attached Controller List
      - Controller List
    - Abort
    - Set Features
    - Get Features
    - Asynchronous Event Request
    - Keep Alive
  - Verify all mandatory NVM command set I/O commands are implemented
    - Flush
    - Write
    - Read
  - Verify all mandatory log pages
    - Error Information
    - SMART / Health Information
    - Firmware Slot Information
  - Verify all mandatory Get/Set Features
    - Arbitration
    - Power Management
    - Temperature Threshold
    - Error Recovery
    - Number of Queues
    - Write Atomicity Normal
    - Asynchronous Event Configuration
  - Verify all implemented commands behave as required by the specification
- Fabric command processing
  - Verify that Connect commands with invalid parameters are failed with correct response
    - Invalid RECFMT
    - Invalid SQSIZE
    - Invalid SUBNQN, HOSTNQN (too long, incorrect format, not null terminated)
    - QID != 0 before admin queue created
    - CNTLID != 0xFFFF (static controller mode)
  - Verify that non-Fabric commands are only allowed in the correct states
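
Many of the mandatory admin commands, log pages and features listed above can be
exercised from a connected Linux initiator with `nvme-cli`; a rough sketch follows
(the device path and feature values are examples):

```bash
nvme id-ctrl /dev/nvme0                      # Identify Controller
nvme id-ns /dev/nvme0 -n 1                   # Identify Namespace
nvme list-ns /dev/nvme0                      # Active Namespace List
nvme error-log /dev/nvme0                    # Error Information log page
nvme smart-log /dev/nvme0                    # SMART / Health Information log page
nvme fw-log /dev/nvme0                       # Firmware Slot Information log page
nvme get-feature /dev/nvme0 -f 0x07          # Get Features: Number of Queues
nvme set-feature /dev/nvme0 -f 0x04 -v 0x14a # Set Features: Temperature Threshold (example value)
nvme flush /dev/nvme0 -n 1                   # Flush (NVM command set)
```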

### Configuration and RPC

- Verify that invalid NQNs cannot be configured via conf file or RPC (see the sketch below)
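
For example, an NQN must start with `nqn.`, follow the `nqn.yyyy-mm.reverse.domain:identifier`
format and stay within the 223-byte limit. A hedged negative-test sketch (the RPC method
name varies by SPDK release):

```bash
# Both calls are expected to be rejected by the target.
scripts/rpc.py nvmf_subsystem_create "bad_nqn_without_prefix" \
    && echo "FAIL: invalid NQN accepted"
scripts/rpc.py nvmf_subsystem_create "nqn.2016-06.io.spdk:$(printf 'a%.0s' {1..300})" \
    && echo "FAIL: oversized NQN accepted"
```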