# SPDK BlobFS Test Plan

## Current Tests

### Unit tests (asynchronous API)

- Tests BlobFS with Blobstore, with no dependencies on the SPDK bdev layer or event
  framework. Uses a simple DRAM buffer to simulate a block device; all block operations
  complete immediately, so no special event handling is required.
- Current tests include:
  - basic fs initialization and unload
  - open non-existent file fails if SPDK_BLOBFS_OPEN_CREATE not specified
  - open non-existent file creates the file if SPDK_BLOBFS_OPEN_CREATE is specified
  - closing a file fails if there are still open references
  - closing a file that has no open references fails
  - files can be truncated up and down in length
  - three-way rename
  - operations for inserting and traversing buffers in a cache tree
  - allocating and freeing I/O channels

### Unit tests (synchronous API)

- Tests BlobFS with Blobstore, with no dependencies on the SPDK bdev layer or event
  framework. The synchronous API requires a separate thread to handle any asynchronous
  handoffs, such as I/O to disk.
  - basic read/write I/O operations
  - appending to a file whose cache has been flushed and evicted

### RocksDB

- Tests BlobFS as the backing store for a RocksDB database. BlobFS uses the SPDK NVMe
  driver through the SPDK bdev layer as its block device. Uses the RocksDB db_bench
  utility to drive the workloads. Each workload (after the initial sequential insert)
  reloads the database, which validates via the RocksDB MANIFEST file that metadata
  operations completed correctly in the previous run. RocksDB also runs checksums on
  key/value blocks read from disk, verifying data integrity.
  - initialize BlobFS filesystem on NVMe SSD
  - bulk sequential insert of up to 500M keys (16B key, 1000B value)
  - overwrite test - randomly overwrite one of the keys in the database (driving both
    flush and compaction traffic)
  - readwrite test - one thread randomly overwrites a key in the database while up to 16
    threads randomly read keys from the database
  - writesync - same as overwrite, but enables a WAL (write-ahead log)
  - randread - up to 16 threads randomly read keys from the database

## Future tests to add

### Unit tests

- Corrupt data in the DRAM buffer, and confirm that subsequent operations such as BlobFS
  load or opening a blob fail as expected (no panics, etc.)
- Test the synchronous API with multiple synchronous threads. May be implemented
  separately from the existing synchronous unit tests to allow for more sophisticated
  thread synchronization.
- Add tests for out-of-capacity conditions (no more space on disk for additional
  blobs/files)
- Pending addition of a BlobFS superblob, verify that BlobFS load fails with a missing or
  corrupt superblob
- Additional tests to reach 100% unit test coverage

### System/integration tests

- Use fio with the BlobFS FUSE module for more focused data integrity testing on
  individual files.
- Pending directory support (via an SPDK btree module), use the BlobFS FUSE module to do
  things like a Linux kernel compilation. Performance may be poor, but this will heavily
  stress the mechanics of BlobFS.
- Run the RocksDB tests with varying amounts of BlobFS cache.