# NAND Flash SSD Internals {#ssd_internals}

Solid State Devices (SSDs) are complex, and their performance depends on how
they are used. The following description is intended to help software
developers understand what is occurring inside the SSD, so that they can come
up with better software designs. It should not be thought of as a strictly
accurate guide to how SSD hardware really works.

As of this writing, SSDs are generally implemented on top of
[NAND Flash](https://en.wikipedia.org/wiki/Flash_memory) memory. At a
very high level, this media has a few important properties:

* The media is grouped onto chips called NAND dies and each die can
  operate in parallel.
* Flipping a bit is a highly asymmetric process. Flipping it one way is
  easy, but flipping it back is quite hard.

NAND Flash media is grouped into large units often referred to as **erase
blocks**. The size of an erase block is highly implementation specific, but
can be thought of as somewhere between 1MiB and 8MiB. For each erase block,
each bit may be written to (i.e. have its bit flipped from 1 to 0) with
bit-granularity once. In order to write to the erase block a second time, the
entire block must be erased (i.e. all bits in the block are flipped back to
1). This is the asymmetry mentioned above. Erasing a block causes a measurable
amount of wear, and each block may only be erased a limited number of times.
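
The program/erase asymmetry is easy to capture in a toy model. The sketch
below is hypothetical C (not SPDK or firmware code), modeled at page rather
than bit granularity for brevity and with made-up geometry: each page of an
erase block can be written only once, and the only way to make pages writable
again is to erase the whole block, which consumes one of its limited
program/erase cycles.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Toy model of one NAND erase block: each page can be programmed only once
 * after an erase, and the only way to make a page writable again is to erase
 * the entire block, which adds wear. */
#define PAGES_PER_BLOCK 256

struct erase_block {
    bool page_programmed[PAGES_PER_BLOCK];
    unsigned erase_count;   /* real blocks survive a limited number of erases */
};

static bool program_page(struct erase_block *blk, int page)
{
    if (blk->page_programmed[page]) {
        return false;       /* already written since the last erase */
    }
    blk->page_programmed[page] = true;
    return true;
}

static void erase_whole_block(struct erase_block *blk)
{
    memset(blk->page_programmed, 0, sizeof(blk->page_programmed));
    blk->erase_count++;     /* every erase adds wear */
}

int main(void)
{
    struct erase_block blk = {0};

    printf("first write to page 0:  %s\n", program_page(&blk, 0) ? "ok" : "refused");
    printf("second write to page 0: %s\n", program_page(&blk, 0) ? "ok" : "refused");
    erase_whole_block(&blk);    /* whole-block erase is the only way to rewrite */
    printf("write after erase:      %s\n", program_page(&blk, 0) ? "ok" : "refused");
    printf("erase cycles so far:    %u\n", blk.erase_count);
    return 0;
}
```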

SSDs expose an interface to the host system that makes it appear as if the
drive is composed of a set of fixed-size **logical blocks** which are usually
512B or 4KiB in size. These blocks are entirely logical constructs of the
device firmware and they do not statically map to a location on the backing
media. Instead, upon each write to a logical block, a new location on the NAND
Flash is selected and written, and the mapping of the logical block to its
physical location is updated. The algorithm for choosing this location is a
key part of overall SSD performance and is often called the **flash
translation layer** or FTL. This algorithm must correctly distribute the
blocks to account for wear (called **wear-leveling**) and spread them across
NAND dies to improve total available performance. The simplest model is to
group all of the physical media on each die together using an algorithm
similar to RAID and then write to that set sequentially. Real SSDs are far
more complicated, but this is an excellent simple model for software
developers: imagine the drive is simply logging to a RAID volume and updating
an in-memory hash-table.
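
One way to make that logging model concrete is the hypothetical, host-side
sketch below (not SPDK code or real firmware): it uses a flat
logical-to-physical table instead of a hash table and appends every write at a
single log head, ignoring dies, wear-leveling, and garbage collection.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy flash translation layer: every write appends at the "log head"
 * (the next physical slot) and updates the logical-to-physical map. */
#define NUM_LOGICAL_BLOCKS 1024
#define NUM_PHYSICAL_SLOTS 1280    /* more physical than logical space */
#define UNMAPPED UINT32_MAX

static uint32_t l2p[NUM_LOGICAL_BLOCKS];   /* logical block -> physical slot */
static uint32_t write_head;                /* next physical slot in the log */

static void ftl_init(void)
{
    for (uint32_t lba = 0; lba < NUM_LOGICAL_BLOCKS; lba++) {
        l2p[lba] = UNMAPPED;
    }
    write_head = 0;
}

/* Each write picks a brand-new physical location and remaps the LBA; the old
 * physical slot simply becomes stale data.  (A real FTL would garbage collect
 * before reusing slots instead of blindly wrapping.) */
static uint32_t ftl_write(uint32_t lba)
{
    uint32_t slot = write_head++ % NUM_PHYSICAL_SLOTS;
    l2p[lba] = slot;
    return slot;
}

int main(void)
{
    ftl_init();
    printf("LBA 7 first write lands at physical slot %u\n", ftl_write(7));
    printf("LBA 7 rewrite lands at physical slot     %u\n", ftl_write(7));
    printf("LBA 7 currently maps to slot             %u\n", l2p[7]);
    return 0;
}
```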

One consequence of the flash translation layer is that logical blocks do not
necessarily correspond to physical locations on the NAND at all times. In
fact, there is a command that clears the translation for a block. In NVMe,
this command is called deallocate, in SCSI it is called unmap, and in SATA it
is called trim. When a user attempts to read a block that doesn't have a
mapping to a physical location, drives will do one of two things:

1. Immediately complete the read request successfully, without performing any
   data transfer. This is acceptable because the data the drive would return
   is no more valid than the data already in the user's data buffer.
2. Return all 0's as the data.

Choice #1 is much more common, and performing reads to a fully deallocated
device will often show performance far beyond what the drive claims to be
capable of, precisely because it is not actually transferring any data. Write
to all blocks prior to reading them when benchmarking!
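
As a minimal illustration of those two behaviors, the hypothetical toy read
path below (not a real driver, and not the NVMe specification's wording)
either leaves the caller's buffer alone or zeroes it when the logical block
has no mapping.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 4096
#define UNMAPPED   UINT32_MAX

/* Toy read path: when an LBA has no physical mapping (it was trimmed or never
 * written), there is nothing to transfer.  The two behaviors described in the
 * text look like this in miniature. */
static void read_block(uint32_t physical, uint8_t *buf, bool return_zeroes)
{
    if (physical == UNMAPPED) {
        if (return_zeroes) {
            memset(buf, 0, BLOCK_SIZE);   /* choice #2: report all zeroes */
        }
        return;                           /* choice #1: succeed, transfer nothing */
    }
    /* ...otherwise fetch BLOCK_SIZE bytes from the mapped NAND location... */
}

int main(void)
{
    uint8_t buf[BLOCK_SIZE];
    memset(buf, 0xAA, sizeof(buf));       /* stale contents left by the caller */

    read_block(UNMAPPED, buf, false);
    printf("choice #1 leaves the buffer untouched: 0x%02X\n", buf[0]);
    read_block(UNMAPPED, buf, true);
    printf("choice #2 fills it with zeroes:        0x%02X\n", buf[0]);
    return 0;
}
```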

As SSDs are written to, the internal log will eventually consume all of the
available erase blocks. In order to continue writing, the SSD must free some
of them. This process is often called **garbage collection**. All SSDs reserve
some number of erase blocks so that they can guarantee there are free erase
blocks available for garbage collection. Garbage collection generally
proceeds by:

1. Selecting a target erase block (a good mental model is that it picks the
   least recently used erase block).
2. Walking through each entry in the erase block and determining if it is
   still a valid logical block.
3. Moving valid logical blocks by reading them and writing them to a
   different erase block (i.e. the current head of the log).
4. Erasing the entire erase block and marking it available for use.
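
Those four steps map almost directly onto code. The sketch below is a
hypothetical, self-contained C model with tiny made-up geometry and no wear
tracking: a forward map, a reverse map used to decide validity, and a log head
to which the still-valid entries are relocated before the victim block is
declared erased.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy garbage collector: reclaim one erase block by copying out the entries
 * that are still the live copy of some logical block. */
#define SLOTS_PER_BLOCK   4
#define NUM_ERASE_BLOCKS  4
#define NUM_SLOTS         (SLOTS_PER_BLOCK * NUM_ERASE_BLOCKS)
#define NUM_LBAS          8
#define INVALID           UINT32_MAX

static uint32_t l2p[NUM_LBAS];     /* logical block -> physical slot */
static uint32_t p2l[NUM_SLOTS];    /* physical slot -> logical block, or INVALID */
static uint32_t write_head;        /* next free slot at the log head */

static void write_lba(uint32_t lba)
{
    if (l2p[lba] != INVALID) {
        p2l[l2p[lba]] = INVALID;   /* the old copy becomes stale */
    }
    l2p[lba] = write_head;
    p2l[write_head] = lba;
    write_head++;                  /* (no wrap handling in this toy) */
}

static void gc_erase_block(uint32_t blk)   /* step 1: caller picked the victim */
{
    for (uint32_t s = blk * SLOTS_PER_BLOCK; s < (blk + 1) * SLOTS_PER_BLOCK; s++) {
        uint32_t lba = p2l[s];
        if (lba != INVALID && l2p[lba] == s) {   /* step 2: still the live copy? */
            write_lba(lba);                      /* step 3: relocate to the log head */
        }
        p2l[s] = INVALID;                        /* step 4: block is now erased/free */
    }
}

int main(void)
{
    for (uint32_t i = 0; i < NUM_LBAS; i++)  l2p[i] = INVALID;
    for (uint32_t i = 0; i < NUM_SLOTS; i++) p2l[i] = INVALID;

    write_lba(0); write_lba(1); write_lba(2); write_lba(3);  /* fill erase block 0 */
    write_lba(0); write_lba(1);        /* overwrite two LBAs: two slots go stale */

    gc_erase_block(0);                 /* only LBAs 2 and 3 need to be copied */
    printf("LBA 2 now lives in slot %u, LBA 3 in slot %u\n", l2p[2], l2p[3]);
    return 0;
}
```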

Garbage collection is clearly far more efficient when step #3 can be skipped
because the erase block is already empty. There are two ways to make it much
more likely that step #3 can be skipped. The first is that SSDs reserve
additional erase blocks beyond their reported capacity (called
**over-provisioning**), so that statistically it's much more likely that an
erase block will not contain valid data. The second is that software can write
to the blocks on the device in sequential order in a circular pattern,
throwing away old data when it is no longer needed. In this case, the software
guarantees that the least recently used erase blocks will not contain any
valid data that must be moved.
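
For the second approach, the host-side pattern can be as simple as the
hypothetical sketch below (the device and reserve sizes are made up): only a
fixed prefix of the LBA space is ever written, always in ascending, wrapping
order, so by the time the drive garbage collects its oldest erase blocks,
every logical block they hold has already been overwritten.

```c
#include <stdint.h>
#include <stdio.h>

/* Host-side circular write pattern: of DEVICE_LBAS total logical blocks, only
 * the first USABLE_LBAS are ever written, and always in ascending, wrapping
 * order, so the oldest data on the media is by construction no longer valid. */
#define DEVICE_LBAS   1000000u   /* made-up capacity in logical blocks */
#define USABLE_LBAS    800000u   /* the rest is software-reserved spare space */

static uint64_t next_lba(uint64_t writes_issued)
{
    return writes_issued % USABLE_LBAS;   /* sequential and circular */
}

int main(void)
{
    printf("write #0 goes to LBA %llu\n",
           (unsigned long long)next_lba(0));
    printf("write #%u wraps back to LBA %llu\n",
           USABLE_LBAS, (unsigned long long)next_lba(USABLE_LBAS));
    return 0;
}
```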

The amount of over-provisioning a device has can dramatically impact the
performance on random read and write workloads if the workload is filling up
the entire device. However, the same effect can typically be obtained by
simply reserving a given amount of space on the device in software. This
understanding is critical to producing consistent benchmarks. In particular,
if background garbage collection cannot keep up and the drive must switch to
on-demand garbage collection, the latency of writes will increase
dramatically. Therefore, the internal state of the device must be forced into
some known state prior to running benchmarks for consistency. This is usually
accomplished by writing to the device sequentially two times, from start to
finish. For a highly detailed description of exactly how to force an SSD into
a known state for benchmarking, see this
[SNIA Article](http://www.snia.org/sites/default/files/SSS_PTS_Enterprise_v1.1.pdf).
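
The two-pass shortcut can be expressed as a short (and destructive) utility.
The sketch below is hypothetical, Linux-specific C with a made-up device path;
it is not the SNIA procedure, which is considerably more rigorous.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <linux/fs.h>

/* Overwrite an entire block device sequentially, twice, so the drive's FTL
 * reaches a steady state before benchmarking.  This destroys all data! */
#define CHUNK (1024 * 1024)

int main(void)
{
    const char *dev = "/dev/nvme0n1";          /* made-up example device */
    int fd = open(dev, O_WRONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    uint64_t dev_bytes = 0;
    if (ioctl(fd, BLKGETSIZE64, &dev_bytes) != 0) { perror("ioctl"); return 1; }

    void *buf = NULL;
    if (posix_memalign(&buf, 4096, CHUNK) != 0) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }
    memset(buf, 0x5A, CHUNK);                  /* arbitrary non-zero fill pattern */

    for (int pass = 0; pass < 2; pass++) {     /* two full sequential passes */
        for (uint64_t off = 0; off + CHUNK <= dev_bytes; off += CHUNK) {
            if (pwrite(fd, buf, CHUNK, off) != CHUNK) { perror("pwrite"); return 1; }
        }
        /* (any tail smaller than CHUNK is ignored in this sketch) */
    }
    free(buf);
    close(fd);
    return 0;
}
```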