.. _hmm:

=====================================
Heterogeneous Memory Management (HMM)
=====================================

Provide infrastructure and helpers to integrate non-conventional memory (device
memory like GPU on board memory) into the regular kernel path, with the
cornerstone of this being specialized struct page for such memory (see sections
5 to 7 of this document).

HMM also provides optional helpers for SVM (Shared Virtual Memory), i.e.,
allowing a device to transparently access program addresses coherently with
the CPU, meaning that any valid pointer on the CPU is also a valid pointer
for the device. This is becoming mandatory to simplify the use of advanced
heterogeneous computing where GPUs, DSPs, or FPGAs are used to perform various
computations on behalf of a process.

This document is divided as follows: in the first section I expose the problems
related to using device specific memory allocators. In the second section, I
expose the hardware limitations that are inherent to many platforms. The third
section gives an overview of the HMM design. The fourth section explains how
CPU page-table mirroring works and the purpose of HMM in this context. The
fifth section deals with how device memory is represented inside the kernel.
Finally, the last section presents a new migration helper that allows
leveraging the device DMA engine.

.. contents::
   :local:


Problems of using a device specific memory allocator
====================================================

Devices with a large amount of on board memory (several gigabytes) like GPUs
have historically managed their memory through dedicated driver specific APIs.
This creates a disconnect between memory allocated and managed by a device
driver and regular application memory (private anonymous, shared memory, or
regular file backed memory). From here on I will refer to this aspect as split
address space. I use shared address space to refer to the opposite situation:
i.e., one in which any application memory region can be used by a device
transparently.

Split address space happens because devices can only access memory allocated
through a device specific API. This implies that all memory objects in a program
are not equal from the device point of view, which complicates large programs
that rely on a wide set of libraries.

Concretely, this means that code that wants to leverage devices like GPUs needs
to copy objects between generically allocated memory (malloc, mmap private,
mmap shared) and memory allocated through the device driver API (this still
ends up with an mmap but of the device file).

For flat data sets (array, grid, image, ...) this isn't too hard to achieve but
for complex data sets (list, tree, ...) it's hard to get right. Duplicating a
complex data set needs to re-map all the pointer relations between each of its
elements. This is error prone and programs get harder to debug because of the
duplicate data set and addresses.

Split address space also means that libraries cannot transparently use data
they are getting from the core program or another library and thus each library
might have to duplicate its input data set using the device specific memory
allocator. Large projects suffer from this and waste resources because of the
various memory copies.

Duplicating each library API to accept as input or output memory allocated by
each device specific allocator is not a viable option. It would lead to a
combinatorial explosion in the library entry points.

Finally, with the advance of high level language constructs (in C++ but in
other languages too) it is now possible for the compiler to leverage GPUs and
other devices without programmer knowledge. Some compiler identified patterns
are only doable with a shared address space. It is also more reasonable to use
a shared address space for all other patterns.


I/O bus, device memory characteristics
======================================

I/O buses cripple shared address spaces due to a few limitations. Most I/O
buses only allow basic memory access from device to main memory; even cache
coherency is often optional. Access to device memory from a CPU is even more
limited. More often than not, it is not cache coherent.

If we only consider the PCIE bus, then a device can access main memory (often
through an IOMMU) and be cache coherent with the CPUs. However, it only allows
a limited set of atomic operations from the device on main memory. This is worse
in the other direction: the CPU can only access a limited range of the device
memory and cannot perform atomic operations on it. Thus device memory cannot
be considered the same as regular memory from the kernel point of view.

Another crippling factor is the limited bandwidth (~32 GBytes/s with PCIE 4.0
and 16 lanes). This is roughly 30 times less than the fastest GPU memory
(1 TBytes/s). The final limitation is latency. Access to main memory from the
device has an order of magnitude higher latency than when the device accesses
its own memory.

Some platforms are developing new I/O buses or additions/modifications to PCIE
to address some of these limitations (OpenCAPI, CCIX). They mainly allow
two-way cache coherency between CPU and device and allow all atomic operations
the architecture supports. Sadly, not all platforms are following this trend
and some major architectures are left without hardware solutions to these
problems.

So for a shared address space to make sense, not only must we allow devices to
access any memory but we must also permit any memory to be migrated to device
memory while the device is using it (blocking CPU access while it happens).


Shared address space and migration
==================================

HMM intends to provide two main features. The first one is to share the address
space by duplicating the CPU page table in the device page table so the same
address points to the same physical memory for any valid main memory address in
the process address space.

To achieve this, HMM offers a set of helpers to populate the device page table
while keeping track of CPU page table updates. Device page table updates are
not as easy as CPU page table updates. To update the device page table, you
must allocate a buffer (or use a pool of pre-allocated buffers) and write GPU
specific commands in it to perform the update (unmap, cache invalidations,
flush, ...). This cannot be done through common code for all devices. Hence
HMM provides helpers to factor out everything that can be shared, while
leaving the hardware specific details to the device driver.

The second mechanism HMM provides is a new kind of ZONE_DEVICE memory that
allows allocating a struct page for each page of device memory. Those pages
are special because the CPU cannot map them. However, they allow migrating
main memory to device memory using existing migration mechanisms; from the CPU
point of view everything looks like a page that has been swapped out to disk.
Using a struct page gives the easiest and cleanest integration with existing
mm mechanisms. Here again, HMM only provides helpers, first to hotplug new
ZONE_DEVICE memory for the device memory and second to perform migration.
Policy decisions of what and when to migrate are left to the device driver.

Note that any CPU access to a device page triggers a page fault and a migration
back to main memory. For example, when a page backing a given CPU address A is
migrated from a main memory page to a device page, then any CPU access to
address A triggers a page fault and initiates a migration back to main memory.

With these two features, HMM not only allows a device to mirror a process
address space, keeping both CPU and device page tables synchronized, but also
leverages device memory by migrating the part of the data set that is actively
being used by the device.


Address space mirroring implementation and API
==============================================

Address space mirroring's main objective is to allow duplication of a range of
the CPU page table into a device page table; HMM helps keep both synchronized.
A device driver that wants to mirror a process address space must start with
the registration of a mmu_interval_notifier::

 int mmu_interval_notifier_insert(struct mmu_interval_notifier *interval_sub,
                                  struct mm_struct *mm, unsigned long start,
                                  unsigned long length,
                                  const struct mmu_interval_notifier_ops *ops);

During the ops->invalidate() callback the device driver must perform the
update action to the range (mark range read only, or fully unmap, etc.). The
device must complete the update before the driver callback returns.

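As a concrete sketch of such a callback (the driver_data structure,
take_lock()/release_lock(), and driver_update_device_pagetable() are
hypothetical driver constructs in the spirit of the usage pattern below;
mmu_notifier_range_blockable() and mmu_interval_set_seq() are the real mmu
notifier interfaces)::

 static bool driver_invalidate(struct mmu_interval_notifier *interval_sub,
                               const struct mmu_notifier_range *range,
                               unsigned long cur_seq)
 {
      struct driver_data *drv = container_of(interval_sub,
                                             struct driver_data, notifier);

      if (!mmu_notifier_range_blockable(range))
          return false;

      take_lock(drv->update);
      /* Publish the new sequence number under the driver lock so that
       * mmu_interval_read_retry() in the fault path notices this
       * invalidation. */
      mmu_interval_set_seq(interval_sub, cur_seq);
      /* Issue device specific commands to unmap or write protect
       * [range->start, range->end) in the device page table and wait
       * for the device to finish before returning. */
      driver_update_device_pagetable(drv, range->start, range->end);
      release_lock(drv->update);
      return true;
 }

 static const struct mmu_interval_notifier_ops driver_interval_ops = {
      .invalidate = driver_invalidate,
 };
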
When the device driver wants to populate a range of virtual addresses, it can
use::

 long hmm_range_fault(struct hmm_range *range, unsigned int flags);

With the HMM_RANGE_SNAPSHOT flag, it will only fetch present CPU page table
entries and will not trigger a page fault on missing or non-present entries.
Without that flag, it does trigger a page fault on missing or read-only entries
if write access is requested (see below). Page faults use the generic mm page
fault code path just like a CPU page fault.

hmm_range_fault() copies CPU page table entries into its pfns array argument.
Each entry in that array corresponds to an address in the virtual range. HMM
provides a set of flags to help the driver identify special CPU page table
entries.

Locking within the ops->invalidate() callback is the most important aspect the
driver must respect in order to keep things properly synchronized. The usage
pattern is::

 int driver_populate_range(...)
 {
      struct hmm_range range;
      ...

      range.notifier = &interval_sub;
      range.start = ...;
      range.end = ...;
      range.pfns = ...;
      range.flags = ...;
      range.values = ...;
      range.pfn_shift = ...;

      if (!mmget_not_zero(interval_sub.mm))
          return -EFAULT;

 again:
      range.notifier_seq = mmu_interval_read_begin(&interval_sub);
      down_read(&mm->mmap_sem);
      ret = hmm_range_fault(&range, HMM_RANGE_SNAPSHOT);
      if (ret) {
          up_read(&mm->mmap_sem);
          if (ret == -EBUSY)
              goto again;
          return ret;
      }
      up_read(&mm->mmap_sem);

      take_lock(driver->update);
      if (mmu_interval_read_retry(&interval_sub, range.notifier_seq)) {
          release_lock(driver->update);
          goto again;
      }

      /* Use pfns array content to update device page table,
       * under the update lock */

      release_lock(driver->update);
      return 0;
 }

The driver->update lock is the same lock that the driver takes inside its
invalidate() callback. That lock must be held before calling
mmu_interval_read_retry() to avoid any race with a concurrent CPU page table
update.

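To make the "use pfns array content" step concrete, here is a sketch of the
loop a driver might run under the driver->update lock, assuming the flag
layout from the next section (the device page table encoding itself is, of
course, device specific)::

 unsigned long i, npages = (range.end - range.start) >> PAGE_SHIFT;

 for (i = 0; i < npages; i++) {
      unsigned long entry = range.pfns[i];
      unsigned long pfn = entry >> range.pfn_shift;

      /* Skip entries that are not valid in the CPU page table. */
      if (!(entry & range.flags[HMM_PFN_VALID]))
          continue;
      /* ... encode pfn into a device page table entry, granting write
       * access only when (entry & range.flags[HMM_PFN_WRITE]) is set,
       * still under the driver->update lock ... */
 }
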
Leverage default_flags and pfn_flags_mask
=========================================

The hmm_range struct has two fields, default_flags and pfn_flags_mask, that
specify fault or snapshot policy for the whole range instead of having to set
them for each entry in the pfns array.

For instance, if the device flags for range.flags are::

 range.flags[HMM_PFN_VALID] = (1 << 63);
 range.flags[HMM_PFN_WRITE] = (1 << 62);

and the device driver wants pages for a range with at least read permission,
it sets::

 range->default_flags = (1 << 63);
 range->pfn_flags_mask = 0;

and calls hmm_range_fault() as described above. This will fault in all pages
in the range with at least read permission.

Now let's say the driver wants to do the same except for one page in the range
for which it wants to have write permission. The driver then sets::

 range->default_flags = (1 << 63);
 range->pfn_flags_mask = (1 << 62);
 range->pfns[index_of_write] = (1 << 62);

With this, HMM will fault in all pages with at least read permission (i.e.,
valid), and for the address == range->start + (index_of_write << PAGE_SHIFT)
it will fault with write permission, i.e., if the CPU pte does not have write
permission set then HMM will call handle_mm_fault().

Note that HMM will populate the pfns array with write permission for any page
that is mapped with CPU write permission no matter what values are set
in default_flags or pfn_flags_mask.


Represent and manage device memory from core kernel point of view
=================================================================

Several different designs were tried to support device memory. The first one
used a device specific data structure to keep information about migrated memory
and HMM hooked itself in various places of mm code to handle any access to
addresses that were backed by device memory. It turns out that this ended up
replicating most of the fields of struct page and also needed many kernel code
paths to be updated to understand this new kind of memory.

Most kernel code paths never try to access the memory behind a page
but only care about struct page contents. Because of this, HMM switched to
directly using struct page for device memory which left most kernel code paths
unaware of the difference. We only need to make sure that no one ever tries to
map those pages from the CPU side.

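As a sketch of how a driver hotplugs such ZONE_DEVICE memory (struct
dev_pagemap, devm_memremap_pages(), and request_free_mem_region() are the
real interfaces; drv, driver_page_free(), and driver_migrate_to_ram() are
hypothetical)::

 static const struct dev_pagemap_ops driver_pagemap_ops = {
      /* Called when the last reference to a device page is dropped. */
      .page_free        = driver_page_free,
      /* Called on CPU fault to a device private page; must migrate the
       * page back to main memory (see the next section). */
      .migrate_to_ram   = driver_migrate_to_ram,
 };

 /* Reserve a range of physical address space to stand in for the device
  * memory; the CPU never accesses it directly. */
 res = request_free_mem_region(&iomem_resource, size, "device-private");
 if (IS_ERR(res))
      return PTR_ERR(res);

 drv->pagemap.type = MEMORY_DEVICE_PRIVATE;
 drv->pagemap.res = *res;
 drv->pagemap.ops = &driver_pagemap_ops;
 addr = devm_memremap_pages(drv->dev, &drv->pagemap);
 if (IS_ERR(addr))
      return PTR_ERR(addr);
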
Migration to and from device memory
===================================

Because the CPU cannot access device memory, migration must use the device DMA
engine to perform copies from and to device memory. For this we need the
migrate_vma_setup(), migrate_vma_pages(), and migrate_vma_finalize() helpers.

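A minimal sketch of the main memory to device memory direction follows;
driver_alloc_device_page(), driver_dma_copy(), and driver_dma_wait() are
hypothetical driver helpers, while the migrate_vma_*() calls, the
MIGRATE_PFN_* flags, and migrate_pfn()/migrate_pfn_to_page() come from
include/linux/migrate.h::

 struct migrate_vma args = {
      .vma   = vma,
      .start = start,
      .end   = end,
      .src   = src_pfns,
      .dst   = dst_pfns,
 };
 unsigned long i;
 int ret;

 ret = migrate_vma_setup(&args);
 if (ret)
      return ret;

 for (i = 0; i < args.npages; i++) {
      struct page *dpage;

      /* Entries the core mm refused to migrate are not marked. */
      if (!(args.src[i] & MIGRATE_PFN_MIGRATE))
          continue;
      dpage = driver_alloc_device_page(drv);
      /* Program the device DMA engine to copy the source page into
       * the device page. */
      driver_dma_copy(drv, migrate_pfn_to_page(args.src[i]), dpage);
      args.dst[i] = migrate_pfn(page_to_pfn(dpage)) |
                    MIGRATE_PFN_LOCKED;
 }
 /* Wait for all DMA copies to finish before installing the new pages. */
 driver_dma_wait(drv);

 migrate_vma_pages(&args);
 migrate_vma_finalize(&args);
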
Memory cgroup (memcg) and rss accounting
========================================

For now, device memory is accounted as any regular page in rss counters (either
anonymous if the device page is used for anonymous memory, file if it is used
for file backed pages, or shmem if it is used for shared memory). This is a
deliberate choice to keep existing applications, that might start using device
memory without knowing about it, running unimpacted.

A drawback is that the OOM killer might kill an application using a lot of
device memory and not a lot of regular system memory and thus not free much
system memory. We want to gather more real world experience on how applications
and systems react under memory pressure in the presence of device memory before
deciding to account device memory differently.


The same decision was made for the memory cgroup. Device memory pages are
accounted against the same memory cgroup that a regular page would be
accounted to. This does simplify migration to and from device memory. This
also means that migration back from device memory to regular memory cannot
fail because it would go above the memory cgroup limit. We might revisit this
choice later on once we get more experience in how device memory is used and
its impact on memory resource control.


Note that device memory can never be pinned by a device driver nor through GUP
and thus such memory is always freed upon process exit, or, in the case of
shared memory or file backed memory, when the last reference is dropped.