The x86 kvm shadow mmu
======================

The mmu (in arch/x86/kvm, files mmu.[ch] and paging_tmpl.h) is responsible
for presenting a standard x86 mmu to the guest, while translating guest
physical addresses to host physical addresses.

The mmu code attempts to satisfy the following requirements:

- correctness: the guest should not be able to determine that it is running
               on an emulated mmu except for timing (we attempt to comply
               with the specification, not emulate the characteristics of
               a particular implementation such as tlb size)
- security:    the guest must not be able to touch host memory not assigned
               to it
- performance: minimize the performance penalty imposed by the mmu
- scaling:     need to scale to large memory and large vcpu guests
- hardware:    support the full range of x86 virtualization hardware
- integration: Linux memory management code must be in control of guest memory
               so that swapping, page migration, page merging, transparent
               hugepages, and similar features work without change
- dirty tracking: report writes to guest memory to enable live migration
               and framebuffer-based displays
- footprint:   keep the amount of pinned kernel memory low (most memory
               should be shrinkable)
- reliability: avoid multipage or GFP_ATOMIC allocations

Acronyms
========

pfn   host page frame number
hpa   host physical address
hva   host virtual address
gfn   guest frame number
gpa   guest physical address
gva   guest virtual address
ngpa  nested guest physical address
ngva  nested guest virtual address
pte   page table entry (used also to refer generically to paging structure
      entries)
gpte  guest pte (referring to gfns)
spte  shadow pte (referring to pfns)
tdp   two dimensional paging (vendor neutral term for NPT and EPT)

Virtual and real hardware supported
===================================

The mmu supports first-generation mmu hardware, which allows an atomic switch
of the current paging mode and cr3 during guest entry, as well as
two-dimensional paging (AMD's NPT and Intel's EPT).  The emulated hardware
it exposes is the traditional 2/3/4 level x86 mmu, with support for global
pages, pae, pse, pse36, cr0.wp, and 1GB pages.  Work is in progress to support
exposing NPT capable hardware on NPT capable hosts.

Translation
===========

The primary job of the mmu is to program the processor's mmu to translate
addresses for the guest.  Different translations are required at different
times:

- when guest paging is disabled, we translate guest physical addresses to
  host physical addresses (gpa->hpa)
- when guest paging is enabled, we translate guest virtual addresses, to
  guest physical addresses, to host physical addresses (gva->gpa->hpa)
- when the guest launches a guest of its own, we translate nested guest
  virtual addresses, to nested guest physical addresses, to guest physical
  addresses, to host physical addresses (ngva->ngpa->gpa->hpa)

The primary challenge is to encode between 1 and 3 translations into hardware
that supports only 1 (traditional) and 2 (tdp) translations.  When the
number of required translations matches the hardware, the mmu operates in
direct mode; otherwise it operates in shadow mode (see below).

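The mode selection itself is simple; the sketch below (illustrative only, not
kvm code) just restates the rule above, where "required" is the number of
translations the guest configuration needs and "hardware" is the number the
host mmu can encode:

    enum mmu_mode { MMU_MODE_DIRECT, MMU_MODE_SHADOW };

    /*
     * required: 1 (guest paging disabled), 2 (guest paging enabled),
     *           3 (nested guest); hardware: 1 (traditional) or 2 (tdp).
     */
    static enum mmu_mode choose_mmu_mode(int required, int hardware)
    {
            return required == hardware ? MMU_MODE_DIRECT : MMU_MODE_SHADOW;
    }
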
Memory
======

Guest memory (gpa) is part of the user address space of the process that is
using kvm.  Userspace defines the translation between guest addresses and user
addresses (gpa->hva); note that two gpas may alias to the same hva, but not
vice versa.

These hvas may be backed using any method available to the host: anonymous
memory, file backed memory, and device memory.  Memory might be paged by the
host at any time.

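For illustration, a minimal userspace sketch of how a gpa->hva translation is
established: one memslot backed by anonymous memory is registered with the
KVM_SET_USER_MEMORY_REGION ioctl.  Error handling is omitted and vm_fd is
assumed to be a VM file descriptor obtained from KVM_CREATE_VM.

    #include <linux/kvm.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    static int map_guest_ram(int vm_fd)
    {
            size_t size = 1 << 20;          /* 1MB of guest memory */
            void *hva = mmap(NULL, size, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            struct kvm_userspace_memory_region region = {
                    .slot            = 0,
                    .flags           = 0,
                    .guest_phys_addr = 0,   /* gpa where the slot starts */
                    .memory_size     = size,
                    .userspace_addr  = (unsigned long)hva,  /* backing hva */
            };

            /* kvm now knows that gpa [0, 1MB) maps to hva [hva, hva + 1MB) */
            return ioctl(vm_fd, KVM_SET_USER_MEMORY_REGION, &region);
    }
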
Events
======

The mmu is driven by events, some from the guest, some from the host.

Guest generated events:
- writes to control registers (especially cr3)
- invlpg/invlpga instruction execution
- access to missing or protected translations

Host generated events:
- changes in the gpa->hpa translation (either through gpa->hva changes or
  through hva->hpa changes)
- memory pressure (the shrinker)

Shadow pages
============

The principal data structure is the shadow page, 'struct kvm_mmu_page'.  A
shadow page contains 512 sptes, which can be either leaf or nonleaf sptes.  A
shadow page may contain a mix of leaf and nonleaf sptes.

A nonleaf spte allows the hardware mmu to reach the leaf pages and
is not related to a translation directly.  It points to other shadow pages.

A leaf spte corresponds to either one or two translations encoded into
one paging structure entry.  These are always the lowest level of the
translation stack, with optional higher level translations left to NPT/EPT.
Leaf ptes point at guest pages.

The following table shows translations encoded by leaf ptes, with higher-level
translations in parentheses:

 Non-nested guests:
  nonpaging:     gpa->hpa
  paging:        gva->gpa->hpa
  paging, tdp:   (gva->)gpa->hpa
 Nested guests:
  non-tdp:       ngva->gpa->hpa  (*)
  tdp:           (ngva->)ngpa->gpa->hpa

(*) the guest hypervisor will encode the ngva->gpa translation into its page
    tables if npt is not present

Shadow pages contain the following information (an abbreviated struct sketch
follows the list):
  role.level:
    The level in the shadow paging hierarchy that this shadow page belongs to.
    1=4k sptes, 2=2M sptes, 3=1G sptes, etc.
  role.direct:
    If set, leaf sptes reachable from this page are for a linear range.
    Examples include real mode translation, large guest pages backed by small
    host pages, and gpa->hpa translations when NPT or EPT is active.
    The linear range starts at (gfn << PAGE_SHIFT) and its size is determined
    by role.level (2MB for first level, 1GB for second level, 0.5TB for third
    level, 256TB for fourth level)
    If clear, this page corresponds to a guest page table denoted by the gfn
    field.
  role.quadrant:
    When role.cr4_pae=0, the guest uses 32-bit gptes while the host uses 64-bit
    sptes.  That means a guest page table contains more ptes than the host,
    so multiple shadow pages are needed to shadow one guest page.
    For first-level shadow pages, role.quadrant can be 0 or 1 and denotes the
    first or second 512-gpte block in the guest page table.  For second-level
    page tables, each 32-bit gpte is converted to two 64-bit sptes
    (since each first-level guest page is shadowed by two first-level
    shadow pages) so role.quadrant takes values in the range 0..3.  Each
    quadrant maps 1GB virtual address space.
  role.access:
    Inherited guest access permissions in the form uwx.  Note execute
    permission is positive, not negative.
  role.invalid:
    The page is invalid and should not be used.  It is a root page that is
    currently pinned (by a cpu hardware register pointing to it); once it is
    unpinned it will be destroyed.
  role.cr4_pae:
    Contains the value of cr4.pae for which the page is valid (e.g. whether
    32-bit or 64-bit gptes are in use).
  role.nxe:
    Contains the value of efer.nxe for which the page is valid.
  role.cr0_wp:
    Contains the value of cr0.wp for which the page is valid.
  role.smep_andnot_wp:
    Contains the value of cr4.smep && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  role.smap_andnot_wp:
    Contains the value of cr4.smap && !cr0.wp for which the page is valid
    (pages for which this is true are different from other pages; see the
    treatment of cr0.wp=0 below).
  role.smm:
    Is 1 if the page is valid in system management mode.  This field
    determines which of the kvm_memslots array was used to build this
    shadow page; it is also used to go back from a struct kvm_mmu_page
    to a memslot, through the kvm_memslots_for_spte_role macro and
    __gfn_to_memslot.
  gfn:
    Either the guest page table containing the translations shadowed by this
    page, or the base page frame for linear translations.  See role.direct.
  spt:
    A pageful of 64-bit sptes containing the translations for this page.
    Accessed by both kvm and hardware.
    The page pointed to by spt will have its page->private pointing back
    at the shadow page structure.
    sptes in spt point either at guest pages, or at lower-level shadow pages.
    Specifically, if sp1 and sp2 are shadow pages, then sp1->spt[n] may point
    at __pa(sp2->spt).  sp2 will point back at sp1 through parent_pte.
    The spt array forms a DAG structure with the shadow page as a node, and
    guest pages as leaves.
  gfns:
    An array of 512 guest frame numbers, one for each present pte.  Used to
    perform a reverse map from a pte to a gfn.  When role.direct is set, any
    element of this array can be calculated from the gfn field when used; in
    that case, the array of gfns is not allocated.  See role.direct and gfn.
  root_count:
    A counter keeping track of how many hardware registers (guest cr3 or
    pdptrs) are now pointing at the page.  While this counter is nonzero, the
    page cannot be destroyed.  See role.invalid.
  parent_ptes:
    The reverse mapping for the pte/ptes pointing at this page's spt.  If
    parent_ptes bit 0 is zero, only one spte points at this page and
    parent_ptes points at this single spte; otherwise, multiple sptes point
    at this page and (parent_ptes & ~0x1) points at a data structure with a
    list of parent sptes.
  unsync:
    If true, then the translations in this page may not match the guest's
    translation.  This is equivalent to the state of the tlb when a pte is
    changed but before the tlb entry is flushed.  Accordingly, unsync ptes
    are synchronized when the guest executes invlpg or flushes its tlb by
    other means.  Valid for leaf pages.
  unsync_children:
    How many sptes in the page point at pages that are unsync (or have
    unsynchronized children).
  unsync_child_bitmap:
    A bitmap indicating which sptes in spt point (directly or indirectly) at
    pages that may be unsynchronized.  Used to quickly locate all
    unsynchronized pages reachable from a given page.
  mmu_valid_gen:
    Generation number of the page.  It is compared with kvm->arch.mmu_valid_gen
    during hash table lookup, and used to skip invalidated shadow pages (see
    "Zapping all pages" below.)
  clear_spte_count:
    Only present on 32-bit hosts, where a 64-bit spte cannot be written
    atomically.  The reader uses this while running without holding the MMU
    lock to detect in-progress updates and retry the read until the writer
    has finished the write.
  write_flooding_count:
    A guest may write to a page table many times, causing a lot of
    emulations if the page needs to be write-protected (see "Synchronized
    and unsynchronized pages" below).  Leaf pages can be unsynchronized
    so that they do not trigger frequent emulation, but this is not
    possible for non-leaf pages.  This field counts the number of emulations
    since the last time the page table was actually used; if emulation
    is triggered too frequently on this page, KVM will unmap the page
    to avoid emulation in the future.

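For reference, the sketch below collects the fields above into an abbreviated
structure.  It is assembled from the descriptions in this document rather than
copied from a particular kernel version, so names, types and layout may differ
from the real struct kvm_mmu_page and union kvm_mmu_page_role in
arch/x86/include/asm/kvm_host.h (kernel-internal types such as gfn_t, u64 and
DECLARE_BITMAP are assumed).

    union kvm_mmu_page_role {
            unsigned word;
            struct {
                    unsigned level:4;          /* 1=4k sptes, 2=2M, 3=1G, ... */
                    unsigned cr4_pae:1;        /* 32-bit or 64-bit gptes */
                    unsigned quadrant:2;       /* part of a 32-bit guest table */
                    unsigned direct:1;         /* linear range, no guest table */
                    unsigned access:3;         /* inherited guest uwx perms */
                    unsigned invalid:1;        /* pinned root awaiting destruction */
                    unsigned nxe:1;            /* efer.nxe the page is valid for */
                    unsigned cr0_wp:1;         /* cr0.wp the page is valid for */
                    unsigned smep_andnot_wp:1; /* cr4.smep && !cr0.wp */
                    unsigned smap_andnot_wp:1; /* cr4.smap && !cr0.wp */
                    unsigned smm:1;            /* valid in system management mode */
            };
    };

    struct kvm_mmu_page {
            union kvm_mmu_page_role role;
            gfn_t gfn;                  /* guest page table, or base of range */
            u64 *spt;                   /* one page holding 512 sptes */
            gfn_t *gfns;                /* per-spte gfns; NULL if role.direct */
            int root_count;             /* hardware registers pointing here */
            unsigned long parent_ptes;  /* tagged pointer: one spte or a list */
            bool unsync;
            unsigned int unsync_children;
            DECLARE_BITMAP(unsync_child_bitmap, 512);
            unsigned long mmu_valid_gen;
            int clear_spte_count;       /* 32-bit hosts only */
            unsigned int write_flooding_count;
    };
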
Reverse map
===========

The mmu maintains a reverse mapping whereby all ptes mapping a page can be
reached given its gfn.  This is used, for example, when swapping out a page.

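Reverse map entries are kept compact with the same single-vs-many encoding
described for parent_ptes above.  The sketch below is illustrative only; the
type and helper names are invented and do not match the kernel's internal
structures in mmu.c.

    #include <stdint.h>

    struct parent_list {
            uint64_t *sptes[16];        /* several parent spte pointers */
            struct parent_list *next;
    };

    static int rmap_is_single(unsigned long rmap)
    {
            return (rmap & 1) == 0;     /* bit 0 clear: rmap is the lone spte */
    }

    static uint64_t *rmap_single_spte(unsigned long rmap)
    {
            return (uint64_t *)rmap;
    }

    static struct parent_list *rmap_many(unsigned long rmap)
    {
            return (struct parent_list *)(rmap & ~1ul);  /* strip the tag bit */
    }
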
Synchronized and unsynchronized pages
=====================================

The guest uses two events to synchronize its tlb and page tables: tlb flushes
and page invalidations (invlpg).

A tlb flush means that we need to synchronize all sptes reachable from the
guest's cr3.  This is expensive, so we keep all guest page tables write
protected, and synchronize sptes to gptes when a gpte is written.

A special case is when a guest page table is reachable from the current
guest cr3.  In this case, the guest is obliged to issue an invlpg instruction
before using the translation.  We take advantage of that by removing write
protection from the guest page, and allowing the guest to modify it freely.
We synchronize modified gptes when the guest invokes invlpg.  This reduces
the amount of emulation we have to do when the guest modifies multiple gptes,
or when a guest page is no longer used as a page table and is used for
random guest data.

As a side effect we have to resynchronize all reachable unsynchronized shadow
pages on a tlb flush.

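The resynchronization on a tlb flush walks down from the shadow root, using
unsync and unsync_child_bitmap (described above) to skip subtrees with nothing
to do.  The recursion below is a sketch of that idea only; the helper names
are invented, and the real code in mmu.c (mmu_sync_children and friends) is
iterative and takes locking into account.

    static void sync_subtree(struct kvm_vcpu *vcpu, struct kvm_mmu_page *sp)
    {
            int i;

            if (sp->unsync)
                    sync_page_with_gptes(vcpu, sp);       /* hypothetical helper */

            /* visit only children that may hold unsynchronized pages */
            for_each_set_bit(i, sp->unsync_child_bitmap, 512)
                    sync_subtree(vcpu, child_shadow_page(sp, i));  /* hypothetical */
    }
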

Reaction to events
==================

- guest page fault (or npt page fault, or ept violation)

This is the most complicated event.  The cause of a page fault can be:

  - a true guest fault (the guest translation won't allow the access) (*)
  - access to a missing translation
  - access to a protected translation
    - when logging dirty pages, memory is write protected
    - synchronized shadow pages are write protected (*)
  - access to untranslatable memory (mmio)

(*) not applicable in direct mode

Handling a page fault is performed as follows (a compressed sketch of the flow
appears after the list):

 - if the RSV bit of the error code is set, the page fault is caused by the
   guest accessing MMIO and cached MMIO information is available.
   - walk shadow page table
   - check for valid generation number in the spte (see "Fast invalidation of
     MMIO sptes" below)
   - cache the information to vcpu->arch.mmio_gva, vcpu->arch.access and
     vcpu->arch.mmio_gfn, and call the emulator
 - If both the P bit and the R/W bit of the error code are set, this could
   possibly be handled as a "fast page fault" (fixed without taking the MMU
   lock).  See the description in Documentation/virtual/kvm/locking.txt.
 - if needed, walk the guest page tables to determine the guest translation
   (gva->gpa or ngpa->gpa)
   - if permissions are insufficient, reflect the fault back to the guest
 - determine the host page
   - if this is an mmio request, there is no host page; cache the info to
     vcpu->arch.mmio_gva, vcpu->arch.access and vcpu->arch.mmio_gfn
 - walk the shadow page table to find the spte for the translation,
   instantiating missing intermediate page tables as necessary
   - If this is an mmio request, cache the mmio info to the spte and set some
     reserved bit on the spte (see callers of kvm_mmu_set_mmio_spte_mask)
 - try to unsynchronize the page
   - if successful, we can let the guest continue and modify the gpte
 - emulate the instruction
   - if failed, unshadow the page and let the guest continue
 - update any translations that were modified by the instruction

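As promised above, a compressed sketch of the same flow.  Every helper and
constant used here, other than the PFERR_* error code bits, is hypothetical
shorthand for the corresponding step; locking, mmio caching details and error
paths are omitted, and the real handler is spread across several functions in
mmu.c and paging_tmpl.h.

    static int page_fault_sketch(struct kvm_vcpu *vcpu, gva_t addr, u32 error_code)
    {
            if (error_code & PFERR_RSVD_MASK)
                    /* cached MMIO: check the spte generation, call the emulator */
                    return handle_cached_mmio(vcpu, addr);

            if ((error_code & PFERR_PRESENT_MASK) &&
                (error_code & PFERR_WRITE_MASK) &&
                try_fast_page_fault(vcpu, addr))
                    return 0;                   /* fixed without the MMU lock */

            gpa_t gpa = walk_guest_tables(vcpu, addr, error_code);
            if (gpa == GPA_INVALID)
                    return reflect_fault_to_guest(vcpu);

            u64 *sptep = walk_shadow_tables(vcpu, addr); /* builds missing levels */
            if (gpa_is_mmio(vcpu, gpa))
                    return cache_mmio_in_spte(vcpu, sptep, gpa);

            if (fault_is_write_protect(error_code) && try_unsync_page(vcpu, gpa))
                    return 0;                   /* guest may now modify the gpte */

            return emulate_instruction_and_update_sptes(vcpu, addr);
    }
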
invlpg handling:

  - walk the shadow page hierarchy and drop affected translations
  - try to reinstantiate the indicated translation in the hope that the
    guest will use it in the near future

Guest control register updates:

- mov to cr3
  - look up new shadow roots
  - synchronize newly reachable shadow pages

- mov to cr0/cr4/efer
  - set up mmu context for new paging mode
  - look up new shadow roots
  - synchronize newly reachable shadow pages

Host translation updates:

  - mmu notifier called with updated hva
  - look up affected sptes through reverse map
  - drop (or update) translations

Emulating cr0.wp
================

If tdp is not enabled, the host must keep cr0.wp=1 so page write protection
works for the guest kernel, not guest userspace.  When the guest
cr0.wp=1, this does not present a problem.  However, when the guest cr0.wp=0,
we cannot map the permissions for gpte.u=1, gpte.w=0 to any spte (the
semantics require allowing any guest kernel access plus user read access).

We handle this by mapping the permissions to two possible sptes, depending
on fault type:

- kernel write fault: spte.u=0, spte.w=1 (allows full kernel access,
  disallows user access)
- read fault: spte.u=1, spte.w=0 (allows full read access, disallows kernel
  write access)

(user write faults generate a #PF)

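The same mapping expressed as code (illustrative only; the struct and helper
names are invented, and the SMEP/SMAP complications discussed next are
ignored):

    struct spte_perms { int user; int write; };

    /* spte permissions for a gpte with u=1, w=0 while guest cr0.wp=0 */
    static struct spte_perms wp0_spte_for_fault(int is_write, int is_user)
    {
            /* user write faults are reflected back to the guest as #PF */
            if (is_write && !is_user)
                    return (struct spte_perms){ .user = 0, .write = 1 };
            return (struct spte_perms){ .user = 1, .write = 0 };
    }
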
In the first case there are two additional complications:
- if CR4.SMEP is enabled: since we've turned the page into a kernel page,
  the kernel may now execute it.  We handle this by also setting spte.nx.
  If we get a user fetch or read fault, we'll change spte.u=1 and
  spte.nx=gpte.nx back.  For this to work, KVM forces EFER.NX to 1 when
  shadow paging is in use.
- if CR4.SMAP is disabled: since the page has been changed to a kernel
  page, it cannot be reused when CR4.SMAP is enabled.  We set
  CR4.SMAP && !CR0.WP into the shadow page's role to avoid this case.  Note
  that we need not handle the case where CR4.SMAP is enabled, since KVM will
  directly inject a #PF to the guest due to the failed permission check.

To prevent an spte that was converted into a kernel page with cr0.wp=0
from being written by the kernel after cr0.wp has changed to 1, we make
the value of cr0.wp part of the page role.  This means that an spte created
with one value of cr0.wp cannot be used when cr0.wp has a different value -
it will simply be missed by the shadow page lookup code.  A similar issue
exists when an spte created with cr0.wp=0 and cr4.smep=0 is used after
changing cr4.smep to 1.  To avoid this, the value of !cr0.wp && cr4.smep
is also made a part of the page role.

Large pages
===========

The mmu supports all combinations of large and small guest and host pages.
Supported page sizes include 4k, 2M, 4M, and 1G.  4M pages are treated as
two separate 2M pages, on both guest and host, since the mmu always uses PAE
paging.

To instantiate a large spte, four constraints must be satisfied:

- the spte must point to a large host page
- the guest pte must be a large pte of at least equivalent size (if tdp is
  enabled, there is no guest pte and this condition is satisfied)
- if the spte will be writeable, the large page frame may not overlap any
  write-protected pages
- the guest page must be wholly contained by a single memory slot

To check the last two conditions, the mmu maintains a ->disallow_lpage set of
arrays for each memory slot and large page size.  Every write protected page
causes its disallow_lpage to be incremented, thus preventing instantiation of
a large spte.  The frames at the end of an unaligned memory slot have
artificially inflated ->disallow_lpages so they can never be instantiated.

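A sketch of the bookkeeping in the spirit of ->disallow_lpage (the names and
layout here are invented for illustration; the kernel's per-slot arrays are
built differently): each large-page-sized region of a slot carries a count of
reasons it may not be mapped large, and a large spte is allowed only while the
count for the region covering the gfn is zero.

    struct lpage_info { int disallow_lpage; };

    /* one such array exists per memslot and per large page size */
    static struct lpage_info *lpage_region(struct lpage_info *slot_info,
                                           unsigned long gfn,
                                           unsigned long base_gfn,
                                           unsigned long pages_per_lpage)
    {
            return &slot_info[(gfn - base_gfn) / pages_per_lpage];
    }

    static void account_write_protected(struct lpage_info *slot_info,
                                        unsigned long gfn, unsigned long base_gfn,
                                        unsigned long pages_per_lpage)
    {
            lpage_region(slot_info, gfn, base_gfn, pages_per_lpage)->disallow_lpage++;
    }

    static int large_spte_allowed(struct lpage_info *slot_info, unsigned long gfn,
                                  unsigned long base_gfn,
                                  unsigned long pages_per_lpage)
    {
            return lpage_region(slot_info, gfn, base_gfn,
                                pages_per_lpage)->disallow_lpage == 0;
    }
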
Zapping all pages (page generation count)
=========================================

For large memory guests, walking and zapping all pages is really slow
(because there are a lot of pages), and also blocks memory accesses of
all VCPUs because it needs to hold the MMU lock.

To make this more scalable, kvm maintains a global generation number
which is stored in kvm->arch.mmu_valid_gen.  Every shadow page stores
the current global generation number into sp->mmu_valid_gen when it
is created.  Pages with a mismatching generation number are "obsolete".

When KVM needs to zap all shadow page sptes, it simply increases the global
generation number and then reloads the root shadow pages on all vcpus.  As
the VCPUs create new shadow page tables, the old pages are not used because
of the mismatching generation number.

KVM then walks through all pages and zaps obsolete pages.  While the zap
operation needs to take the MMU lock, the lock can be released periodically
so that the VCPUs can make progress.

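The obsolescence check itself is a single comparison; the sketch below mirrors
the description above (the corresponding helper in mmu.c is is_obsolete_sp(),
whose exact form may differ between kernel versions):

    static bool sp_is_obsolete(struct kvm *kvm, struct kvm_mmu_page *sp)
    {
            /* pages created before the last "zap all" request are obsolete */
            return sp->mmu_valid_gen != kvm->arch.mmu_valid_gen;
    }
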
Fast invalidation of MMIO sptes
===============================

As mentioned in "Reaction to events" above, kvm will cache MMIO
information in leaf sptes.  When a new memslot is added or an existing
memslot is changed, this information may become stale and needs to be
invalidated.  This also needs to hold the MMU lock while walking all
shadow pages, and is made more scalable with a similar technique.

MMIO sptes have a few spare bits, which are used to store a
generation number.  The global generation number is stored in
kvm_memslots(kvm)->generation, and increased whenever guest memory info
changes.  This generation number is distinct from the one described in
the previous section.

When KVM finds an MMIO spte, it checks the generation number of the spte.
If the generation number of the spte does not equal the global generation
number, it will ignore the cached MMIO information and handle the page
fault through the slow path.

Since only 19 bits are used to store the generation number in the mmio spte,
all pages are zapped when there is an overflow.

Unfortunately, a single memory access might access kvm_memslots(kvm) multiple
times, the last one happening when the generation number is retrieved and
stored into the MMIO spte.  Thus, the MMIO spte might be created based on
out-of-date information, but with an up-to-date generation number.

To avoid this, the generation number is incremented again after synchronize_srcu
returns; thus, the low bit of kvm_memslots(kvm)->generation is only 1 during a
memslot update, while some SRCU readers might be using the old copy.  We do not
want to use an MMIO spte created with an odd generation number, and we can do
this without losing a bit in the MMIO spte.  The low bit of the generation
is not stored in the MMIO spte, and is presumed zero when it is extracted out
of the spte.  If KVM is unlucky and creates an MMIO spte while the low bit is
1, the next access to the spte will always be a cache miss.

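A simplified sketch of the generation check (names and exact bit layout are
invented for illustration; the real helpers live in mmu.c):

    #define MMIO_GEN_MASK   ((1u << 19) - 1)    /* the spare spte bits */

    static unsigned int mmio_spte_generation(unsigned int memslots_generation)
    {
            /*
             * The low bit is 1 only while a memslot update is in progress; it
             * is not stored in the spte and is presumed zero on extraction, so
             * an spte created during an update can never match later.
             */
            return (memslots_generation & ~1u) & MMIO_GEN_MASK;
    }

    static int mmio_spte_is_stale(unsigned int spte_gen,
                                  unsigned int memslots_generation)
    {
            /* a mismatch sends the fault down the slow path */
            return spte_gen != mmio_spte_generation(memslots_generation);
    }
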

Further reading
===============

- NPT presentation from KVM Forum 2008
  http://www.linux-kvm.org/wiki/images/c/c8/KvmForum2008%24kdf2008_21.pdf