The memory API
==============

The memory API models the memory and I/O buses and controllers of a QEMU
machine.  It attempts to allow modelling of:

 - ordinary RAM
 - memory-mapped I/O (MMIO)
 - memory controllers that can dynamically reroute physical memory regions
   to different destinations

The memory model provides support for

 - tracking RAM changes by the guest
 - setting up coalesced memory for kvm
 - setting up ioeventfd regions for kvm

Memory is modelled as an acyclic graph of MemoryRegion objects.  Sinks
(leaves) are RAM and MMIO regions, while other nodes represent
buses, memory controllers, and memory regions that have been rerouted.

In addition to MemoryRegion objects, the memory API provides AddressSpace
objects for every root and possibly for intermediate MemoryRegions too.
These represent memory as seen from the CPU or a device's viewpoint.

Types of regions
----------------

There are multiple types of memory regions (all represented by a single C type
MemoryRegion):

- RAM: a RAM region is simply a range of host memory that can be made available
  to the guest.
  You typically initialize these with memory_region_init_ram().  Some special
  purposes require the variants memory_region_init_resizeable_ram(),
  memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().

- MMIO: a range of guest memory that is implemented by host callbacks;
  each read or write causes a callback to be called on the host.
  You initialize these with memory_region_init_io(), passing it a
  MemoryRegionOps structure describing the callbacks.

- ROM: a ROM memory region works like RAM for reads (directly accessing
  a region of host memory), but like MMIO for writes (invoking a callback).
  You initialize these with memory_region_init_rom_device().

- IOMMU region: an IOMMU region translates addresses of accesses made to it
  and forwards them to some other target memory region.  As the name suggests,
  these are only needed for modelling an IOMMU, not for simple devices.
  You initialize these with memory_region_init_iommu().

- container: a container simply includes other memory regions, each at
  a different offset.  Containers are useful for grouping several regions
  into one unit.  For example, a PCI BAR may be composed of a RAM region
  and an MMIO region.

  A container's subregions are usually non-overlapping.  In some cases it is
  useful to have overlapping regions; for example a memory controller that
  can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
  that does not prevent cards from claiming overlapping BARs.

  You initialize a pure container with memory_region_init().

- alias: a subsection of another region.  Aliases allow a region to be
  split apart into discontiguous regions.  Examples of uses are memory banks
  used when the guest address space is smaller than the amount of RAM
  addressed, or a memory controller that splits main memory to expose a "PCI
  hole".  Aliases may point to any type of region, including other aliases,
  but an alias may not point back to itself, directly or indirectly.
  You initialize these with memory_region_init_alias().

- reservation region: a reservation region is primarily for debugging.
  It claims I/O space that is not supposed to be handled by QEMU itself.
  The typical use is to track parts of the address space which will be
  handled by the host kernel when KVM is enabled.
  You initialize these with memory_region_init_reservation(), or by
  passing a NULL callback parameter to memory_region_init_io().

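As a sketch of how these constructors fit together, the following fragment
builds the PCI BAR mentioned above: a pure container holding a RAM half and
an MMIO half.  This is not a standalone program; it assumes the QEMU headers,
and the "mydev" names, MyDevState type, and sizes are invented for
illustration.

```c
/* Hypothetical fragment against the QEMU memory API; MyDevState and
 * the "mydev" names are made up for illustration. */
static uint64_t mydev_read(void *opaque, hwaddr addr, unsigned size)
{
    /* return register contents for a guest read at 'addr' */
    return 0;
}

static void mydev_write(void *opaque, hwaddr addr,
                        uint64_t val, unsigned size)
{
    /* react to a guest write at 'addr' */
}

static const MemoryRegionOps mydev_ops = {
    .read = mydev_read,
    .write = mydev_write,
    .endianness = DEVICE_NATIVE_ENDIAN,
};

static void mydev_init_bar(MyDevState *s, Error **errp)
{
    /* a pure container grouping the BAR's two halves */
    memory_region_init(&s->bar, OBJECT(s), "mydev-bar", 0x2000);

    /* RAM backing for the first half */
    memory_region_init_ram(&s->ram, OBJECT(s), "mydev-ram", 0x1000, errp);
    memory_region_add_subregion(&s->bar, 0x0000, &s->ram);

    /* MMIO registers in the second half */
    memory_region_init_io(&s->mmio, OBJECT(s), &mydev_ops, s,
                          "mydev-mmio", 0x1000);
    memory_region_add_subregion(&s->bar, 0x1000, &s->mmio);
}
```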
It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region).  This means that the region
will act like a container, except that any addresses within the container's
region which are not claimed by any subregion are handled by the
container itself (ie by its MMIO callbacks or RAM backing).  However
it is generally possible to achieve the same effect with a pure container
one of whose subregions is a low priority "background" region covering
the whole address range; this is often clearer and is preferred.
Subregions cannot be added to an alias region.

Region names
------------

Regions are assigned names by the constructor.  For most regions these are
only used for debugging purposes, but RAM regions also use the name to
identify live migration sections.  This means that RAM region names need
to have ABI stability.

Region lifecycle
----------------

A region is created by one of the memory_region_init*() functions and
attached to an object, which acts as its owner or parent.  QEMU ensures
that the owner object remains alive as long as the region is visible to
the guest, or as long as the region is in use by a virtual CPU or another
device.  For example, the owner object will not die between an
address_space_map operation and the corresponding address_space_unmap.

After creation, a region can be added to an address space or a
container with memory_region_add_subregion(), and removed using
memory_region_del_subregion().

Various region attributes (read-only, dirty logging, coalesced mmio,
ioeventfd) can be changed during the region lifecycle.  They take effect
as soon as the region is made visible.  This can be immediately, later,
or never.

Destruction of a memory region happens automatically when the owner
object dies.

If however the memory region is part of a dynamically allocated data
structure, you should call object_unparent() to destroy the memory region
before the data structure is freed.  For an example see VFIOMSIXInfo
and VFIOQuirk in hw/vfio/pci.c.

You must not destroy a memory region as long as it may be in use by a
device or CPU.  In order to do this, as a general rule do not create or
destroy memory regions dynamically during a device's lifetime, and only
call object_unparent() in the memory region owner's instance_finalize
callback.  The dynamically allocated data structure that contains the
memory region then should obviously be freed in the instance_finalize
callback as well.

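The instance_finalize pattern above can be sketched as follows.  This is a
fragment against the QEMU object model, not a complete device: the MyDevQuirk
and MyDevState types and the MYDEV() cast macro are invented for illustration,
in the spirit of VFIOQuirk in hw/vfio/pci.c.

```c
/* Hypothetical fragment: a device keeping a dynamically allocated
 * structure that contains a MemoryRegion.  Teardown happens only in
 * the owner's instance_finalize callback. */
typedef struct MyDevQuirk {
    MemoryRegion mem;
    /* ...other per-quirk state... */
} MyDevQuirk;

typedef struct MyDevState {
    /* ...device state... */
    MyDevQuirk *quirk;      /* allocated while the device is realized */
} MyDevState;

static void mydev_instance_finalize(Object *obj)
{
    MyDevState *s = MYDEV(obj);

    /* destroy the region first, then free the structure containing it */
    object_unparent(OBJECT(&s->quirk->mem));
    g_free(s->quirk);
}
```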
If you break this rule, the following situation can happen:

- the memory region's owner had a reference taken via memory_region_ref
  (for example by address_space_map)

- the region is unparented, and has no owner anymore

- when address_space_unmap is called, the reference to the memory region's
  owner is leaked.


There is an exception to the above rule: it is okay to call
object_unparent at any time for an alias or a container region.  It is
therefore also okay to create or destroy alias and container regions
dynamically during a device's lifetime.

This exceptional usage is valid because aliases and containers only help
QEMU building the guest's memory map; they are never accessed directly.
memory_region_ref and memory_region_unref are never called on aliases
or containers, and the above situation then cannot happen.  Exploiting
this exception is rarely necessary, and therefore it is discouraged,
but nevertheless it is used in a few places.

For regions that "have no owner" (NULL is passed at creation time), the
machine object is actually used as the owner.  Since instance_finalize is
never called for the machine object, you must never call object_unparent
on regions that have no owner, unless they are aliases or containers.


Overlapping regions and priority
--------------------------------
Usually, regions may not overlap each other; a memory address decodes into
exactly one target.  In some cases it is useful to allow regions to overlap,
and sometimes to control which of two overlapping regions is visible to the
guest.  This is done with memory_region_add_subregion_overlap(), which
allows the region to overlap any other region in the same container, and
specifies a priority that allows the core to decide which of two regions at
the same address are visible (highest wins).
Priority values are signed, and the default value is zero.  This means that
you can use memory_region_add_subregion_overlap() both to specify a region
that must sit 'above' any others (with a positive priority) and also a
background region that sits 'below' others (with a negative priority).

If the higher priority region in an overlap is a container or alias, then
the lower priority region will appear in any "holes" that the higher priority
region has left by not mapping subregions to that area of its address range.
(This applies recursively -- if the subregions are themselves containers or
aliases that leave holes then the lower priority region will appear in these
holes too.)

For example, suppose we have a container A of size 0x8000 with two subregions
B and C.  B is a container mapped at 0x2000, size 0x4000, priority 2; C is
an MMIO region mapped at 0x0, size 0x6000, priority 1.  B currently has two
of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
offset 0x2000.  As a diagram:

        0      1000   2000   3000   4000   5000   6000   7000   8000
        |------|------|------|------|------|------|------|-------|
  A:    [                                                        ]
  C:    [CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC]
  B:                  [                          ]
  D:                  [DDDDD]
  E:                                [EEEEE]

The regions that will be seen within this address range then are:
[CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]

Since B has higher priority than C, its subregions appear in the flat map
even where they overlap with C.  In ranges where B has not mapped anything
C's region appears.

If B had provided its own MMIO operations (ie it was not a pure container)
then these would be used for any addresses in its range not handled by
D or E, and the result would be:
[CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]

Priority values are local to a container, because the priorities of two
regions are only compared when they are both children of the same container.
This means that the device in charge of the container (typically modelling
a bus or a memory controller) can use them to manage the interaction of
its child regions without any side effects on other parts of the system.
In the example above, the priorities of D and E are unimportant because
they do not overlap each other.  It is the relative priority of B and C
that causes D and E to appear on top of C: D and E's priorities are never
compared against the priority of C.

Visibility
----------
The memory core uses the following rules to select a memory region when the
guest accesses an address:

- all direct subregions of the root region are matched against the address, in
  descending priority order
  - if the address lies outside the region offset/size, the subregion is
    discarded
  - if the subregion is a leaf (RAM or MMIO), the search terminates, returning
    this leaf region
  - if the subregion is a container, the same algorithm is used within the
    subregion (after the address is adjusted by the subregion offset)
  - if the subregion is an alias, the search is continued at the alias target
    (after the address is adjusted by the subregion offset and alias offset)
  - if a recursive search within a container or alias subregion does not
    find a match (because of a "hole" in the container's coverage of its
    address range), then if this is a container with its own MMIO or RAM
    backing the search terminates, returning the container itself.  Otherwise
    we continue with the next subregion in priority order
- if none of the subregions match the address then the search terminates
  with no match found

Example memory map
------------------

system_memory: container@0-2^48-1
 |
 +---- lomem: alias@0-0xdfffffff ---> #ram (0-0xdfffffff)
 |
 +---- himem: alias@0x100000000-0x11fffffff ---> #ram (0xe0000000-0xffffffff)
 |
 +---- vga-window: alias@0xa0000-0xbffff ---> #pci (0xa0000-0xbffff)
 |      (prio 1)
 |
 +---- pci-hole: alias@0xe0000000-0xffffffff ---> #pci (0xe0000000-0xffffffff)

pci (0-2^32-1)
 |
 +--- vga-area: container@0xa0000-0xbffff
 |     |
 |     +--- alias@0x00000-0x7fff ---> #vram (0x010000-0x017fff)
 |     |
 |     +--- alias@0x08000-0xffff ---> #vram (0x020000-0x027fff)
 |
 +---- vram: ram@0xe1000000-0xe1ffffff
 |
 +---- vga-mmio: mmio@0xe2000000-0xe200ffff

ram: ram@0x00000000-0xffffffff

This is a (simplified) PC memory map.  The 4GB RAM block is mapped into the
system address space via two aliases: "lomem" is a 1:1 mapping of the first
3.5GB; "himem" maps the last 0.5GB at address 4GB.  This leaves 0.5GB for the
so-called PCI hole, that allows a 32-bit PCI bus to exist in a system with
4GB of memory.

The memory controller diverts addresses in the range 640K-768K to the PCI
address space.  This is modelled using the "vga-window" alias, mapped at a
higher priority so it obscures the RAM at the same addresses.  The vga window
can be removed by programming the memory controller; this is modelled by
removing the alias and exposing the RAM underneath.

The pci address space is not a direct child of the system address space, since
we only want parts of it to be visible (we accomplish this using aliases).
It has two subregions: vga-area models the legacy vga window and is occupied
by two 32K memory banks pointing at two sections of the framebuffer.
In addition the vram is mapped as a BAR at address e1000000, and an additional
BAR containing MMIO registers is mapped after it.

Note that if the guest maps a BAR outside the PCI hole, it would not be
visible as the pci-hole alias clips it to a 0.5GB range.

MMIO Operations
---------------

MMIO regions are provided with ->read() and ->write() callbacks; in addition
various constraints can be supplied to control how these callbacks are called:

- .valid.min_access_size, .valid.max_access_size define the access sizes
  (in bytes) which the device accepts; accesses outside this range will
  have device and bus specific behaviour (ignored, or machine check)
- .valid.aligned specifies that the device only accepts naturally aligned
  accesses.  Unaligned accesses invoke device and bus specific behaviour.
- .impl.min_access_size, .impl.max_access_size define the access sizes
  (in bytes) supported by the *implementation*; other access sizes will be
  emulated using the ones available.  For example a 4-byte write will be
  emulated using four 1-byte writes, if .impl.max_access_size = 1.
- .impl.unaligned specifies that the *implementation* supports unaligned
  accesses; if false, unaligned accesses will be emulated by two aligned
  accesses.
- .old_mmio can be used to ease porting from code using
  cpu_register_io_memory().  It should not be used in new code.