The memory API
==============

The memory API models the memory and I/O buses and controllers of a QEMU
machine. It attempts to allow modelling of:

 - ordinary RAM
 - memory-mapped I/O (MMIO)
 - memory controllers that can dynamically reroute physical memory regions
   to different destinations

The memory model provides support for

 - tracking RAM changes by the guest
 - setting up coalesced memory for kvm
 - setting up ioeventfd regions for kvm

Memory is modelled as an acyclic graph of MemoryRegion objects. Sinks
(leaves) are RAM and MMIO regions, while other nodes represent
buses, memory controllers, and memory regions that have been rerouted.

In addition to MemoryRegion objects, the memory API provides AddressSpace
objects for every root and possibly for intermediate MemoryRegions too.
These represent memory as seen from the CPU or a device's viewpoint.

Types of regions
----------------

There are multiple types of memory regions (all represented by a single C type
MemoryRegion):

- RAM: a RAM region is simply a range of host memory that can be made available
  to the guest.
  You typically initialize these with memory_region_init_ram(). Some special
  purposes require the variants memory_region_init_resizeable_ram(),
  memory_region_init_ram_from_file(), or memory_region_init_ram_ptr().

- MMIO: a range of guest memory that is implemented by host callbacks;
  each read or write causes a callback to be called on the host.
  You initialize these with memory_region_init_io(), passing it a
  MemoryRegionOps structure describing the callbacks.

- ROM: a ROM memory region works like RAM for reads (directly accessing
  a region of host memory), and forbids writes. You initialize these with
  memory_region_init_rom().

- ROM device: a ROM device memory region works like RAM for reads
  (directly accessing a region of host memory), but like MMIO for
  writes (invoking a callback). You initialize these with
  memory_region_init_rom_device().

- IOMMU region: an IOMMU region translates addresses of accesses made to it
  and forwards them to some other target memory region. As the name suggests,
  these are only needed for modelling an IOMMU, not for simple devices.
  You initialize these with memory_region_init_iommu().

- container: a container simply includes other memory regions, each at
  a different offset. Containers are useful for grouping several regions
  into one unit. For example, a PCI BAR may be composed of a RAM region
  and an MMIO region.

  A container's subregions are usually non-overlapping. In some cases it is
  useful to have overlapping regions; for example a memory controller that
  can overlay a subregion of RAM with MMIO or ROM, or a PCI controller
  that does not prevent cards from claiming overlapping BARs.

  You initialize a pure container with memory_region_init().

- alias: a subsection of another region. Aliases allow a region to be
  split apart into discontiguous regions. Examples of uses are memory banks
  used when the guest address space is smaller than the amount of RAM
  addressed, or a memory controller that splits main memory to expose a "PCI
  hole". Aliases may point to any type of region, including other aliases,
  but an alias may not point back to itself, directly or indirectly.
  You initialize these with memory_region_init_alias().

- reservation region: a reservation region is primarily for debugging.
  It claims I/O space that is not supposed to be handled by QEMU itself.
  The typical use is to track parts of the address space which will be
  handled by the host kernel when KVM is enabled.
  You initialize these with memory_region_init_reservation(), or by
  passing a NULL callback parameter to memory_region_init_io().
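
As a sketch, a device might create a few of these region types like this
(the function names are the real API, but the device, region names, and
sizes are illustrative only; this fragment assumes QEMU's internal
headers and an enclosing device 'dev', so it is not standalone code):

```c
/* Illustrative fragment -- builds only inside the QEMU tree. */
MemoryRegion ram, mmio, window;

/* A 64KB RAM region backed by host memory. */
memory_region_init_ram(&ram, OBJECT(dev), "mydev.ram", 0x10000,
                       &error_fatal);

/* An MMIO region whose accesses invoke the callbacks in mydev_ops. */
memory_region_init_io(&mmio, OBJECT(dev), &mydev_ops, dev,
                      "mydev.mmio", 0x1000);

/* An alias exposing only the first 4KB of the RAM region. */
memory_region_init_alias(&window, OBJECT(dev), "mydev.ram-window",
                         &ram, 0, 0x1000);
```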

It is valid to add subregions to a region which is not a pure container
(that is, to an MMIO, RAM or ROM region). This means that the region
will act like a container, except that any addresses within the container's
region which are not claimed by any subregion are handled by the
container itself (ie by its MMIO callbacks or RAM backing). However
it is generally possible to achieve the same effect with a pure container
one of whose subregions is a low priority "background" region covering
the whole address range; this is often clearer and is preferred.
Subregions cannot be added to an alias region.

Region names
------------

Regions are assigned names by the constructor. For most regions these are
only used for debugging purposes, but RAM regions also use the name to identify
live migration sections. This means that RAM region names need to have ABI
stability.

Region lifecycle
----------------

A region is created by one of the memory_region_init*() functions and
attached to an object, which acts as its owner or parent. QEMU ensures
that the owner object remains alive as long as the region is visible to
the guest, or as long as the region is in use by a virtual CPU or another
device. For example, the owner object will not die between an
address_space_map operation and the corresponding address_space_unmap.

After creation, a region can be added to an address space or a
container with memory_region_add_subregion(), and removed using
memory_region_del_subregion().

Various region attributes (read-only, dirty logging, coalesced mmio,
ioeventfd) can be changed during the region lifecycle. They take effect
as soon as the region is made visible. This can be immediately, later,
or never.

Destruction of a memory region happens automatically when the owner
object dies.

If however the memory region is part of a dynamically allocated data
structure, you should call object_unparent() to destroy the memory region
before the data structure is freed. For an example see VFIOMSIXInfo
and VFIOQuirk in hw/vfio/pci.c.

You must not destroy a memory region as long as it may be in use by a
device or CPU. To ensure this, as a general rule do not create or
destroy memory regions dynamically during a device's lifetime, and only
call object_unparent() in the memory region owner's instance_finalize
callback. The dynamically allocated data structure that contains the
memory region should then obviously be freed in the instance_finalize
callback as well.
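
This rule can be sketched as follows (MyDevState and its 'quirk' field
are hypothetical names for illustration; the pattern follows the
hw/vfio/pci.c example cited above):

```c
/* Illustrative fragment, assuming QEMU's QOM headers. */
static void mydev_instance_finalize(Object *obj)
{
    MyDevState *s = MYDEV(obj);

    /* Destroy the region first, then free the structure containing it. */
    object_unparent(OBJECT(&s->quirk->mem));
    g_free(s->quirk);
}
```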

If you break this rule, the following situation can happen:

- the memory region's owner had a reference taken via memory_region_ref
  (for example by address_space_map)

- the region is unparented, and has no owner anymore

- when address_space_unmap is called, the reference to the memory region's
  owner is leaked.


There is an exception to the above rule: it is okay to call
object_unparent at any time for an alias or a container region. It is
therefore also okay to create or destroy alias and container regions
dynamically during a device's lifetime.

This exceptional usage is valid because aliases and containers only help
QEMU build the guest's memory map; they are never accessed directly.
memory_region_ref and memory_region_unref are never called on aliases
or containers, so the above situation cannot happen. Exploiting
this exception is rarely necessary, and therefore it is discouraged,
but nevertheless it is used in a few places.

For regions that "have no owner" (NULL is passed at creation time), the
machine object is actually used as the owner. Since instance_finalize is
never called for the machine object, you must never call object_unparent
on regions that have no owner, unless they are aliases or containers.


Overlapping regions and priority
--------------------------------
Usually, regions may not overlap each other; a memory address decodes into
exactly one target. In some cases it is useful to allow regions to overlap,
and sometimes to control which of the overlapping regions is visible to the
guest. This is done with memory_region_add_subregion_overlap(), which
allows the region to overlap any other region in the same container, and
specifies a priority that allows the core to decide which of two regions at
the same address are visible (highest wins).
Priority values are signed, and the default value is zero. This means that
you can use memory_region_add_subregion_overlap() both to specify a region
that must sit 'above' any others (with a positive priority) and also a
background region that sits 'below' others (with a negative priority).
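
For instance (an illustrative fragment assuming already-initialized
regions; the names 'sysmem', 'ram', 'flash' and 'background' are
hypothetical):

```c
/* A catch-all region below everything, RAM at default priority 0,
 * and a flash region that obscures the RAM it overlaps. */
memory_region_add_subregion_overlap(sysmem, 0x0, &background, -1);
memory_region_add_subregion(sysmem, 0x0, &ram);
memory_region_add_subregion_overlap(sysmem, 0x0, &flash, 1);
```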

If the higher priority region in an overlap is a container or alias, then
the lower priority region will appear in any "holes" that the higher priority
region has left by not mapping subregions to that area of its address range.
(This applies recursively -- if the subregions are themselves containers or
aliases that leave holes then the lower priority region will appear in these
holes too.)

For example, suppose we have a container A of size 0x8000 with two subregions
B and C. B is a container mapped at 0x2000, size 0x4000, priority 2; C is
an MMIO region mapped at 0x0, size 0x6000, priority 1. B currently has two
of its own subregions: D of size 0x1000 at offset 0 and E of size 0x1000 at
offset 0x2000. As a diagram:

     0      1000   2000   3000   4000   5000   6000   7000   8000
     |------|------|------|------|------|------|------|------|
 A:  [                                                       ]
 C:  [CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC]
 B:                [                           ]
 D:                [DDDDDD]
 E:                              [EEEEEE]

The regions that will be seen within this address range then are:
  [CCCCCCCCCCCC][DDDDD][CCCCC][EEEEE][CCCCC]

Since B has higher priority than C, its subregions appear in the flat map
even where they overlap with C. In ranges where B has not mapped anything,
C's region appears.

If B had provided its own MMIO operations (ie it was not a pure container)
then these would be used for any addresses in its range not handled by
D or E, and the result would be:
  [CCCCCCCCCCCC][DDDDD][BBBBB][EEEEE][BBBBB]

Priority values are local to a container, because the priorities of two
regions are only compared when they are both children of the same container.
This means that the device in charge of the container (typically modelling
a bus or a memory controller) can use them to manage the interaction of
its child regions without any side effects on other parts of the system.
In the example above, the priorities of D and E are unimportant because
they do not overlap each other. It is the relative priority of B and C
that causes D and E to appear on top of C: D and E's priorities are never
compared against the priority of C.

Visibility
----------
The memory core uses the following rules to select a memory region when the
guest accesses an address:

- all direct subregions of the root region are matched against the address, in
  descending priority order
  - if the address lies outside the region offset/size, the subregion is
    discarded
  - if the subregion is a leaf (RAM or MMIO), the search terminates, returning
    this leaf region
  - if the subregion is a container, the same algorithm is used within the
    subregion (after the address is adjusted by the subregion offset)
  - if the subregion is an alias, the search is continued at the alias target
    (after the address is adjusted by the subregion offset and alias offset)
  - if a recursive search within a container or alias subregion does not
    find a match (because of a "hole" in the container's coverage of its
    address range), then if this is a container with its own MMIO or RAM
    backing the search terminates, returning the container itself. Otherwise
    we continue with the next subregion in priority order
- if none of the subregions match the address then the search terminates
  with no match found

Example memory map
------------------

system_memory: container@0-2^48-1
 |
 +---- lomem: alias@0-0xdfffffff ---> #ram (0-0xdfffffff)
 |
 +---- himem: alias@0x100000000-0x11fffffff ---> #ram (0xe0000000-0xffffffff)
 |
 +---- vga-window: alias@0xa0000-0xbffff ---> #pci (0xa0000-0xbffff)
 |      (prio 1)
 |
 +---- pci-hole: alias@0xe0000000-0xffffffff ---> #pci (0xe0000000-0xffffffff)

pci (0-2^32-1)
 |
 +--- vga-area: container@0xa0000-0xbffff
 |     |
 |     +--- alias@0x00000-0x7fff ---> #vram (0x010000-0x017fff)
 |     |
 |     +--- alias@0x08000-0xffff ---> #vram (0x020000-0x027fff)
 |
 +---- vram: ram@0xe1000000-0xe1ffffff
 |
 +---- vga-mmio: mmio@0xe2000000-0xe200ffff

ram: ram@0x00000000-0xffffffff

This is a (simplified) PC memory map. The 4GB RAM block is mapped into the
system address space via two aliases: "lomem" is a 1:1 mapping of the first
3.5GB; "himem" maps the last 0.5GB at address 4GB. This leaves 0.5GB for the
so-called PCI hole, which allows a 32-bit PCI bus to exist in a system with
4GB of memory.

The memory controller diverts addresses in the range 640K-768K to the PCI
address space. This is modelled using the "vga-window" alias, mapped at a
higher priority so it obscures the RAM at the same addresses. The vga window
can be removed by programming the memory controller; this is modelled by
removing the alias and exposing the RAM underneath.

The pci address space is not a direct child of the system address space, since
we only want parts of it to be visible (we accomplish this using aliases).
It has two subregions: vga-area models the legacy vga window and is occupied
by two 32K memory banks pointing at two sections of the framebuffer.
In addition the vram is mapped as a BAR at address e1000000, and an additional
BAR containing MMIO registers is mapped after it.

Note that if the guest maps a BAR outside the PCI hole, it would not be
visible, as the pci-hole alias clips it to a 0.5GB range.
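
The lomem/himem/vga-window aliases above could be built along these lines
(real API names, but an illustrative fragment that assumes QEMU's
internals and already-initialized 'ram', 'sysmem' and 'pci' regions):

```c
/* Illustrative fragment. 'ram' is the 4GB RAM block, 'sysmem' the
 * system_memory container, 'pci' the PCI address space root. */
memory_region_init_alias(&lomem, NULL, "lomem", ram, 0, 0xe0000000);
memory_region_add_subregion(sysmem, 0, &lomem);

memory_region_init_alias(&himem, NULL, "himem",
                         ram, 0xe0000000, 0x20000000);
memory_region_add_subregion(sysmem, 0x100000000ULL, &himem);

/* The vga-window overlays lomem's RAM, so it needs a priority. */
memory_region_init_alias(&vga_window, NULL, "vga-window",
                         pci, 0xa0000, 0x20000);
memory_region_add_subregion_overlap(sysmem, 0xa0000, &vga_window, 1);
```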

MMIO Operations
---------------

MMIO regions are provided with ->read() and ->write() callbacks; in addition
various constraints can be supplied to control how these callbacks are called:

 - .valid.min_access_size, .valid.max_access_size define the access sizes
   (in bytes) which the device accepts; accesses outside this range will
   have device and bus specific behaviour (ignored, or machine check)
 - .valid.unaligned specifies that the *device being modelled* supports
   unaligned accesses; if false, unaligned accesses will invoke the
   appropriate bus or CPU specific behaviour.
 - .impl.min_access_size, .impl.max_access_size define the access sizes
   (in bytes) supported by the *implementation*; other access sizes will be
   emulated using the ones available. For example a 4-byte write will be
   emulated using four 1-byte writes, if .impl.max_access_size = 1.
 - .impl.unaligned specifies that the *implementation* supports unaligned
   accesses; if false, unaligned accesses will be emulated by two aligned
   accesses.
 - .old_mmio eases the porting of code that was formerly using
   cpu_register_io_memory(). It should not be used in new code.