Copyright (c) 2017 Linaro Limited
Written by Peter Maydell
QEMU internally has multiple families of functions for performing
loads and stores. This document attempts to enumerate them all
and indicate when to use them. It does not provide detailed
documentation of each API -- for that you should look at the
documentation comments in the relevant header files.
``{ld,st}*_p``
~~~~~~~~~~~~~~

These functions operate on a host pointer, and should be used
when you already have a pointer into host memory (corresponding
to guest ram or a local buffer). They deal with doing accesses
with the desired endianness and with correctly handling
potentially unaligned pointer values.
Function names follow the pattern:

load: ``ld{sign}{size}_{endian}_p(ptr)``

store: ``st{size}_{endian}_p(ptr, val)``

- (empty) : for 32 or 64 bit sizes

- ``he`` : host endian
- ``le`` : little endian

The ``_{endian}`` infix is omitted for target-endian accesses.

The target endian accessors are only available to source
files which are built per-target.
There are also functions which take the size as an argument:

load: ``ldn{endian}_p(ptr, sz)``

which performs an unsigned load of ``sz`` bytes from ``ptr``
as an ``{endian}`` order value and returns it in a ``uint64_t``.

store: ``stn{endian}_p(ptr, sz, val)``

which stores ``val`` to ``ptr`` as an ``{endian}`` order value
of size ``sz`` bytes.
Regexes for git grep:
- ``\<ld[us]\?[bwlq]\(_[hbl]e\)\?_p\>``
- ``\<st[bwlq]\(_[hbl]e\)\?_p\>``
- ``\<ldn_\([hbl]e\)\?_p\>``
- ``\<stn_\([hbl]e\)\?_p\>``
``cpu_{ld,st}*_mmu``
~~~~~~~~~~~~~~~~~~~~

These functions operate on a guest virtual address, plus a context
known as a "mmu index" which controls how that virtual address is
translated, plus a ``MemOp`` which contains alignment requirements
among other things. The ``MemOp`` and mmu index are combined into
a single argument of type ``MemOpIdx``.

The meanings of the indexes are target specific, but specifying a
particular index might be necessary if, for instance, the helper
requires an "always as non-privileged" access rather than the
default access for the current state of the guest CPU.
These functions may cause a guest CPU exception to be taken
(e.g. for an alignment fault or MMU fault) which will result in
guest CPU state being updated and control longjmp'ing out of the
function call. They should therefore only be used in code that is
implementing emulation of the guest CPU.

The ``retaddr`` parameter is used to control unwinding of the
guest CPU state in case of a guest CPU exception. This is passed
to ``cpu_restore_state()``. Therefore the value should either be 0,
to indicate that the guest CPU state is already synchronized, or
the result of ``GETPC()`` from the top level ``HELPER(foo)``
function, which is a return address into the generated code [#gpc]_.
.. [#gpc] Note that ``GETPC()`` should be used with great care: calling
   it in other functions that are *not* the top level
   ``HELPER(foo)`` will cause unexpected behavior. Instead, the
   value of ``GETPC()`` should be read from the helper and passed
   if needed to the functions that the helper calls.
Function names follow the pattern:

load: ``cpu_ld{size}{end}_mmu(env, ptr, oi, retaddr)``

store: ``cpu_st{size}{end}_mmu(env, ptr, val, oi, retaddr)``

- (empty) : for target endian, or 8 bit sizes
- ``_be`` : big endian
- ``_le`` : little endian
Regexes for git grep:
- ``\<cpu_ld[bwlq]\(_[bl]e\)\?_mmu\>``
- ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmu\>``
``cpu_{ld,st}*_mmuidx_ra``
~~~~~~~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_mmu`` functions except
that the ``mmuidx`` parameter is not combined with a ``MemOp``,
and therefore there is no required alignment supplied or enforced.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_mmuidx_ra(env, ptr, mmuidx, retaddr)``

store: ``cpu_st{size}{end}_mmuidx_ra(env, ptr, val, mmuidx, retaddr)``

- (empty) : for 32 or 64 bit sizes

- (empty) : for target endian, or 8 bit sizes
- ``_be`` : big endian
- ``_le`` : little endian
Regexes for git grep:
- ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
- ``\<cpu_st[bwlq]\(_[bl]e\)\?_mmuidx_ra\>``
``cpu_{ld,st}*_data_ra``
~~~~~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_mmuidx_ra`` functions
except that the ``mmuidx`` parameter is taken from the current mode
of the guest CPU, as determined by ``cpu_mmu_index(env, false)``.

These are generally the preferred way to do accesses by guest
virtual address from helper functions, unless the access should
be performed with a context other than the default, or alignment
should be enforced for the access.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data_ra(env, ptr, ra)``

store: ``cpu_st{size}{end}_data_ra(env, ptr, val, ra)``

- (empty) : for 32 or 64 bit sizes

- (empty) : for target endian, or 8 bit sizes
- ``_be`` : big endian
- ``_le`` : little endian
Regexes for git grep:
- ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data_ra\>``
- ``\<cpu_st[bwlq]\(_[bl]e\)\?_data_ra\>``
``cpu_{ld,st}*_data``
~~~~~~~~~~~~~~~~~~~~~

These functions work like the ``cpu_{ld,st}_data_ra`` functions
except that the ``retaddr`` parameter is 0, and thus they do not
unwind guest CPU state.

This means they must only be used from helper functions where the
translator has saved all necessary CPU state. These functions are
the right choice for calls made from hooks like the CPU ``do_interrupt``
hook or when you know for certain that the translator had to save all
the CPU state anyway.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}{end}_data(env, ptr)``

store: ``cpu_st{size}{end}_data(env, ptr, val)``

- (empty) : for 32 or 64 bit sizes

- (empty) : for target endian, or 8 bit sizes
- ``_be`` : big endian
- ``_le`` : little endian
Regexes for git grep:
- ``\<cpu_ld[us]\?[bwlq]\(_[bl]e\)\?_data\>``
- ``\<cpu_st[bwlq]\(_[bl]e\)\?_data\>``
``cpu_ld*_code``
~~~~~~~~~~~~~~~~

These functions perform a read for instruction execution. The ``mmuidx``
parameter is taken from the current mode of the guest CPU, as determined
by ``cpu_mmu_index(env, true)``. The ``retaddr`` parameter is 0, and
thus does not unwind guest CPU state, because CPU state is always
synchronized while translating instructions. Any guest CPU exception
that is raised will indicate an instruction execution fault rather than
a data access fault.

In general these functions should not be used directly during translation.
There are wrapper functions that are to be used which also take care of
plugins and tracing.
Function names follow the pattern:

load: ``cpu_ld{sign}{size}_code(env, ptr)``

- (empty) : for 32 or 64 bit sizes

Regexes for git grep:
- ``\<cpu_ld[us]\?[bwlq]_code\>``
``translator_ld*``
~~~~~~~~~~~~~~~~~~

These functions are a wrapper for ``cpu_ld*_code`` which also perform
any actions required by any tracing plugins. They are only to be
called during the translator callback ``translate_insn``.

There is a set of functions ending in ``_swap`` which, if the parameter
is true, return the value in the endianness that is the reverse of
the guest native endianness, as determined by ``TARGET_BIG_ENDIAN``.
Function names follow the pattern:

load: ``translator_ld{sign}{size}(env, ptr)``

swap: ``translator_ld{sign}{size}_swap(env, ptr, swap)``

- (empty) : for 32 or 64 bit sizes

Regexes for git grep:
- ``\<translator_ld[us]\?[bwlq]\(_swap\)\?\>``
``helper_{ld,st}*_mmu``
~~~~~~~~~~~~~~~~~~~~~~~

These functions are intended primarily to be called by the code
generated by the TCG backend. Like the ``cpu_{ld,st}_mmu`` functions
they perform accesses by guest virtual address, with a given ``MemOpIdx``.

They differ from ``cpu_{ld,st}_mmu`` in that they take the endianness
of the operation only from the ``MemOpIdx``, and loads extend the return
value to the size of a host general register (``tcg_target_ulong``).

load: ``helper_ld{sign}{size}_mmu(env, addr, opindex, retaddr)``

store: ``helper_st{size}_mmu(env, addr, val, opindex, retaddr)``
- (empty) : for 32 or 64 bit sizes

Regexes for git grep:
- ``\<helper_ld[us]\?[bwlq]_mmu\>``
- ``\<helper_st[bwlq]_mmu\>``
``address_space_*``
~~~~~~~~~~~~~~~~~~~

These functions are the primary ones to use when emulating CPU
or device memory accesses. They take an AddressSpace, which is the
way QEMU defines the view of memory that a device or CPU has.
(They generally correspond to being the "master" end of a hardware bus
or bus fabric.)

Each CPU has an AddressSpace. Some kinds of CPU have more than
one AddressSpace (for instance Arm guest CPUs have an AddressSpace
for the Secure world and one for NonSecure if they implement TrustZone).
Devices which can do DMA-type operations should generally have an
AddressSpace. There is also a "system address space" which typically
has all the devices and memory that all CPUs can see. (Some older
device models use the "system address space" rather than properly
modelling that they have an AddressSpace of their own.)
Functions are provided for doing byte-buffer reads and writes,
and also for doing one-data-item loads and stores.

In all cases the caller provides a ``MemTxAttrs`` to specify bus
transaction attributes, and can check whether the memory transaction
succeeded using a ``MemTxResult`` return code.

``address_space_read(address_space, addr, attrs, buf, len)``

``address_space_write(address_space, addr, attrs, buf, len)``

``address_space_rw(address_space, addr, attrs, buf, len, is_write)``

``address_space_ld{sign}{size}_{endian}(address_space, addr, attrs, txresult)``

``address_space_st{size}_{endian}(address_space, addr, val, attrs, txresult)``
- (empty) : for 32 or 64 bit sizes

(No signed load operations are provided.)

- ``le`` : little endian
- ``be`` : big endian

The ``_{endian}`` suffix is omitted for byte accesses.
Regexes for git grep:
- ``\<address_space_\(read\|write\|rw\)\>``
- ``\<address_space_ldu\?[bwql]\(_[lb]e\)\?\>``
- ``\<address_space_st[bwql]\(_[lb]e\)\?\>``
``address_space_write_rom``
~~~~~~~~~~~~~~~~~~~~~~~~~~~

This function performs a write by physical address like
``address_space_write``, except that if the write is to a ROM then
the ROM contents will be modified, even though a write by the guest
CPU to the ROM would be ignored. This is used for non-guest writes
like writes from the gdb debug stub or initial loading of ROM contents.

Note that portions of the write which attempt to write data to a
device will be silently ignored -- only real RAM and ROM will
be written to.

Regexes for git grep:
- ``address_space_write_rom``
``{ld,st}*_phys``
~~~~~~~~~~~~~~~~~

These are functions which are identical to
``address_space_{ld,st}*``, except that they always pass
``MEMTXATTRS_UNSPECIFIED`` for the transaction attributes, and ignore
whether the transaction succeeded or failed.

The fact that they ignore whether the transaction succeeded means
they should not be used in new code, unless you know for certain
that your code will only be used in a context where the CPU or
device doing the access has no way to report such an error.

load: ``ld{sign}{size}_{endian}_phys``

store: ``st{size}_{endian}_phys``

- (empty) : for 32 or 64 bit sizes

(No signed load operations are provided.)

- ``le`` : little endian
- ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.

Regexes for git grep:
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_phys\>``
- ``\<st[bwlq]\(_[bl]e\)\?_phys\>``
``cpu_physical_memory_*``
~~~~~~~~~~~~~~~~~~~~~~~~~

These are convenience functions which are identical to
``address_space_*`` but operate specifically on the system address space,
always pass a ``MEMTXATTRS_UNSPECIFIED`` set of memory attributes and
ignore whether the memory transaction succeeded or failed.
For new code they are better avoided:

* there is likely to be behaviour you need to model correctly for a
  failed read or write operation
* a device should usually perform operations on its own AddressSpace
  rather than using the system address space

``cpu_physical_memory_read``

``cpu_physical_memory_write``

``cpu_physical_memory_rw``

Regexes for git grep:
- ``\<cpu_physical_memory_\(read\|write\|rw\)\>``
``cpu_memory_rw_debug``
~~~~~~~~~~~~~~~~~~~~~~~

Access CPU memory by virtual address for debug purposes.

This function is intended for use by the GDB stub and similar code.
It takes a virtual address, converts it to a physical address via
an MMU lookup using the current settings of the specified CPU,
and then performs the access (using ``address_space_rw`` for
reads or ``address_space_write_rom`` for writes).
This means that if the access is a write to a ROM then this
function will modify the contents (whereas a normal guest CPU access
would ignore the write attempt).

Regexes for git grep:
- ``cpu_memory_rw_debug``
``dma_memory_*``
~~~~~~~~~~~~~~~~

These behave like ``address_space_*``, except that they perform a DMA
barrier operation first.

**TODO**: We should provide guidance on when you need the DMA
barrier operation and when it's OK to use ``address_space_*``, and
make sure our existing code is doing things correctly.

Regexes for git grep:
- ``\<dma_memory_\(read\|write\|rw\)\>``
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_dma\>``
- ``\<st[bwlq]\(_[bl]e\)\?_dma\>``
``pci_dma_*`` and ``{ld,st}*_pci_dma``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

These functions are specifically for PCI device models which need to
perform accesses where the PCI device is a bus master. You pass them a
``PCIDevice *`` and they will do ``dma_memory_*`` operations on the
correct address space for that device.
load: ``ld{sign}{size}_{endian}_pci_dma``

store: ``st{size}_{endian}_pci_dma``

- (empty) : for 32 or 64 bit sizes

(No signed load operations are provided.)

- ``le`` : little endian
- ``be`` : big endian

The ``_{endian}_`` infix is omitted for byte accesses.

Regexes for git grep:
- ``\<pci_dma_\(read\|write\|rw\)\>``
- ``\<ldu\?[bwlq]\(_[bl]e\)\?_pci_dma\>``
- ``\<st[bwlq]\(_[bl]e\)\?_pci_dma\>``