.. _atomics-ref:

=========================
Atomic operations in QEMU
=========================

CPUs perform independent memory operations effectively in random order,
but this can be a problem for CPU-CPU interaction (including interactions
between QEMU and the guest).  Multi-threaded programs use various tools
to instruct the compiler and the CPU to restrict the order to something
that is consistent with the expectations of the programmer.

The most basic tool is locking.  Mutexes, condition variables and
semaphores are used in QEMU, and should be the default approach to
synchronization.  Anything else is considerably harder, but it's
also justified more often than one would like;
the most performance-critical parts of QEMU in particular require
a very low level approach to concurrency, involving memory barriers
and atomic operations.  The semantics of concurrent memory accesses are governed
by the C11 memory model.

QEMU provides a header, ``qemu/atomic.h``, which wraps C11 atomics to
provide better portability and a less verbose syntax.  ``qemu/atomic.h``
provides macros that fall in three camps:

- compiler barriers: ``barrier()``;

- weak atomic access and manual memory barriers: ``qatomic_read()``,
  ``qatomic_set()``, ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
  ``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``;

- sequentially consistent atomic access: everything else.

In general, use of ``qemu/atomic.h`` should be wrapped with more easily
used data structures (e.g. the lock-free singly-linked list operations
``QSLIST_INSERT_HEAD_ATOMIC`` and ``QSLIST_MOVE_ATOMIC``) or synchronization
primitives (such as RCU, ``QemuEvent`` or ``QemuLockCnt``).  Bare use of
atomic operations and memory barriers should be limited to inter-thread
checking of flags and documented thoroughly.
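
For instance, a "please stop" flag that one thread sets and another thread
polls is a typical case for bare atomics.  A minimal sketch (the names are
illustrative, not taken from the QEMU tree)::

    static int should_quit;

    /* Any thread can ask the worker to stop... */
    void request_stop(void)
    {
        qatomic_set(&should_quit, 1);
    }

    /* ...and the worker polls the flag between work items. */
    void worker(void)
    {
        while (!qatomic_read(&should_quit)) {
            do_work();                   /* hypothetical unit of work */
        }
    }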


Compiler memory barrier
=======================

``barrier()`` prevents the compiler from moving the memory accesses on
either side of it to the other side.  The compiler barrier has no direct
effect on the CPU, which may then reorder things however it wishes.

``barrier()`` is mostly used within ``qemu/atomic.h`` itself.  On some
architectures, CPU guarantees are strong enough that blocking compiler
optimizations already ensures the correct order of execution.  In this
case, ``qemu/atomic.h`` will reduce stronger memory barriers to simple
compiler barriers.

Still, ``barrier()`` can be useful when writing code that can be interrupted
by signal handlers.
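
For example, when a flag is shared with a signal handler that runs on the
same thread, a compiler barrier is enough to order the accesses; no CPU
barrier is needed.  A minimal sketch (``struct request`` and ``process()``
are hypothetical)::

    static struct request *current_req;
    static volatile sig_atomic_t req_pending;

    void submit_request(struct request *req)
    {
        current_req = req;    /* prepare data for the handler... */
        barrier();            /* ...and only then raise the flag */
        req_pending = 1;
    }

    void sigusr1_handler(int sig)
    {
        if (req_pending) {
            process(current_req);
        }
    }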


Sequentially consistent atomic access
=====================================

Most of the operations in the ``qemu/atomic.h`` header ensure *sequential
consistency*, where "the result of any execution is the same as if the
operations of all the processors were executed in some sequential order,
and the operations of each individual processor appear in this sequence
in the order specified by its program".

``qemu/atomic.h`` provides the following set of atomic read-modify-write
operations::

    void qatomic_inc(ptr)
    void qatomic_dec(ptr)
    void qatomic_add(ptr, val)
    void qatomic_sub(ptr, val)
    void qatomic_and(ptr, val)
    void qatomic_or(ptr, val)

    typeof(*ptr) qatomic_fetch_inc(ptr)
    typeof(*ptr) qatomic_fetch_dec(ptr)
    typeof(*ptr) qatomic_fetch_add(ptr, val)
    typeof(*ptr) qatomic_fetch_sub(ptr, val)
    typeof(*ptr) qatomic_fetch_and(ptr, val)
    typeof(*ptr) qatomic_fetch_or(ptr, val)
    typeof(*ptr) qatomic_fetch_xor(ptr, val)
    typeof(*ptr) qatomic_fetch_inc_nonzero(ptr)
    typeof(*ptr) qatomic_xchg(ptr, val)
    typeof(*ptr) qatomic_cmpxchg(ptr, old, new)

The operations with a ``typeof(*ptr)`` return type return the old value
of ``*ptr``.  These operations are polymorphic; they operate on any type
that is as wide as a pointer or smaller.
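
Among these, ``qatomic_cmpxchg`` is the building block for updates that
cannot be expressed as a single arithmetic or logical operation.  A minimal
sketch of a compare-and-swap loop (``s->flags`` and ``IN_USE`` are
hypothetical)::

    uint32_t old, new, seen;

    seen = qatomic_read(&s->flags);
    do {
        old = seen;
        new = old | IN_USE;                      /* compute the update */
        seen = qatomic_cmpxchg(&s->flags, old, new);
    } while (seen != old);                       /* retry on a lost race */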

Similar operations return the new value of ``*ptr``::

    typeof(*ptr) qatomic_inc_fetch(ptr)
    typeof(*ptr) qatomic_dec_fetch(ptr)
    typeof(*ptr) qatomic_add_fetch(ptr, val)
    typeof(*ptr) qatomic_sub_fetch(ptr, val)
    typeof(*ptr) qatomic_and_fetch(ptr, val)
    typeof(*ptr) qatomic_or_fetch(ptr, val)
    typeof(*ptr) qatomic_xor_fetch(ptr, val)
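
The two families differ only in which value they hand back; a reference
count is the classic case where the *new* value matters.  A minimal sketch
(``struct MyObj`` is hypothetical)::

    struct MyObj {
        int refcnt;
        /* ... payload ... */
    };

    void obj_ref(struct MyObj *obj)
    {
        qatomic_inc(&obj->refcnt);               /* no return value needed */
    }

    void obj_unref(struct MyObj *obj)
    {
        if (qatomic_dec_fetch(&obj->refcnt) == 0) {
            g_free(obj);                         /* dropped the last reference */
        }
    }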

``qemu/atomic.h`` also provides loads and stores that cannot be reordered
with each other::

    typeof(*ptr) qatomic_mb_read(ptr)
    void qatomic_mb_set(ptr, val)

However these do not provide sequential consistency and, in particular,
they do not participate in the total ordering enforced by
sequentially-consistent operations.  For this reason they are deprecated.
They should instead be replaced with any of the following (ordered from
easiest to hardest):

- accesses inside a mutex or spinlock

- lightweight synchronization primitives such as ``QemuEvent``

- RCU operations (``qatomic_rcu_read``, ``qatomic_rcu_set``) when publishing
  or accessing a new version of a data structure

- other atomic accesses: ``qatomic_read`` and ``qatomic_load_acquire`` for
  loads, ``qatomic_set`` and ``qatomic_store_release`` for stores, ``smp_mb``
  to forbid reordering subsequent loads before a store; an example of this
  replacement is sketched below.
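
For instance, a flag handshake written with the deprecated macros maps onto
the last option as follows (``my_flag`` and ``peer_flag`` are illustrative)::

    /* Before (deprecated): set a flag, then check the peer's flag.  */
    qatomic_mb_set(&my_flag, 1);
    if (qatomic_mb_read(&peer_flag)) {
        do_something();                  /* hypothetical */
    }

    /* After: a relaxed store, then smp_mb() so that the load below
     * cannot be reordered before the store, then a relaxed load.  */
    qatomic_set(&my_flag, 1);
    smp_mb();
    if (qatomic_read(&peer_flag)) {
        do_something();                  /* hypothetical */
    }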


Weak atomic access and manual memory barriers
=============================================

Compared to sequentially consistent atomic access, programming with
weaker consistency models can be considerably more complicated.
The only guarantees that you can rely upon in this case are:

- atomic accesses will not cause data races (and hence undefined behavior);
  ordinary accesses instead cause data races if they are concurrent with
  other accesses of which at least one is a write.  In order to ensure this,
  the compiler will not optimize accesses out of existence, create unsolicited
  accesses, or perform other similar optimizations.

- acquire operations will appear to happen, with respect to the other
  components of the system, before all the LOAD or STORE operations
  specified afterwards.

- release operations will appear to happen, with respect to the other
  components of the system, after all the LOAD or STORE operations
  specified before.

- release operations will *synchronize with* acquire operations;
  see :ref:`acqrel` for a detailed explanation.

When using this model, variables are accessed with:

- ``qatomic_read()`` and ``qatomic_set()``; these prevent the compiler from
  optimizing accesses out of existence and creating unsolicited
  accesses, but do not otherwise impose any ordering on loads and
  stores: both the compiler and the processor are free to reorder
  them.

- ``qatomic_load_acquire()``, which guarantees the LOAD to appear to
  happen, with respect to the other components of the system,
  before all the LOAD or STORE operations specified afterwards.
  Operations coming before ``qatomic_load_acquire()`` can still be
  reordered after it.

- ``qatomic_store_release()``, which guarantees the STORE to appear to
  happen, with respect to the other components of the system,
  after all the LOAD or STORE operations specified before.
  Operations coming after ``qatomic_store_release()`` can still be
  reordered before it.

Restrictions to the ordering of accesses can also be specified
using the memory barrier macros: ``smp_rmb()``, ``smp_wmb()``, ``smp_mb()``,
``smp_mb_acquire()``, ``smp_mb_release()``, ``smp_read_barrier_depends()``.

Memory barriers control the order of references to shared memory.
They come in six kinds:

- ``smp_rmb()`` guarantees that all the LOAD operations specified before
  the barrier will appear to happen before all the LOAD operations
  specified after the barrier with respect to the other components of
  the system.

  In other words, ``smp_rmb()`` puts a partial ordering on loads, but is not
  required to have any effect on stores.

- ``smp_wmb()`` guarantees that all the STORE operations specified before
  the barrier will appear to happen before all the STORE operations
  specified after the barrier with respect to the other components of
  the system.

  In other words, ``smp_wmb()`` puts a partial ordering on stores, but is not
  required to have any effect on loads.

- ``smp_mb_acquire()`` guarantees that all the LOAD operations specified before
  the barrier will appear to happen before all the LOAD or STORE operations
  specified after the barrier with respect to the other components of
  the system.

- ``smp_mb_release()`` guarantees that all the STORE operations specified *after*
  the barrier will appear to happen after all the LOAD or STORE operations
  specified *before* the barrier with respect to the other components of
  the system.

- ``smp_mb()`` guarantees that all the LOAD and STORE operations specified
  before the barrier will appear to happen before all the LOAD and
  STORE operations specified after the barrier with respect to the other
  components of the system.

  ``smp_mb()`` puts a partial ordering on both loads and stores.  It is
  stronger than both a read and a write memory barrier; it implies both
  ``smp_mb_acquire()`` and ``smp_mb_release()``, but it also prevents STOREs
  coming before the barrier from overtaking LOADs coming after the
  barrier and vice versa.

- ``smp_read_barrier_depends()`` is a weaker kind of read barrier.  On
  most processors, whenever two loads are performed such that the
  second depends on the result of the first (e.g., the first load
  retrieves the address to which the second load will be directed),
  the processor will guarantee that the first LOAD will appear to happen
  before the second with respect to the other components of the system.
  However, this is not always true---for example, it was not true on
  Alpha processors.  Whenever this kind of access happens to shared
  memory (that is not protected by a lock), a read barrier is needed,
  and ``smp_read_barrier_depends()`` can be used instead of ``smp_rmb()``.

  Note that the first load really has to have a _data_ dependency and not
  a control dependency.  If the address for the second load is dependent
  on the first load, but the dependency is through a conditional rather
  than actually loading the address itself, then it's a _control_
  dependency and a full read barrier or better is required.  A sketch of
  the two cases follows this list.
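
Here is a sketch of the two cases (the variable names are illustrative)::

    /* Data dependency: the value loaded from &msg is itself the address
     * used by the second load, so smp_read_barrier_depends() suffices.  */
    p = qatomic_read(&msg);
    smp_read_barrier_depends();
    x = p->data;

    /* Control dependency: the first load only decides *whether* the second
     * load runs, so a full read barrier (or better) is required.  */
    if (qatomic_read(&ready)) {
        smp_rmb();
        x = qatomic_read(&payload);
    }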


Memory barriers and ``qatomic_load_acquire``/``qatomic_store_release`` are
mostly used when a data structure has one thread that is always a writer
and one thread that is always a reader:

+----------------------------------+----------------------------------+
| thread 1                         | thread 2                         |
+==================================+==================================+
| ::                               | ::                               |
|                                  |                                  |
|  qatomic_store_release(&a, x);   |  y = qatomic_load_acquire(&b);   |
|  qatomic_store_release(&b, y);   |  x = qatomic_load_acquire(&a);   |
+----------------------------------+----------------------------------+

In this case, correctness is easy to check for using the "pairing"
trick that is explained below.

Sometimes, a thread is accessing many variables that are otherwise
unrelated to each other (for example because, apart from the current
thread, exactly one other thread will read or write each of these
variables).  In this case, it is possible to "hoist" the barriers
outside a loop.  For example:

+------------------------------------------+----------------------------------+
| before                                   | after                            |
+==========================================+==================================+
| ::                                       | ::                               |
|                                          |                                  |
|  n = 0;                                  |  n = 0;                          |
|  for (i = 0; i < 10; i++)                |  for (i = 0; i < 10; i++)        |
|    n += qatomic_load_acquire(&a[i]);     |    n += qatomic_read(&a[i]);     |
|                                          |  smp_mb_acquire();               |
+------------------------------------------+----------------------------------+
| ::                                       | ::                               |
|                                          |                                  |
|                                          |  smp_mb_release();               |
|  for (i = 0; i < 10; i++)                |  for (i = 0; i < 10; i++)        |
|    qatomic_store_release(&a[i], false);  |    qatomic_set(&a[i], false);    |
+------------------------------------------+----------------------------------+

Splitting a loop can also be useful to reduce the number of barriers:

+------------------------------------------+----------------------------------+
| before                                   | after                            |
+==========================================+==================================+
| ::                                       | ::                               |
|                                          |                                  |
|  n = 0;                                  |  smp_mb_release();               |
|  for (i = 0; i < 10; i++) {              |  for (i = 0; i < 10; i++)        |
|    qatomic_store_release(&a[i], false);  |    qatomic_set(&a[i], false);    |
|    smp_mb();                             |  smp_mb();                       |
|    n += qatomic_read(&b[i]);             |  n = 0;                          |
|  }                                       |  for (i = 0; i < 10; i++)        |
|                                          |    n += qatomic_read(&b[i]);     |
+------------------------------------------+----------------------------------+

In this case, a ``smp_mb_release()`` is also replaced with a (possibly
cheaper, and clearer as well) ``smp_wmb()``:

+------------------------------------------+----------------------------------+
| before                                   | after                            |
+==========================================+==================================+
| ::                                       | ::                               |
|                                          |                                  |
|                                          |  smp_mb_release();               |
|  for (i = 0; i < 10; i++) {              |  for (i = 0; i < 10; i++)        |
|    qatomic_store_release(&a[i], false);  |    qatomic_set(&a[i], false);    |
|    qatomic_store_release(&b[i], false);  |  smp_wmb();                      |
|  }                                       |  for (i = 0; i < 10; i++)        |
|                                          |    qatomic_set(&b[i], false);    |
+------------------------------------------+----------------------------------+


.. _acqrel:

Acquire/release pairing and the *synchronizes-with* relation
------------------------------------------------------------

Atomic operations other than ``qatomic_set()`` and ``qatomic_read()`` have
either *acquire* or *release* semantics [#rmw]_.  This has two effects:

.. [#rmw] Read-modify-write operations can have both---acquire applies to the
   read part, and release to the write.

- within a thread, they are ordered either before subsequent operations
  (for acquire) or after previous operations (for release).

- if a release operation in one thread *synchronizes with* an acquire operation
  in another thread, the ordering constraints propagate from the first to the
  second thread.  That is, everything before the release operation in the
  first thread is guaranteed to *happen before* everything after the
  acquire operation in the second thread.

The concept of acquire and release semantics is not exclusive to atomic
operations; almost all higher-level synchronization primitives also have
acquire or release semantics.  For example:

- ``pthread_mutex_lock`` has acquire semantics, ``pthread_mutex_unlock`` has
  release semantics and synchronizes with a ``pthread_mutex_lock`` for the
  same mutex.

- ``pthread_cond_signal`` and ``pthread_cond_broadcast`` have release semantics;
  ``pthread_cond_wait`` has both release semantics (synchronizing with
  ``pthread_mutex_lock``) and acquire semantics (synchronizing with
  ``pthread_mutex_unlock`` and signaling of the condition variable).

- ``pthread_create`` has release semantics and synchronizes with the start
  of the new thread; ``pthread_join`` has acquire semantics and synchronizes
  with the exiting of the thread.

- ``qemu_event_set`` has release semantics, ``qemu_event_wait`` has
  acquire semantics.
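
As an illustration of the last point, a worker thread can publish a result
with plain stores and let the event provide the ordering.  A minimal sketch
(``compute()`` and ``use()`` are hypothetical; ``done`` is initialized
elsewhere with ``qemu_event_init``)::

    QemuEvent done;
    static int result;

    void *worker(void *opaque)
    {
        result = compute();       /* plain store is enough... */
        qemu_event_set(&done);    /* ...the release is in qemu_event_set */
        return NULL;
    }

    void wait_for_result(void)
    {
        qemu_event_wait(&done);   /* acquire: pairs with qemu_event_set */
        use(result);
    }
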
For example, in the following code there are no atomic accesses, but still
thread 2 is relying on the *synchronizes-with* relation between ``pthread_exit``
(release) and ``pthread_join`` (acquire):

+----------------------+-------------------------------+
| thread 1             | thread 2                      |
+======================+===============================+
| ::                   | ::                            |
|                      |                               |
|  *a = 1;             |                               |
|  pthread_exit(a);    |  pthread_join(thread1, &a);   |
|                      |  x = *a;                      |
+----------------------+-------------------------------+

Synchronization between threads basically descends from this pairing of
a release operation and an acquire operation.  Therefore, atomic operations
other than ``qatomic_set()`` and ``qatomic_read()`` will almost always be
paired with another operation of the opposite kind: an acquire operation
will pair with a release operation and vice versa.  This rule of thumb is
extremely useful; in the case of QEMU, however, note that the other
operation may actually be in a driver that runs in the guest!

``smp_read_barrier_depends()``, ``smp_rmb()``, ``smp_mb_acquire()``,
``qatomic_load_acquire()`` and ``qatomic_rcu_read()`` all count
as acquire operations.  ``smp_wmb()``, ``smp_mb_release()``,
``qatomic_store_release()`` and ``qatomic_rcu_set()`` all count as release
operations.  ``smp_mb()`` counts as both acquire and release, therefore
it can pair with any other atomic operation.  Here is an example:

+----------------------+------------------------------+
| thread 1             | thread 2                     |
+======================+==============================+
| ::                   | ::                           |
|                      |                              |
|  qatomic_set(&a, 1); |                              |
|  smp_wmb();          |                              |
|  qatomic_set(&b, 2); |  x = qatomic_read(&b);       |
|                      |  smp_rmb();                  |
|                      |  y = qatomic_read(&a);       |
+----------------------+------------------------------+

Note that a store and a load only form a synchronizing pair if they access
the same variable: that is, a store-release on a variable ``x`` *synchronizes
with* a load-acquire on a variable ``x``, while a release barrier
synchronizes with any acquire operation.  The following example shows
correct synchronization:

+--------------------------------+--------------------------------+
| thread 1                       | thread 2                       |
+================================+================================+
| ::                             | ::                             |
|                                |                                |
|  qatomic_set(&a, 1);           |                                |
|  qatomic_store_release(&b, 2); |  x = qatomic_load_acquire(&b); |
|                                |  y = qatomic_read(&a);         |
+--------------------------------+--------------------------------+

Acquire and release semantics of higher-level primitives can also be
relied upon for the purpose of establishing the *synchronizes with*
relation.

Note that the "writing" thread is accessing the variables in the
opposite order to the "reading" thread.  This is expected: stores
before a release operation will normally match the loads after
the acquire operation, and vice versa.  In fact, this happened already
in the ``pthread_exit``/``pthread_join`` example above.

Finally, this more complex example has more than two accesses and data
dependency barriers.  It also does not use atomic accesses whenever there
cannot be a data race:

+----------------------+------------------------------+
| thread 1             | thread 2                     |
+======================+==============================+
| ::                   | ::                           |
|                      |                              |
|  b[2] = 1;           |                              |
|  smp_wmb();          |                              |
|  x->i = 2;           |                              |
|  smp_wmb();          |                              |
|  qatomic_set(&a, x); |  x = qatomic_read(&a);       |
|                      |  smp_read_barrier_depends(); |
|                      |  y = x->i;                   |
|                      |  smp_read_barrier_depends(); |
|                      |  z = b[y];                   |
+----------------------+------------------------------+


Comparison with Linux kernel primitives
=======================================

Here is a list of differences between Linux kernel atomic operations
and memory barriers, and the equivalents in QEMU:

- atomic operations in Linux are always on a 32-bit int type and
  use a boxed ``atomic_t`` type; atomic operations in QEMU are polymorphic
  and use normal C types.

- Originally, ``atomic_read`` and ``atomic_set`` in Linux gave no guarantee
  at all.  Linux 4.1 updated them to implement volatile semantics via
  ``ACCESS_ONCE`` (or the more recent ``READ_ONCE``/``WRITE_ONCE``).
446
447 QEMU's ``qatomic_read`` and ``qatomic_set`` implement C11 atomic relaxed
448 semantics if the compiler supports it, and volatile semantics otherwise.
449 Both semantics prevent the compiler from doing certain transformations;
450 the difference is that atomic accesses are guaranteed to be atomic,
451 while volatile accesses aren't. Thus, in the volatile case we just cross
452 our fingers hoping that the compiler will generate atomic accesses,
453 since we assume the variables passed are machine-word sized and
454 properly aligned.
455
456 No barriers are implied by ``qatomic_read`` and ``qatomic_set`` in either
457 Linux or QEMU.
458
459 - atomic read-modify-write operations in Linux are of three kinds:
460
461 ===================== =========================================
462 ``atomic_OP`` returns void
463 ``atomic_OP_return`` returns new value of the variable
464 ``atomic_fetch_OP`` returns the old value of the variable
465 ``atomic_cmpxchg`` returns the old value of the variable
466 ===================== =========================================

  In QEMU, the second kind is named ``qatomic_OP_fetch``.

- different atomic read-modify-write operations in Linux imply
  a different set of memory barriers; in QEMU, all of them enforce
  sequential consistency.

- in QEMU, ``qatomic_read()`` and ``qatomic_set()`` do not participate in
  the total ordering enforced by sequentially-consistent operations.
  This is because QEMU uses the C11 memory model.  The following example
  is correct in Linux but not in QEMU:

  +----------------------------------+--------------------------------+
  | Linux (correct)                  | QEMU (incorrect)               |
  +==================================+================================+
  | ::                               | ::                             |
  |                                  |                                |
  |  a = atomic_fetch_add(&x, 2);    |  a = qatomic_fetch_add(&x, 2); |
  |  b = READ_ONCE(&y);              |  b = qatomic_read(&y);         |
  +----------------------------------+--------------------------------+

  because the read of ``y`` can be moved (by either the processor or the
  compiler) before the write of ``x``.

  Fixing this requires an ``smp_mb()`` memory barrier between the write
  of ``x`` and the read of ``y``.  In the common case where only one thread
  writes ``x``, it is also possible to write it like this:

  +--------------------------------+
  | QEMU (correct)                 |
  +================================+
  | ::                             |
  |                                |
  |  a = qatomic_read(&x);         |
  |  qatomic_set(&x, a + 2);       |
  |  smp_mb();                     |
  |  b = qatomic_read(&y);         |
  +--------------------------------+


Sources
=======

- ``Documentation/memory-barriers.txt`` from the Linux kernel