[/
 / Copyright (c) 2009 Helge Bahmann
 / Copyright (c) 2014 Andrey Semashev
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]

[library Boost.Atomic
    [quickbook 1.4]
    [authors [Bahmann, Helge][Semashev, Andrey]]
    [copyright 2011 Helge Bahmann]
    [copyright 2012 Tim Blechmann]
    [copyright 2013 Andrey Semashev]
    [id atomic]
    [dirname atomic]
    [purpose Atomic operations]
    [license
        Distributed under the Boost Software License, Version 1.0.
        (See accompanying file LICENSE_1_0.txt or copy at
        [@http://www.boost.org/LICENSE_1_0.txt])
    ]
]

[section:introduction Introduction]

[section:introduction_presenting Presenting Boost.Atomic]

[*Boost.Atomic] is a library that provides [^atomic] data types and
operations on these data types, as well as memory ordering constraints
required for coordinating multiple threads through atomic variables.
It implements the interface defined by the C++11 standard, but makes
this functionality available for platforms lacking system or compiler
support for it.

Users of this library should already be familiar with concurrency
in general, as well as elementary concepts such as "mutual exclusion".

The implementation makes use of processor-specific instructions where
possible (via inline assembler, platform libraries or compiler
intrinsics), and falls back to "emulating" atomic operations through
locking.

[endsect]

[section:introduction_purpose Purpose]

Operations on "ordinary" variables are not guaranteed to be atomic.
This means that with [^int n=0] initially, two threads concurrently
executing

[c++]

  void function()
  {
      n++;
  }

might result in [^n==1] instead of 2: Each thread will read the
old value into a processor register, increment it and write the result
back. Both threads may therefore write [^1], unaware that the other thread
is doing likewise.

Declaring [^atomic<int> n=0] instead, the same operation on
this variable will always result in [^n==2] as each operation on this
variable is ['atomic]: This means that each operation behaves as if it
were strictly sequentialized with respect to the other.

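For illustration, here is a minimal sketch of the atomic counter (the
include path and the `boost::atomic` interface are as documented later
in this manual):

[c++]

  #include <boost/atomic.hpp>

  boost::atomic<int> n(0);

  void function()
  {
      // atomic read-modify-write; two threads always yield n == 2
      n.fetch_add(1);
  }
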
Atomic variables are useful for two purposes:

* as a means for coordinating multiple threads via custom
  coordination protocols
* as faster alternatives to "locked" access to simple variables

Take a look at the [link atomic.usage_examples examples] section
for common patterns.

[endsect]

[endsect]

[section:thread_coordination Thread coordination using Boost.Atomic]

The most common use of [*Boost.Atomic] is to realize custom
thread synchronization protocols: The goal is to coordinate
accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware of the fact that
compilers, CPUs and cache hierarchies may generally reorder
memory references at will. As a consequence a program such as:

[c++]

  int x = 0, y = 0;

  thread1:
    x = 1;
    y = 1;

  thread2:
    if (y == 1) {
        assert(x == 1);
    }

might indeed fail as there is no guarantee that the read of `x`
by thread2 "sees" the write by thread1.

[*Boost.Atomic] uses a synchronization concept based on the
['happens-before] relation to describe the guarantees under
which situations such as the above one cannot occur.

The remainder of this section will discuss ['happens-before] in
a "hands-on" way instead of giving a fully formalized definition.
The reader is encouraged to additionally have a
look at the discussion of the correctness of a few of the
[link atomic.usage_examples examples] afterwards.

[section:mutex Enforcing ['happens-before] through mutual exclusion]

As an introductory example to understand how arguing using
['happens-before] works, consider two threads synchronizing
using a common mutex:

[c++]

  mutex m;

  thread1:
    m.lock();
    ... /* A */
    m.unlock();

  thread2:
    m.lock();
    ... /* B */
    m.unlock();

The "lockset-based intuition" would be to argue that A and B
cannot be executed concurrently as the code paths require a
common lock to be held.

One can however also arrive at the same conclusion using
['happens-before]: Either thread1 or thread2 will succeed first
at [^m.lock()]. If this is thread1, then as a consequence,
thread2 cannot succeed at [^m.lock()] before thread1 has executed
[^m.unlock()], consequently A ['happens-before] B in this case.
By symmetry, if thread2 succeeds at [^m.lock()] first, we can
conclude B ['happens-before] A.

Since this already exhausts all options, we can conclude that
either A ['happens-before] B or B ['happens-before] A must
always hold. Obviously we cannot state ['which] of the two relationships
holds, but either one is sufficient to conclude that A and B
cannot conflict.

Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
implementation to see how the mutual exclusion concept can be
mapped to [*Boost.Atomic].

[endsect]

[section:release_acquire ['happens-before] through [^release] and [^acquire]]

The most basic pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^acquire] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically
  modifies) an atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  the value from the same atomic variable with
  [^acquire] semantic and
* ... thread2 subsequently performs an operation B,

... then A ['happens-before] B.

Consider the following example:

[c++]

  atomic<int> a(0);

  thread1:
    ... /* A */
    a.fetch_add(1, memory_order_release);

  thread2:
    int tmp = a.load(memory_order_acquire);
    if (tmp == 1) {
        ... /* B */
    } else {
        ... /* C */
    }

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will execute B and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case, thread2 will execute C, but "A ['happens-before] C"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Therefore, A and B cannot conflict, but A and C ['can] conflict.

[endsect]

[section:fences Fences]

Ordering constraints are generally specified together with an access to
an atomic variable. It is however also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`
operations) or succeeding (for `release` or `seq_cst`) atomic
operations.

The example from the previous section could also be written in
the following way:

[c++]

  atomic<int> a(0);

  thread1:
    ... /* A */
    atomic_thread_fence(memory_order_release);
    a.fetch_add(1, memory_order_relaxed);

  thread2:
    int tmp = a.load(memory_order_relaxed);
    if (tmp == 1) {
        atomic_thread_fence(memory_order_acquire);
        ... /* B */
    } else {
        ... /* C */
    }

This provides the same ordering guarantees as previously, but
elides a (possibly expensive) memory ordering operation in
case C is executed.

[endsect]

[section:release_consume ['happens-before] through [^release] and [^consume]]

The second pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^consume] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically modifies) an
  atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  the value from the same atomic variable with [^consume] semantic and
* ... thread2 subsequently performs an operation B that is ['computationally
  dependent on the value of the atomic variable],

... then A ['happens-before] B.

Consider the following example:

[c++]

  atomic<int> a(0);
  complex_data_structure data[2];

  thread1:
    data[1] = ...; /* A */
    a.store(1, memory_order_release);

  thread2:
    int index = a.load(memory_order_consume);
    complex_data_structure tmp = data[index]; /* B */

In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen before
the accesses (presumably reads) to [^data\[1\]] by thread2:
Lacking this relationship, thread2 might see stale/inconsistent
data.

Note that in this example it is essential that operation B is
computationally dependent on the value of the atomic variable;
the following program, which replaces the computational dependency
with a control dependency, would therefore be erroneous:

[c++]

  atomic<int> a(0);
  complex_data_structure data[2];

  thread1:
    data[1] = ...; /* A */
    a.store(1, memory_order_release);

  thread2:
    int index = a.load(memory_order_consume);
    complex_data_structure tmp;
    if (index == 0)
        tmp = data[0];
    else
        tmp = data[1];

[^consume] is most commonly (and most safely! see
[link atomic.limitations limitations]) used with
pointers, compare for example the
[link boost_atomic.usage_examples.singleton singleton with double-checked locking].

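As an illustration of the pointer variant, consider the following sketch
in the notation of the examples above ([^complex_data_structure] and
[^use()] are placeholders; dereferencing the loaded pointer is the
computational dependency that [^consume] orders):

[c++]

  atomic<complex_data_structure *> p(0);

  thread1:
    complex_data_structure * tmp = new complex_data_structure(...); /* A */
    p.store(tmp, memory_order_release);

  thread2:
    complex_data_structure * tmp = p.load(memory_order_consume);
    if (tmp)
        use(*tmp); /* B: dereference depends on the loaded value */
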
[endsect]

[section:seq_cst Sequential consistency]

The third pattern for coordinating threads via [*Boost.Atomic]
uses [^seq_cst] for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently performs any operation with [^seq_cst],
* ... thread1 subsequently performs an operation B,
* ... thread2 performs an operation C,
* ... thread2 subsequently performs any operation with [^seq_cst],
* ... thread2 subsequently performs an operation D,

then either "A ['happens-before] D" or "C ['happens-before] B" holds.

In this case it does not matter whether thread1 and thread2 operate
on the same or different atomic variables, or use a "stand-alone"
[^atomic_thread_fence] operation.

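The classic illustration is the following "store buffering" sketch, where
the [^seq_cst] stores and loads themselves play the roles of A/B and C/D:

[c++]

  atomic<int> x(0), y(0);

  thread1:
    x.store(1, memory_order_seq_cst);      /* A */
    int r1 = y.load(memory_order_seq_cst); /* B */

  thread2:
    y.store(1, memory_order_seq_cst);      /* C */
    int r2 = x.load(memory_order_seq_cst); /* D */

Since either A ['happens-before] D or C ['happens-before] B, at least one
of the loads must observe the other thread's store: the outcome
[^r1 == 0 && r2 == 0] is impossible, whereas the weaker orderings of the
previous sections would permit it.
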
[endsect]

[endsect]

[section:interface Programming interfaces]

[section:configuration Configuration and building]

The library contains header-only and compiled parts. The library is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation. Users are able to detect whether
linking to the compiled part is required by checking the
[link atomic.interface.feature_macros feature macros].

The following macros affect library behavior:

[table
    [[Macro] [Description]]
    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg8b` instruction used
      to implement 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
      The library does not perform runtime detection of this instruction, so running code
      that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
      Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `cmpxchg16b` instruction used
      to implement 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
      all Intel CPUs and current AMD CPUs support this instruction. The library does not
      perform runtime detection of this instruction, so running code that uses 128-bit
      atomics on such CPUs will result in crashes, unless this macro is defined. Note that
      the macro does not affect GCC and compatible compilers because the library infers
      this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_MFENCE`] [Affects 32-bit x86 Oracle Studio builds. When defined,
      the library assumes the target CPU does not support the `mfence` instruction used
      to implement thread fences. This instruction was added with the SSE2 instruction set
      extension, first available in Intel Pentium 4 CPUs. The library does not perform runtime
      detection of this instruction, so running the library code on older CPUs will result in
      crashes, unless this macro is defined. Note that the macro does not affect MSVC, GCC and
      compatible compilers because the library infers this information from the
      compiler-defined macros.]]
    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
      This is mostly used for testing and should not be used in real world projects.]]
    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
      the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
      libraries, not just [*Boost.Atomic].]]
    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
      When defined, disables auto-linking. The latter macro affects all Boost libraries,
      not just [*Boost.Atomic].]]
]

Besides macros, it is important to specify the correct compiler options for the target CPU.
With GCC and compatible compilers this affects whether particular atomic operations are
lock-free or not.

The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
For example, you can build [*Boost.Atomic] with the following command line:

[pre
    bjam --with-atomic variant=release instruction-set=core2 stage
]

[endsect]

[section:interface_memory_order Memory order]

    #include <boost/memory_order.hpp>

The enumeration [^boost::memory_order] defines the following
values to represent memory ordering constraints:

[table
    [[Constant] [Description]]
    [[`memory_order_relaxed`] [No ordering constraint.
      Informally speaking, following operations may be reordered before,
      and preceding operations may be reordered after, the atomic
      operation. This constraint is suitable only when
      either a) further operations do not depend on the outcome
      of the atomic operation or b) ordering is enforced through
      stand-alone `atomic_thread_fence` operations. The operation on
      the atomic value itself is still atomic though.
    ]]
    [[`memory_order_release`] [
      Perform a `release` operation. Informally speaking,
      prevents all preceding memory operations from being reordered
      past this point.
    ]]
    [[`memory_order_acquire`] [
      Perform an `acquire` operation. Informally speaking,
      prevents succeeding memory operations from being reordered
      before this point.
    ]]
    [[`memory_order_consume`] [
      Perform a `consume` operation. More relaxed (and
      on some architectures more efficient) than `memory_order_acquire`
      as it only affects succeeding operations that are
      computationally dependent on the value retrieved from
      an atomic variable.
    ]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operations.]]
    [[`memory_order_seq_cst`] [
      Enforce sequential consistency. Implies `memory_order_acq_rel`, but
      additionally enforces a total order on all operations so qualified.
    ]]
]

See the [link atomic.thread_coordination ['happens-before]] section for an
explanation of the various ordering constraints.

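For a rough feel for the interface, the constants are passed as the last
argument of the atomic operations described in the following sections. A
minimal sketch:

[c++]

  boost::atomic<int> a(0);

  a.store(1, boost::memory_order_release);                  // release store
  int value = a.load(boost::memory_order_acquire);          // acquire load
  a.fetch_add(1, boost::memory_order_relaxed);              // relaxed read-modify-write
  boost::atomic_thread_fence(boost::memory_order_seq_cst);  // stand-alone fence
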
[endsect]

[section:interface_atomic_object Atomic objects]

    #include <boost/atomic/atomic.hpp>

[^boost::atomic<['T]>] provides methods for atomically accessing
variables of a suitable type [^['T]]. The type is suitable if
it is /trivially copyable/ (3.9/9 \[basic.types\]). The following
are examples of types satisfying this requirement:

* a scalar type (e.g. integer, boolean, enum or pointer type)
* a [^class] or [^struct] that has no non-trivial copy or move
  constructors or assignment operators, has a trivial destructor,
  and that is comparable via [^memcmp].

Note that classes with virtual functions or virtual base classes
do not satisfy the requirements. Also be warned
that structures with "padding" between data members may compare
non-equal via [^memcmp] even though all members are equal.

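For example, a small aggregate without non-trivial members can be used
(a sketch; whether the resulting atomic is lock-free depends on the
target platform, and two adjacent [^int] members typically have no
padding between them on common ABIs):

[c++]

  #include <boost/atomic.hpp>

  struct rational
  {
      int numerator;
      int denominator;
  }; // trivially copyable, comparable via memcmp

  boost::atomic<rational> r;              // valid; may be lock-free or lock-based
  bool lock_free = r.is_lock_free();
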
[section:interface_atomic_generic [^boost::atomic<['T]>] template class]

All atomic objects support the following operations:

[table
    [[Syntax] [Description]]
    [
      [`atomic()`]
      [Initialize to an unspecified value]
    ]
    [
      [`atomic(T initial_value)`]
      [Initialize to [^initial_value]]
    ]
    [
      [`bool is_lock_free()`]
      [Checks if the atomic object is lock-free]
    ]
    [
      [`T load(memory_order order)`]
      [Return current value]
    ]
    [
      [`void store(T value, memory_order order)`]
      [Write new value to atomic variable]
    ]
    [
      [`T exchange(T new_value, memory_order order)`]
      [Exchange current value with `new_value`, returning current value]
    ]
    [
      [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
      [Compare current value with `expected`, change it to `desired` if matches.
      Returns `true` if an exchange has been performed, and always writes the
      previous value back in `expected`. May fail spuriously, so must generally be
      retried in a loop.]
    ]
    [
      [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
      [Compare current value with `expected`, change it to `desired` if matches.
      Returns `true` if an exchange has been performed, and always writes the
      previous value back in `expected`. May fail spuriously, so must generally be
      retried in a loop.]
    ]
    [
      [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
      [Compare current value with `expected`, change it to `desired` if matches.
      Returns `true` if an exchange has been performed, and always writes the
      previous value back in `expected`.]
    ]
    [
      [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
      [Compare current value with `expected`, change it to `desired` if matches.
      Returns `true` if an exchange has been performed, and always writes the
      previous value back in `expected`.]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

The `compare_exchange_weak`/`compare_exchange_strong` variants
taking four parameters differ from the three-parameter variants
in that they allow a different memory ordering constraint to
be specified in case the operation fails, as the sketch below shows.

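A typical use of the weak variant is a retry loop implementing an
arbitrary atomic update (a sketch; the multiplication stands in for any
operation not provided as a built-in atomic):

[c++]

  boost::atomic<int> a(1);

  void atomic_multiply(int factor)
  {
      int expected = a.load(boost::memory_order_relaxed);
      // on failure, expected is reloaded with the current value
      // (with failure_order semantics) and the loop retries
      while (!a.compare_exchange_weak(expected, expected * factor,
                                      boost::memory_order_acq_rel,
                                      boost::memory_order_relaxed))
      {
      }
  }
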
In addition to these explicit operations, each
[^atomic<['T]>] object also supports
implicit [^store] and [^load] through the use of "assignment"
and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.

[endsect]

[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]

In addition to the operations listed in the previous section,
[^boost::atomic<['I]>] for integral
types [^['I]] supports the following operations:

[table
    [[Syntax] [Description]]
    [
      [`T fetch_add(T v, memory_order order)`]
      [Add `v` to variable, returning previous value]
    ]
    [
      [`T fetch_sub(T v, memory_order order)`]
      [Subtract `v` from variable, returning previous value]
    ]
    [
      [`T fetch_and(T v, memory_order order)`]
      [Apply bit-wise "and" with `v` to variable, returning previous value]
    ]
    [
      [`T fetch_or(T v, memory_order order)`]
      [Apply bit-wise "or" with `v` to variable, returning previous value]
    ]
    [
      [`T fetch_xor(T v, memory_order order)`]
      [Apply bit-wise "xor" with `v` to variable, returning previous value]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

In addition to these explicit operations, each
[^boost::atomic<['I]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.

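For example, a statistics counter that tolerates relaxed ordering and a
bit-mask whose updated bits are published with release/acquire (a sketch):

[c++]

  boost::atomic<unsigned int> counter(0);
  boost::atomic<unsigned int> flags(0);

  counter.fetch_add(1, boost::memory_order_relaxed);  // count an event
  flags.fetch_or(0x4, boost::memory_order_release);   // set a bit and publish
  unsigned int f = flags.load(boost::memory_order_acquire);
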
[endsect]

[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]

In addition to the operations applicable to all atomic objects,
[^boost::atomic<['P]>] for pointer
types [^['P]] (other than [^void] pointers) supports the following operations:

[table
    [[Syntax] [Description]]
    [
      [`T fetch_add(ptrdiff_t v, memory_order order)`]
      [Add `v` to variable, returning previous value]
    ]
    [
      [`T fetch_sub(ptrdiff_t v, memory_order order)`]
      [Subtract `v` from variable, returning previous value]
    ]
]

`order` always has `memory_order_seq_cst` as its default value.

In addition to these explicit operations, each
[^boost::atomic<['P]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=` and `-=`. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.

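As with built-in pointers, the arithmetic is performed in units of the
pointed-to type (a sketch; the array and the stored value are placeholders):

[c++]

  int buffer[16] = {};
  boost::atomic<int *> pos(buffer);

  // claim the next free slot; the pointer advances by one int, not one byte
  int * slot = pos.fetch_add(1, boost::memory_order_relaxed);
  *slot = 42;
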
[endsect]

[endsect]

[section:interface_fences Fences]

    #include <boost/atomic/fences.hpp>

[table
    [[Syntax] [Description]]
    [
      [`void atomic_thread_fence(memory_order order)`]
      [Issue fence for coordination with other threads.]
    ]
    [
      [`void atomic_signal_fence(memory_order order)`]
      [Issue fence for coordination with a signal handler (only in the same thread).]
    ]
]

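The thread fence was demonstrated in the [link atomic.thread_coordination.fences
Fences] section above. A signal fence orders accesses between a thread and a
signal handler running in that same thread, where a full thread fence would be
unnecessarily expensive. A sketch, assuming POSIX-style signals:

[c++]

  #include <csignal>
  #include <boost/atomic.hpp>

  int data = 0;
  boost::atomic<int> flag(0);

  extern "C" void handler(int)
  {
      // the handler interrupts the same thread, so a compiler-only
      // fence is sufficient to order the flag load before the data read
      if (flag.load(boost::memory_order_relaxed) == 1)
      {
          boost::atomic_signal_fence(boost::memory_order_acquire);
          int d = data; // data is fully written at this point
          (void)d;
      }
  }

  int main()
  {
      std::signal(SIGUSR1, handler);
      data = 42;
      boost::atomic_signal_fence(boost::memory_order_release);
      flag.store(1, boost::memory_order_relaxed);
  }
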
[endsect]

[section:feature_macros Feature testing macros]

    #include <boost/atomic/capabilities.hpp>

[*Boost.Atomic] defines a number of macros to allow compile-time
detection of whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:

[table
    [[Macro] [Description]]
    [
      [`BOOST_ATOMIC_FLAG_LOCK_FREE`]
      [Indicate whether `atomic_flag` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_BOOL_LOCK_FREE`]
      [Indicate whether `atomic<bool>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
      [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
      [Indicate whether `atomic<char16_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
      [Indicate whether `atomic<char32_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
      [Indicate whether `atomic<wchar_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
      [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_INT_LOCK_FREE`]
      [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_LONG_LOCK_FREE`]
      [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
      [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
    ]
    [
      [`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
      [Indicate whether `atomic<T *>` is lock-free]
    ]
    [
      [`BOOST_ATOMIC_THREAD_FENCE`]
      [Indicate whether the `atomic_thread_fence` function is lock-free]
    ]
    [
      [`BOOST_ATOMIC_SIGNAL_FENCE`]
      [Indicate whether the `atomic_signal_fence` function is lock-free]
    ]
]

In addition to these standard macros, [*Boost.Atomic] also defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to the values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.

[table
    [[Macro] [Description]]
    [
      [`BOOST_ATOMIC_INT8_LOCK_FREE`]
      [Indicate whether `atomic<int8_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT16_LOCK_FREE`]
      [Indicate whether `atomic<int16_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT32_LOCK_FREE`]
      [Indicate whether `atomic<int32_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT64_LOCK_FREE`]
      [Indicate whether `atomic<int64_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_INT128_LOCK_FREE`]
      [Indicate whether `atomic<int128_type>` is lock-free.]
    ]
    [
      [`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
      [Defined after including `atomic_flag.hpp`, if the implementation
      does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
      initialization of `atomic_flag`. This macro is typically defined
      for pre-C++11 compilers.]
    ]
]

In the table above, `intN_type` is a type that occupies exactly `N`
contiguous bits of storage, suitably aligned for atomic operations.

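These macros can be used to select between implementation strategies at
compile time, for example (a sketch):

[c++]

  #include <boost/atomic/capabilities.hpp>

  #if BOOST_ATOMIC_INT32_LOCK_FREE == 2
  // always lock-free: safe to use from contexts that must not block
  #else
  // may fall back to locking; linking the compiled library may be required
  #endif
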
[endsect]

[endsect]

[section:usage_examples Usage examples]

[include examples.qbk]

[endsect]

[/
[section:platform_support Implementing support for additional platforms]

[include platform.qbk]

[endsect]
]

[/ [xinclude autodoc.xml] ]

[section:limitations Limitations]

While [*Boost.Atomic] strives to implement the atomic operations
from C++11 as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:

* [*Using non-POD classes as template parameter to `atomic<T>` results
  in undefined behavior]: This means that any class containing a
  constructor, destructor, virtual methods or access control
  specifications is not a valid argument in C++98. C++11 relaxes
  this slightly by allowing "trivial" classes containing only
  empty constructors. [*Advice]: Use only POD types.
* [*C++98 compilers may transform computation- to control-dependency]:
  Crucially, `memory_order_consume` only affects computationally-dependent
  operations, but in general there is nothing preventing a compiler
  from transforming a computation dependency into a control dependency.
  A C++11 compiler would be forbidden from such a transformation.
  [*Advice]: Use `memory_order_consume` only in conjunction with
  pointer values, as the compiler cannot speculate on these and
  transform them into control dependencies.
* [*Fence operations enforce "too strong" compiler ordering]:
  Semantically, `memory_order_acquire`/`memory_order_consume`
  and `memory_order_release` need to restrain reordering of
  memory operations only in one direction. Since there is no
  way to express this constraint to the compiler, these act
  as "full compiler barriers" in this implementation. In corner
  cases this may result in less efficient code than a C++11 compiler
  could generate.
* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
  correctly if `atomic<T>::is_lock_free() == true`.

[endsect]

[section:porting Porting]

[section:unit_tests Unit tests]

[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:

* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
  of [*Boost.Atomic] compiles and has correct value semantics.
* [*native_api.cpp] verifies that all atomic operations have correct
  value semantics (e.g. "fetch_add" really adds the desired value,
  returning the previous one). It is a rough "smoke test" to help weed
  out the most obvious mistakes (for example width overflow,
  signed/unsigned extension, ...).
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCKFREE] macros
  are set properly according to the expectations for a given
  platform, and that they match up with the [*is_always_lock_free] and
  [*is_lock_free] members of the [*atomic] object instances.
* [*atomicity.cpp] lets two threads race against each other modifying
  a shared variable, verifying that the operations behave atomically
  as appropriate. By nature, this test is necessarily stochastic, and
  the test self-calibrates to yield 99% confidence that a
  positive result indicates absence of an error. This test is
  very useful even on uni-processor systems with preemption.
* [*ordering.cpp] lets two threads race against each other accessing
  multiple shared variables, verifying that the operations
  exhibit the expected ordering behavior. By nature, this test is
  necessarily stochastic, and the test attempts to self-calibrate to
  yield 99% confidence that a positive result indicates absence
  of an error. This only works on true multi-processor (or multi-core)
  systems. It does not yield any result on uni-processor systems
  or emulators (due to there being no observable reordering even in
  the order=relaxed case) and will report that fact.

[endsect]

[section:tested_compilers Tested compilers]

[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:

* gcc 4.x: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* Visual Studio Express 2008/Windows XP, x86, x64, ARM

[endsect]

[section:acknowledgements Acknowledgements]

* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.

[endsect]

[endsect]