[/
 / Copyright (c) 2009 Helge Bahmann
 / Copyright (c) 2014 Andrey Semashev
 /
 / Distributed under the Boost Software License, Version 1.0. (See accompanying
 / file LICENSE_1_0.txt or copy at http://www.boost.org/LICENSE_1_0.txt)
 /]
[authors [Bahmann, Helge][Semashev, Andrey]]
[copyright 2011 Helge Bahmann]
[copyright 2012 Tim Blechmann]
[copyright 2013 Andrey Semashev]
[purpose Atomic operations]
[license
    Distributed under the Boost Software License, Version 1.0.
    (See accompanying file LICENSE_1_0.txt or copy at
    [@http://www.boost.org/LICENSE_1_0.txt])
]
[section:introduction Introduction]

[section:introduction_presenting Presenting Boost.Atomic]
[*Boost.Atomic] is a library that provides [^atomic]
data types and operations on these data types, as well as memory
ordering constraints required for coordinating multiple threads through
atomic variables. It implements the interface as defined by the C++11
standard, but makes this feature available for platforms lacking
system/compiler support for this particular C++11 feature.

Users of this library should already be familiar with concurrency
in general, as well as elementary concepts such as "mutual exclusion".

The implementation makes use of processor-specific instructions where
possible (via inline assembler, platform libraries or compiler
intrinsics), and falls back to "emulating" atomic operations through
locking where no native support is available.
[section:introduction_purpose Purpose]
Operations on "ordinary" variables are not guaranteed to be atomic.
This means that with [^int n=0] initially, two threads concurrently
executing [^n++] might result in [^n==1] instead of [^2]: each thread
will read the old value into a processor register, increment it and
write the result back. Both threads may therefore write [^1], unaware
that the other thread is doing likewise.

Declaring [^atomic<int> n=0] instead, the same operation on
this variable will always result in [^n==2], as each operation on this
variable is ['atomic]: each operation behaves as if it
were strictly sequentialized with respect to the other.
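As a minimal sketch (the [^increment] function below is purely illustrative,
not part of the library), the atomic counter can be written as:

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<int> n(0);

    // Safe to call concurrently from both threads: fetch_add performs the
    // read-modify-write as one indivisible step, so the final value is 2.
    void increment()
    {
        n.fetch_add(1);
    }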
Atomic variables are useful for two purposes:

* as a means for coordinating multiple threads via custom
  coordination protocols
* as faster alternatives to "locked" access to simple variables

Take a look at the [link atomic.usage_examples examples] section.
[section:thread_coordination Thread coordination using Boost.Atomic]
The most common use of [*Boost.Atomic] is to realize custom
thread synchronization protocols: the goal is to coordinate
accesses of threads to shared variables in order to avoid
"conflicts". The programmer must be aware of the fact that
compilers, CPUs and the cache hierarchies may generally reorder
memory references at will. As a consequence a program such as:
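(In the following sketch, [^x] and [^y] stand for ordinary, non-atomic
[^int] variables, both initially zero.)

[c++]

    int x = 0;
    int y = 0;

    // thread1:
    x = 1;
    y = 1;

    // thread2:
    if (y == 1)
        assert(x == 1); // not guaranteed to hold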
might indeed fail as there is no guarantee that the read of `x`
by thread2 "sees" the write by thread1.

[*Boost.Atomic] uses a synchronization concept based on the
['happens-before] relation to describe the guarantees under
which situations such as the above one cannot occur.

The remainder of this section will discuss ['happens-before] in
a "hands-on" way instead of giving a fully formalized definition.
The reader is encouraged to additionally have a
look at the discussion of the correctness of a few of the
[link atomic.usage_examples examples] afterwards.
[section:mutex Enforcing ['happens-before] through mutual exclusion]
As an introductory example to understand how arguing using
['happens-before] works, consider two threads synchronizing
using a common mutex:
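(A sketch of the situation; [^m] is a mutex shared by both threads, and
A and B stand for the operations performed inside the critical sections.)

[c++]

    mutex m;

    // thread1:
    m.lock();
    ... /* A */
    m.unlock();

    // thread2:
    m.lock();
    ... /* B */
    m.unlock();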
139 The "lockset-based intuition" would be to argue that A and B
140 cannot be executed concurrently as the code paths require a
141 common lock to be held.
143 One can however also arrive at the same conclusion using
144 ['happens-before]: Either thread1 or thread2 will succeed first
145 at [^m.lock()]. If this is be thread1, then as a consequence,
146 thread2 cannot succeed at [^m.lock()] before thread1 has executed
147 [^m.unlock()], consequently A ['happens-before] B in this case.
148 By symmetry, if thread2 succeeds at [^m.lock()] first, we can
149 conclude B ['happens-before] A.
151 Since this already exhausts all options, we can conclude that
152 either A ['happens-before] B or B ['happens-before] A must
153 always hold. Obviously cannot state ['which] of the two relationships
154 holds, but either one is sufficient to conclude that A and B
Compare the [link boost_atomic.usage_examples.example_spinlock spinlock]
implementation to see how the mutual exclusion concept can be
mapped to [*Boost.Atomic].

[section:release_acquire ['happens-before] through [^release] and [^acquire]]
The most basic pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^acquire] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically
  modifies) an atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  the value from the same atomic variable with
  [^acquire] semantic and
* ... thread2 subsequently performs an operation B,

... then A ['happens-before] B.
Consider the following example:

[c++]

    atomic<int> a(0);

    // thread1:
    ... /* A */
    a.fetch_add(1, memory_order_release);

    // thread2:
    int tmp = a.load(memory_order_acquire);
    if (tmp == 1) {
        ... /* B */
    } else {
        ... /* C */
    }
In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will execute B and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case, thread2 will execute C, but "A ['happens-before] C"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].

Therefore, A and B cannot conflict, but A and C ['can] conflict.

[section:fences Fences]
Ordering constraints are generally specified together with an access to
an atomic variable. It is however also possible to issue "fence"
operations in isolation; in this case the fence operates in
conjunction with preceding (for `acquire`, `consume` or `seq_cst`
operations) or succeeding (for `release` or `seq_cst`) atomic
operations.
The example from the previous section could also be written in
the following way:

[c++]

    atomic<int> a(0);

    // thread1:
    ... /* A */
    atomic_thread_fence(memory_order_release);
    a.fetch_add(1, memory_order_relaxed);

    // thread2:
    int tmp = a.load(memory_order_relaxed);
    if (tmp == 1) {
        atomic_thread_fence(memory_order_acquire);
        ... /* B */
    } else {
        ... /* C */
    }
This provides the same ordering guarantees as previously, but
elides a (possibly expensive) memory ordering operation in
the case C is executed.

[section:release_consume ['happens-before] through [^release] and [^consume]]
The second pattern for coordinating threads via [*Boost.Atomic]
uses [^release] and [^consume] on an atomic variable for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently writes (or atomically modifies) an
  atomic variable with [^release] semantic,
* ... thread2 reads (or atomically reads-and-modifies)
  the value from the same atomic variable with [^consume] semantic and
* ... thread2 subsequently performs an operation B that is ['computationally
  dependent on the value of the atomic variable],

... then A ['happens-before] B.
Consider the following example:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    // thread1:
    data[1] = ...; /* A */
    a.store(1, memory_order_release);

    // thread2:
    int index = a.load(memory_order_consume);
    complex_data_structure tmp = data[index]; /* B */
In this example, two avenues for execution are possible:

* The [^store] operation by thread1 precedes the [^load] by thread2:
  In this case thread2 will read [^data\[1\]] and "A ['happens-before] B"
  holds as all of the criteria above are satisfied.
* The [^load] operation by thread2 precedes the [^store] by thread1:
  In this case thread2 will read [^data\[0\]] and "A ['happens-before] B"
  does ['not] hold: thread2 does not read the value written by
  thread1 through [^a].
Here, the ['happens-before] relationship helps ensure that any
accesses (presumably writes) to [^data\[1\]] by thread1 happen
before the accesses (presumably reads) to [^data\[1\]] by thread2:
Lacking this relationship, thread2 might see stale/inconsistent data.
Note that in this example it is crucial that operation B is
computationally dependent on the value of the atomic variable;
the following program, in contrast, would be erroneous:

[c++]

    atomic<int> a(0);
    complex_data_structure data[2];

    // thread1:
    data[1] = ...; /* A */
    a.store(1, memory_order_release);

    // thread2:
    int index = a.load(memory_order_consume);
    complex_data_structure tmp;
    if (index == 0)
        tmp = data[0];
    else
        tmp = data[1];
[^consume] is most commonly (and most safely! see
[link atomic.limitations limitations]) used with
pointers; compare for example the
[link boost_atomic.usage_examples.singleton singleton with double-checked locking].

[section:seq_cst Sequential consistency]
The third pattern for coordinating threads via [*Boost.Atomic]
uses [^seq_cst] for coordination: If ...

* ... thread1 performs an operation A,
* ... thread1 subsequently performs any operation with [^seq_cst],
* ... thread1 subsequently performs an operation B,
* ... thread2 performs an operation C,
* ... thread2 subsequently performs any operation with [^seq_cst],
* ... thread2 subsequently performs an operation D,

then either "A ['happens-before] D" or "C ['happens-before] B" holds.

In this case it does not matter whether thread1 and thread2 operate
on the same or different atomic variables, or use a "stand-alone"
[^atomic_thread_fence] operation.
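As an illustration (a sketch constructed for this discussion, not taken
from the library documentation), consider the classic two-flag pattern with
both atomic variables initially zero:

[c++]

    atomic<int> x(0), y(0);

    // thread1:
    ... /* A */
    x.store(1, memory_order_seq_cst);
    int r1 = y.load(memory_order_seq_cst);
    ... /* B */

    // thread2:
    ... /* C */
    y.store(1, memory_order_seq_cst);
    int r2 = x.load(memory_order_seq_cst);
    ... /* D */

Sequential consistency guarantees that [^r1] and [^r2] cannot both be zero:
at least one thread is guaranteed to observe the store made by the other,
which yields the stated ['happens-before] relationship.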
[section:interface Programming interfaces]

[section:configuration Configuration and building]

The library contains header-only and compiled parts. The library is
header-only for lock-free cases but requires a separate binary to
implement the lock-based emulation. Users are able to detect whether
linking to the compiled part is required by checking the
[link atomic.interface.feature_macros feature macros].

The following macros affect library behavior:
[table
    [[Macro] [Description]]
    [[`BOOST_ATOMIC_NO_CMPXCHG8B`] [Affects 32-bit x86 Oracle Studio builds. When defined,
    the library assumes the target CPU does not support the `cmpxchg8b` instruction used
    to implement 64-bit atomic operations. This is the case with very old CPUs (pre-Pentium).
    The library does not perform runtime detection of this instruction, so running code
    that uses 64-bit atomics on such CPUs will result in crashes, unless this macro is defined.
    Note that the macro does not affect MSVC, GCC and compatible compilers because the library infers
    this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_CMPXCHG16B`] [Affects 64-bit x86 MSVC and Oracle Studio builds. When defined,
    the library assumes the target CPU does not support the `cmpxchg16b` instruction used
    to implement 128-bit atomic operations. This is the case with some early 64-bit AMD CPUs;
    all Intel CPUs and current AMD CPUs support this instruction. The library does not
    perform runtime detection of this instruction, so running code that uses 128-bit
    atomics on such CPUs will result in crashes, unless this macro is defined. Note that
    the macro does not affect GCC and compatible compilers because the library infers
    this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_NO_MFENCE`] [Affects 32-bit x86 Oracle Studio builds. When defined,
    the library assumes the target CPU does not support the `mfence` instruction used
    to implement thread fences. This instruction was added with the SSE2 instruction set
    extension, first available in Intel Pentium 4 CPUs. The library does not perform runtime
    detection of this instruction, so running the library code on older CPUs will result in
    crashes, unless this macro is defined. Note that the macro does not affect MSVC, GCC and
    compatible compilers because the library infers this information from the compiler-defined macros.]]
    [[`BOOST_ATOMIC_FORCE_FALLBACK`] [When defined, all operations are implemented with locks.
    This is mostly used for testing and should not be used in real world projects.]]
    [[`BOOST_ATOMIC_DYN_LINK` and `BOOST_ALL_DYN_LINK`] [Control library linking. If defined,
    the library assumes dynamic linking, otherwise static. The latter macro affects all Boost
    libraries, not just [*Boost.Atomic].]]
    [[`BOOST_ATOMIC_NO_LIB` and `BOOST_ALL_NO_LIB`] [Control library auto-linking on Windows.
    When defined, disables auto-linking. The latter macro affects all Boost libraries,
    not just [*Boost.Atomic].]]
]
Besides macros, it is important to specify the correct compiler options for the target CPU.
With GCC and compatible compilers this affects whether particular atomic operations are
lock-free or not.

The Boost building process is described in the [@http://www.boost.org/doc/libs/release/more/getting_started/ Getting Started guide].
For example, you can build [*Boost.Atomic] with the following command line:

[pre
    bjam --with-atomic variant=release instruction-set=core2 stage
]
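If the application does need the compiled part, it must additionally be linked
against the [^boost_atomic] library. For example (an illustrative command line
assuming a GCC toolchain and the staged build from above; adjust the paths to
your environment):

[pre
    g++ -O2 -I /path/to/boost example.cpp -L /path/to/boost/stage/lib -lboost_atomic
]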
[section:interface_memory_order Memory order]

    #include <boost/memory_order.hpp>

The enumeration [^boost::memory_order] defines the following
values to represent memory ordering constraints:
[table
    [[Constant] [Description]]
    [[`memory_order_relaxed`] [No ordering constraint.
    Informally speaking, succeeding operations may be reordered before the
    atomic operation, and preceding operations may be reordered after it.
    This constraint is suitable only when
    either a) further operations do not depend on the outcome
    of the atomic operation or b) ordering is enforced through
    stand-alone `atomic_thread_fence` operations. The operation on
    the atomic value itself is still atomic though.]]
    [[`memory_order_release`] [
    Perform `release` operation. Informally speaking,
    prevents all preceding memory operations from being reordered
    past this point.]]
    [[`memory_order_acquire`] [
    Perform `acquire` operation. Informally speaking,
    prevents succeeding memory operations from being reordered
    before this point.]]
    [[`memory_order_consume`] [
    Perform `consume` operation. More relaxed (and
    on some architectures more efficient) than `memory_order_acquire`,
    as it only affects succeeding operations that are
    computationally dependent on the value retrieved from
    the atomic variable.]]
    [[`memory_order_acq_rel`] [Perform both `release` and `acquire` operations.]]
    [[`memory_order_seq_cst`] [
    Enforce sequential consistency. Implies `memory_order_acq_rel`, but
    additionally enforces a single total order on all operations so qualified.]]
]
See section [link atomic.thread_coordination ['happens-before]] for an explanation
of the various ordering constraints.

[section:interface_atomic_object Atomic objects]

    #include <boost/atomic/atomic.hpp>
[^boost::atomic<['T]>] provides methods for atomically accessing
variables of a suitable type [^['T]]. The type is suitable if
it is ['trivially copyable] (3.9/9 \[basic.types\]). The following are
examples of types compatible with this requirement:
* a scalar type (e.g. integer, boolean, enum or pointer type)
* a [^class] or [^struct] that has no non-trivial copy or move
  constructors or assignment operators, has a trivial destructor,
  and that is comparable via [^memcmp].

Note that classes with virtual functions or virtual base classes
do not satisfy the requirements. Also be warned
that structures with "padding" between data members may compare
non-equal via [^memcmp] even though all members are equal.
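For example, the following sketch shows a type that satisfies the requirements
(the [^color] struct is purely illustrative; on common platforms there is no
padding between its members, so a [^memcmp] comparison behaves as expected):

[c++]

    #include <boost/atomic.hpp>

    // Trivially copyable aggregate: no user-defined constructors, destructor,
    // virtual functions or base classes.
    struct color
    {
        unsigned char r, g, b, a;
    };

    color black = { 0, 0, 0, 0 };
    boost::atomic<color> shared_color(black);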
[section:interface_atomic_generic [^boost::atomic<['T]>] template class]

All atomic objects support the following operations:
[table
    [[Syntax] [Description]]
    [
        [`atomic()`]
        [Initialize to an unspecified value]
    ]
    [
        [`atomic(T initial_value)`]
        [Initialize to [^initial_value]]
    ]
    [
        [`bool is_lock_free()`]
        [Checks if the atomic object is lock-free]
    ]
    [
        [`T load(memory_order order)`]
        [Return current value]
    ]
    [
        [`void store(T value, memory_order order)`]
        [Write new value to atomic variable]
    ]
    [
        [`T exchange(T new_value, memory_order order)`]
        [Exchange current value with `new_value`, returning current value]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`, change it to `desired` if it matches.
        Returns `true` if an exchange has been performed, and always writes the
        previous value back in `expected`. May fail spuriously, so must generally be
        retried in a loop.]
    ]
    [
        [`bool compare_exchange_weak(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`, change it to `desired` if it matches.
        Returns `true` if an exchange has been performed, and always writes the
        previous value back in `expected`. May fail spuriously, so must generally be
        retried in a loop.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order order)`]
        [Compare current value with `expected`, change it to `desired` if it matches.
        Returns `true` if an exchange has been performed, and always writes the
        previous value back in `expected`.]
    ]
    [
        [`bool compare_exchange_strong(T & expected, T desired, memory_order success_order, memory_order failure_order)`]
        [Compare current value with `expected`, change it to `desired` if it matches.
        Returns `true` if an exchange has been performed, and always writes the
        previous value back in `expected`.]
    ]
]
`order` always has `memory_order_seq_cst` as its default parameter.

The `compare_exchange_weak`/`compare_exchange_strong` variants
taking four parameters differ from the three parameter variants
in that they allow a different memory ordering constraint to
be specified in case the operation fails.
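As a usage sketch (illustrative code, not taken from the library documentation),
`compare_exchange_weak` is typically retried in a loop, for example to atomically
double a value:

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<int> a(1);

    void double_value()
    {
        int expected = a.load();
        // On failure (including spurious failures), expected is updated to the
        // current value of a and the operation is simply retried.
        while (!a.compare_exchange_weak(expected, expected * 2))
        {
        }
    }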
In addition to these explicit operations, each
[^atomic<['T]>] object also supports
implicit [^store] and [^load] through the use of "assignment"
and "conversion to [^T]" operators. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.
[section:interface_atomic_integral [^boost::atomic<['integral]>] template class]

In addition to the operations listed in the previous section,
[^boost::atomic<['I]>] for integral
types [^['I]] supports the following operations:
[table
    [[Syntax] [Description]]
    [
        [`T fetch_add(T v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`T fetch_sub(T v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
    [
        [`T fetch_and(T v, memory_order order)`]
        [Apply bit-wise "and" with `v` to variable, returning previous value]
    ]
    [
        [`T fetch_or(T v, memory_order order)`]
        [Apply bit-wise "or" with `v` to variable, returning previous value]
    ]
    [
        [`T fetch_xor(T v, memory_order order)`]
        [Apply bit-wise "xor" with `v` to variable, returning previous value]
    ]
]
`order` always has `memory_order_seq_cst` as its default parameter.

In addition to these explicit operations, each
[^boost::atomic<['I]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`, `&=`, `|=` and `^=`.
Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.
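For instance (an illustrative sketch), a statistics counter that only needs
atomicity, not ordering, can use `fetch_add` with a relaxed constraint:

[c++]

    #include <boost/atomic.hpp>

    boost::atomic<unsigned long> event_count(0);

    void on_event()
    {
        // Atomic increment; relaxed ordering is sufficient because no other
        // data is published through this counter.
        event_count.fetch_add(1, boost::memory_order_relaxed);
    }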
[section:interface_atomic_pointer [^boost::atomic<['pointer]>] template class]

In addition to the operations applicable to all atomic objects,
[^boost::atomic<['P]>] for pointer
types [^['P]] (other than [^void] pointers) supports the following operations:
[table
    [[Syntax] [Description]]
    [
        [`T fetch_add(ptrdiff_t v, memory_order order)`]
        [Add `v` to variable, returning previous value]
    ]
    [
        [`T fetch_sub(ptrdiff_t v, memory_order order)`]
        [Subtract `v` from variable, returning previous value]
    ]
]
`order` always has `memory_order_seq_cst` as its default parameter.

In addition to these explicit operations, each
[^boost::atomic<['P]>] object also
supports implicit pre-/post- increment/decrement, as well
as the operators `+=`, `-=`. Avoid using these operators,
as they do not allow explicit specification of a memory ordering
constraint.
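As a sketch (illustrative only), `fetch_add` on an atomic pointer advances it
by a number of ['elements], not bytes:

[c++]

    #include <boost/atomic.hpp>

    int buffer[64];
    boost::atomic<int *> pos(buffer);

    // Atomically claim the next 4 elements and return a pointer to them.
    int * claim_four()
    {
        return pos.fetch_add(4);
    }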
[section:interface_fences Fences]

    #include <boost/atomic/fences.hpp>
[table
    [[Syntax] [Description]]
    [
        [`void atomic_thread_fence(memory_order order)`]
        [Issue fence for coordination with other threads.]
    ]
    [
        [`void atomic_signal_fence(memory_order order)`]
        [Issue fence for coordination with signal handler (only in same thread).]
    ]
]
[section:feature_macros Feature testing macros]

    #include <boost/atomic/capabilities.hpp>
[*Boost.Atomic] defines a number of macros to allow compile-time
detection whether an atomic data type is implemented using
"true" atomic operations, or whether an internal "lock" is
used to provide atomicity. The following macros will be
defined to `0` if operations on the data type always
require a lock, to `1` if operations on the data type may
sometimes require a lock, and to `2` if they are always lock-free:
[table
    [[Macro] [Description]]
    [
        [`BOOST_ATOMIC_FLAG_LOCK_FREE`]
        [Indicate whether `atomic_flag` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_BOOL_LOCK_FREE`]
        [Indicate whether `atomic<bool>` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR_LOCK_FREE`]
        [Indicate whether `atomic<char>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR16_T_LOCK_FREE`]
        [Indicate whether `atomic<char16_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_CHAR32_T_LOCK_FREE`]
        [Indicate whether `atomic<char32_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_WCHAR_T_LOCK_FREE`]
        [Indicate whether `atomic<wchar_t>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_SHORT_LOCK_FREE`]
        [Indicate whether `atomic<short>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_INT_LOCK_FREE`]
        [Indicate whether `atomic<int>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_LONG_LOCK_FREE`]
        [Indicate whether `atomic<long>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_LLONG_LOCK_FREE`]
        [Indicate whether `atomic<long long>` (including signed/unsigned variants) is lock-free]
    ]
    [
        [`BOOST_ATOMIC_ADDRESS_LOCK_FREE` or `BOOST_ATOMIC_POINTER_LOCK_FREE`]
        [Indicate whether `atomic<T *>` is lock-free]
    ]
    [
        [`BOOST_ATOMIC_THREAD_FENCE`]
        [Indicate whether `atomic_thread_fence` function is lock-free]
    ]
    [
        [`BOOST_ATOMIC_SIGNAL_FENCE`]
        [Indicate whether `atomic_signal_fence` function is lock-free]
    ]
]
In addition to these standard macros, [*Boost.Atomic] also defines a number of extension macros,
which can also be useful. Like the standard ones, these macros are defined to values `0`, `1` and `2`
to indicate whether the corresponding operations are lock-free or not.
[table
    [[Macro] [Description]]
    [
        [`BOOST_ATOMIC_INT8_LOCK_FREE`]
        [Indicate whether `atomic<int8_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT16_LOCK_FREE`]
        [Indicate whether `atomic<int16_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT32_LOCK_FREE`]
        [Indicate whether `atomic<int32_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT64_LOCK_FREE`]
        [Indicate whether `atomic<int64_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_INT128_LOCK_FREE`]
        [Indicate whether `atomic<int128_type>` is lock-free.]
    ]
    [
        [`BOOST_ATOMIC_NO_ATOMIC_FLAG_INIT`]
        [Defined after including `atomic_flag.hpp`, if the implementation
        does not support the `BOOST_ATOMIC_FLAG_INIT` macro for static
        initialization of `atomic_flag`. This macro is typically defined
        for pre-C++11 compilers.]
    ]
]
In the table above, `intN_type` is a type that fits storage of contiguous `N` bits, suitably aligned for atomic operations.
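For example (an illustrative sketch), these macros can be used to select a
different code path when a lock-free 64-bit atomic is not available:

[c++]

    #include <boost/atomic/capabilities.hpp>

    #if BOOST_ATOMIC_INT64_LOCK_FREE == 2
        // Always lock-free: no linking against the compiled part is needed
        // for 64-bit atomics, and they are safe to use where locks are not.
    #else
        // May fall back to locking: linking against the Boost.Atomic binary
        // is required, and a different strategy may be preferable here.
    #endif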
[section:usage_examples Usage examples]

[include examples.qbk]

[section:platform_support Implementing support for additional platforms]

[include platform.qbk]

[/ [xinclude autodoc.xml] ]
[section:limitations Limitations]

While [*Boost.Atomic] strives to implement the atomic operations
from C++11 as faithfully as possible, there are a few
limitations that cannot be lifted without compiler support:
* [*Using non-POD-classes as template parameter to `atomic<T>` results
  in undefined behavior]: This means that any class containing a
  constructor, destructor, virtual methods or access control
  specifications is not a valid argument in C++98. C++11 relaxes
  this slightly by allowing "trivial" classes containing only
  empty constructors. [*Advice]: Use only POD types.
* [*C++98 compilers may transform computation- to control-dependency]:
  Crucially, `memory_order_consume` only affects computationally-dependent
  operations, but in general there is nothing preventing a compiler
  from transforming a computation dependency into a control dependency.
  A C++11 compiler would be forbidden from such a transformation.
  [*Advice]: Use `memory_order_consume` only in conjunction with
  pointer values, as the compiler cannot speculate and transform
  these into control dependencies.
* [*Fence operations enforce "too strong" compiler ordering]:
  Semantically, `memory_order_acquire`/`memory_order_consume`
  and `memory_order_release` need to restrain reordering of
  memory operations only in one direction. Since there is no
  way to express this constraint to the compiler, these act
  as "full compiler barriers" in this implementation. In corner
  cases this may result in less efficient code than a C++11 compiler
  could generate.
* [*No interprocess fallback]: using `atomic<T>` in shared memory only works
  correctly if `atomic<T>::is_lock_free() == true`.
[section:porting Porting]

[section:unit_tests Unit tests]

[*Boost.Atomic] provides a unit test suite to verify that the
implementation behaves as expected:
* [*fallback_api.cpp] verifies that the fallback-to-locking aspect
  of [*Boost.Atomic] compiles and has correct value semantics.
* [*native_api.cpp] verifies that all atomic operations have correct
  value semantics (e.g. "fetch_add" really adds the desired value,
  returning the previous). It is a rough "smoke-test" to help weed
  out the most obvious mistakes (for example width overflow,
  signed/unsigned extension, ...).
* [*lockfree.cpp] verifies that the [*BOOST_ATOMIC_*_LOCK_FREE] macros
  are set properly according to the expectations for a given
  platform, and that they match up with the [*is_always_lock_free] and
  [*is_lock_free] members of the [*atomic] object instances.
* [*atomicity.cpp] lets two threads race against each other modifying
  a shared variable, verifying that the operations behave atomically
  as appropriate. By nature, this test is necessarily stochastic, and
  the test self-calibrates to yield 99% confidence that a
  positive result indicates absence of an error. This test is
  already very useful on uni-processor systems with preemption.
* [*ordering.cpp] lets two threads race against each other accessing
  multiple shared variables, verifying that the operations
  exhibit the expected ordering behavior. By nature, this test is
  necessarily stochastic, and the test attempts to self-calibrate to
  yield 99% confidence that a positive result indicates absence
  of an error. This only works on true multi-processor (or multi-core)
  systems. It does not yield any result on uni-processor systems
  or emulators (due to there being no observable reordering even
  in the order=relaxed case) and will report that fact.
[section:tested_compilers Tested compilers]

[*Boost.Atomic] has been tested on and is known to work on
the following compilers/platforms:

* gcc 4.x: i386, x86_64, ppc32, ppc64, sparcv9, armv6, alpha
* Visual Studio Express 2008/Windows XP, x86, x64, ARM

[section:acknowledgements Acknowledgements]

* Adam Wulkiewicz created the logo used on the [@https://github.com/boostorg/atomic GitHub project page]. The logo was taken from his [@https://github.com/awulkiew/boost-logos collection] of Boost logos.