//! Epoch-based memory reclamation.
//!
//! An interesting problem concurrent collections deal with comes from the remove operation.
//! Suppose that a thread removes an element from a lock-free map, while another thread is reading
//! that same element at the same time. The first thread must wait until the second thread stops
//! reading the element. Only then is it safe to destruct it.
//!
//! Programming languages that come with garbage collectors solve this problem trivially. The
//! garbage collector will destruct the removed element when no thread can hold a reference to it
//! anymore.
//!
//! This crate implements a basic memory reclamation mechanism, which is based on epochs. When an
//! element gets removed from a concurrent collection, it is inserted into a pile of garbage and
//! marked with the current epoch. Every time a thread accesses a collection, it checks the current
//! epoch, attempts to increment it, and destructs some garbage that became so old that no thread
//! can be referencing it anymore.
//!
//! That is the general mechanism behind epoch-based memory reclamation, but the details are a bit
//! more complicated. Anyhow, memory reclamation is designed to be fully automatic and something
//! users of concurrent collections don't have to worry much about.
//!
//! # Pointers
//!
//! Concurrent collections are built using atomic pointers. This module provides [`Atomic`], which
//! is just a shared atomic pointer to a heap-allocated object. Loading an [`Atomic`] yields a
//! [`Shared`], which is an epoch-protected pointer through which the loaded object can be safely
//! read.
//!
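//! As a sketch of how these pieces fit together (using this crate's public API as published
//! under the name `crossbeam_epoch`; the concrete values are illustrative):
//!
//! ```
//! use crossbeam_epoch::{self as epoch, Atomic};
//! use std::sync::atomic::Ordering::SeqCst;
//!
//! // A shared atomic pointer to a heap-allocated integer.
//! let a = Atomic::new(7);
//!
//! // Pin the current participant before loading.
//! let guard = epoch::pin();
//!
//! // Loading yields a `Shared` pointer that is valid for the guard's lifetime.
//! let p = a.load(SeqCst, &guard);
//!
//! // While the guard is alive, the pointed-to object cannot be destructed,
//! // so dereferencing it is sound.
//! unsafe {
//!     assert_eq!(p.as_ref(), Some(&7));
//! }
//! ```
//!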
//! # Pinning
//!
//! Before an [`Atomic`] can be loaded, a participant must be [`pin`]ned. By pinning a participant
//! we declare that any object that gets removed from now on must not be destructed just
//! yet. Garbage collection of newly removed objects is suspended until the participant gets
//! unpinned.
//!
//! # Garbage
//!
//! Objects that get removed from concurrent collections must be stashed away until all currently
//! pinned participants get unpinned. Such objects can be stored into a thread-local or global
//! storage, where they are kept until the right time for their destruction comes.
//!
//! There is a global shared instance of the garbage queue. You can [`defer`](Guard::defer) the
//! execution of an arbitrary function until the global epoch is advanced enough. Most notably,
//! concurrent data structures may defer the deallocation of an object.
//!
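//! For example, after swapping out an object, its destruction can be deferred rather than
//! performed immediately (a sketch using this crate's public API):
//!
//! ```
//! use crossbeam_epoch::{self as epoch, Atomic, Owned};
//! use std::sync::atomic::Ordering::SeqCst;
//!
//! let a = Atomic::new(String::from("hello"));
//! let guard = epoch::pin();
//!
//! // Replace the current object with a new one; `swap` returns the old pointer.
//! let old = a.swap(Owned::new(String::from("world")), SeqCst, &guard);
//!
//! // Other pinned participants may still be reading the old object, so defer
//! // its destruction until the global epoch has advanced far enough.
//! unsafe {
//!     guard.defer_destroy(old);
//! }
//! ```
//!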
//! # APIs
//!
//! For the majority of use cases, just use the default garbage collector by invoking [`pin`]. If
//! you want to create your own garbage collector, use the [`Collector`] API.
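//!
//! A custom collector can be sketched like this:
//!
//! ```
//! use crossbeam_epoch::Collector;
//!
//! // A collector with its own global epoch and garbage queues.
//! let collector = Collector::new();
//!
//! // Each thread registers itself as a participant and receives a local handle.
//! let handle = collector.register();
//!
//! // Pinning through the handle works like `pin`, but against this collector.
//! let guard = handle.pin();
//! drop(guard);
//! ```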

#![doc(test(
    no_crate_inject,
    attr(
        deny(warnings, rust_2018_idioms),
        allow(dead_code, unused_assignments, unused_variables)
    )
))]
#![warn(missing_docs, missing_debug_implementations)]
#![cfg_attr(not(feature = "std"), no_std)]

#[cfg(crossbeam_loom)]
extern crate loom_crate as loom;

#[cfg(crossbeam_loom)]
#[allow(unused_imports, dead_code)]
pub(crate) mod primitive {
    pub(crate) mod cell {
        pub(crate) use loom::cell::UnsafeCell;
    }
    pub(crate) mod sync {
        pub(crate) mod atomic {
            use core::sync::atomic::Ordering;
            pub(crate) use loom::sync::atomic::{fence, AtomicUsize};

            // FIXME: loom does not support compiler_fence at the moment.
            // https://github.com/tokio-rs/loom/issues/117
            // we use fence as a stand-in for compiler_fence for the time being.
            // this may miss some races since fence is stronger than compiler_fence,
            // but it's the best we can do for the time being.
            pub(crate) use self::fence as compiler_fence;
        }
        pub(crate) use loom::sync::Arc;
    }
    pub(crate) use loom::thread_local;
}

#[cfg(not(crossbeam_no_atomic_cas))]
#[cfg(not(crossbeam_loom))]
#[allow(unused_imports, dead_code)]
pub(crate) mod primitive {
    #[cfg(feature = "alloc")]
    pub(crate) mod cell {
        #[derive(Debug)]
        pub(crate) struct UnsafeCell<T>(::core::cell::UnsafeCell<T>);

        // loom's UnsafeCell has a slightly different API than the standard library UnsafeCell.
        // Since we want the rest of the code to be agnostic to whether it's running under loom or
        // not, we write this small wrapper that provides the loom-supported API for the standard
        // library UnsafeCell. This is also what the loom documentation recommends:
        // https://github.com/tokio-rs/loom#handling-loom-api-differences
        impl<T> UnsafeCell<T> {
            pub(crate) const fn new(data: T) -> UnsafeCell<T> {
                UnsafeCell(::core::cell::UnsafeCell::new(data))
            }

            pub(crate) fn with<R>(&self, f: impl FnOnce(*const T) -> R) -> R {
                f(self.0.get())
            }

            pub(crate) fn with_mut<R>(&self, f: impl FnOnce(*mut T) -> R) -> R {
                f(self.0.get())
            }
        }
    }

    #[cfg(feature = "alloc")]
    pub(crate) mod sync {
        pub(crate) mod atomic {
            pub(crate) use core::sync::atomic::compiler_fence;
            pub(crate) use core::sync::atomic::fence;
            pub(crate) use core::sync::atomic::AtomicUsize;
        }
        pub(crate) use alloc::sync::Arc;
    }

    #[cfg(feature = "std")]
    pub(crate) use std::thread_local;
}

#[cfg(not(crossbeam_no_atomic_cas))]
cfg_if::cfg_if! {
    if #[cfg(feature = "alloc")] {
        extern crate alloc;

        mod atomic;
        mod collector;
        mod deferred;
        mod epoch;
        mod guard;
        mod internal;
        mod sync;

        pub use self::atomic::{
            Pointable, Atomic, CompareExchangeError,
            Owned, Pointer, Shared,
        };
        pub use self::collector::{Collector, LocalHandle};
        pub use self::guard::{unprotected, Guard};

        #[allow(deprecated)]
        pub use self::atomic::{CompareAndSetError, CompareAndSetOrdering};
    }
}

cfg_if::cfg_if! {
    if #[cfg(feature = "std")] {
        mod default;
        pub use self::default::{default_collector, is_pinned, pin};
    }
}