//! ## The threading model
//!
//! An executing Rust program consists of a collection of native OS threads,
//! each with its own stack and local state. Threads can be named, and
//! provide some built-in support for low-level synchronization.
//!
//! Communication between threads can be done through [channels], Rust's
//! message-passing types, along with [other forms of thread
//! synchronization](../../std/sync/index.html) and shared-memory data
//! structures. In particular, types that are guaranteed to be
//! thread-safe are easily shared between threads using the
//! atomically-reference-counted container, [`Arc`].
//!
//! Fatal logic errors in Rust cause *thread panic*, during which
//! a thread will unwind the stack, running destructors and freeing
//! owned resources. While not meant as a 'try/catch' mechanism, panics
//! in Rust can nonetheless be caught (unless compiling with `panic=abort`) with
//! [`catch_unwind`](../../std/panic/fn.catch_unwind.html) and recovered
//! from, or alternatively be resumed with
//! [`resume_unwind`](../../std/panic/fn.resume_unwind.html). If the panic
//! is not caught, the thread will exit, but the panic may optionally be
//! detected from a different thread with [`join`]. If the main thread panics
//! without the panic being caught, the application will exit with a
//! non-zero exit code.
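The two recovery paths described above can be sketched in a small program; `catch_unwind` observes a panic within the panicking thread, while `join` observes it from the outside (the panic messages printed to stderr are expected):

```rust
use std::panic;
use std::thread;

fn main() {
    // Catch a panic within the current thread and recover from it.
    let caught = panic::catch_unwind(|| {
        panic!("recoverable");
    });
    assert!(caught.is_err());

    // Detect a panic in a spawned thread through the `Err` returned by `join`.
    let handle = thread::spawn(|| {
        panic!("detected via join");
    });
    assert!(handle.join().is_err());
}
```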
//! When the main thread of a Rust program terminates, the entire program shuts
//! down, even if other threads are still running. However, this module provides
//! convenient facilities for automatically waiting for the termination of a
//! child thread (i.e., join).
//! ## Spawning a thread
//!
//! A new thread can be spawned using the [`thread::spawn`][`spawn`] function:
//!
//! ```rust
//! use std::thread;
//!
//! thread::spawn(move || {
//!     // some work here
//! });
//! ```
//!
//! In this example, the spawned thread is "detached" from the current
//! thread. This means that it can outlive its parent (the thread that spawned
//! it), unless this parent is the main thread.
//!
//! The parent thread can also wait on the completion of the child
//! thread; a call to [`spawn`] produces a [`JoinHandle`], which provides
//! a `join` method for waiting:
//!
//! ```rust
//! use std::thread;
//!
//! let child = thread::spawn(move || {
//!     // some work here
//! });
//! // some work here
//! let res = child.join();
//! ```
//! The [`join`] method returns a [`thread::Result`] containing [`Ok`] of the final
//! value produced by the child thread, or [`Err`] of the value given to
//! a call to [`panic!`] if the child panicked.
//!
//! ## Configuring threads
//!
//! A new thread can be configured before it is spawned via the [`Builder`] type,
//! which currently allows you to set the name and stack size for the child thread:
//!
//! ```rust
//! # #![allow(unused_must_use)]
//! use std::thread;
//!
//! thread::Builder::new().name("child1".to_string()).spawn(move || {
//!     println!("Hello, world!");
//! });
//! ```
//! ## The `Thread` type
//!
//! Threads are represented via the [`Thread`] type, which you can get in one of
//! two ways:
//!
//! * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
//!   function, and calling [`thread`][`JoinHandle::thread`] on the [`JoinHandle`].
//! * By requesting the current thread, using the [`thread::current`] function.
//!
//! The [`thread::current`] function is available even for threads not spawned
//! by the APIs of this module.
//! ## Thread-local storage
//!
//! This module also provides an implementation of thread-local storage for Rust
//! programs. Thread-local storage is a method of storing data into a global
//! variable that each thread in the program will have its own copy of.
//! Threads do not share this data, so accesses do not need to be synchronized.
//!
//! A thread-local key owns the value it contains and will destroy the value when the
//! thread exits. It is created with the [`thread_local!`] macro and can contain any
//! value that is `'static` (no borrowed pointers). It provides an accessor function,
//! [`with`], that yields a shared reference to the value to the specified
//! closure. Thread-local keys allow only shared access to values, as there would be no
//! way to guarantee uniqueness if mutable borrows were allowed. Most values
//! will want to make use of some form of **interior mutability** through the
//! [`Cell`] or [`RefCell`] types.
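As a minimal sketch of the pattern described above, combining `thread_local!` with interior mutability through `Cell` (the key name `COUNT` is illustrative):

```rust
use std::cell::Cell;
use std::thread;

thread_local! {
    // Each thread gets its own copy, initialized to 0.
    static COUNT: Cell<u32> = Cell::new(0);
}

fn main() {
    // Mutate through the shared reference via `Cell`.
    COUNT.with(|c| c.set(c.get() + 1));

    let child = thread::spawn(|| {
        // The child sees its own fresh copy, not the parent's value.
        COUNT.with(|c| c.get())
    });

    assert_eq!(child.join().unwrap(), 0);
    assert_eq!(COUNT.with(|c| c.get()), 1);
}
```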
//! ## Naming threads
//!
//! Threads are able to have associated names for identification purposes. By default, spawned
//! threads are unnamed. To specify a name for a thread, build the thread with [`Builder`] and pass
//! the desired thread name to [`Builder::name`]. To retrieve the thread name from within the
//! thread, use [`Thread::name`]. A couple of examples where the name of a thread gets used:
//!
//! * If a panic occurs in a named thread, the thread name will be printed in the panic message.
//! * The thread name is provided to the OS where applicable (e.g., `pthread_setname_np` in
//!   unix-like platforms).
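The naming round-trip can be sketched as follows (the name `"worker"` is illustrative):

```rust
use std::thread;

fn main() {
    let handle = thread::Builder::new()
        .name("worker".to_string())
        .spawn(|| {
            // Retrieve the name from within the thread itself.
            assert_eq!(thread::current().name(), Some("worker"));
        })
        .unwrap();

    handle.join().unwrap();
}
```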
//! ## Stack size
//!
//! The default stack size for spawned threads is 2 MiB, though this particular stack size is
//! subject to change in the future. There are two ways to manually specify the stack size for
//! spawned threads:
//!
//! * Build the thread with [`Builder`] and pass the desired stack size to [`Builder::stack_size`].
//! * Set the `RUST_MIN_STACK` environment variable to an integer representing the desired stack
//!   size (in bytes). Note that setting [`Builder::stack_size`] will override this.
//!
//! Note that the stack size of the main thread is *not* determined by Rust.
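A sketch of the first option; the 4 MiB figure is an arbitrary illustrative choice, and the platform may round the requested size up:

```rust
use std::thread;

fn main() {
    // Request a 4 MiB stack instead of the default.
    let handle = thread::Builder::new()
        .stack_size(4 * 1024 * 1024)
        .spawn(|| {
            // A larger stack permits bigger stack allocations,
            // such as this 1 MiB array.
            let big = [0u8; 1024 * 1024];
            big.len()
        })
        .unwrap();

    assert_eq!(handle.join().unwrap(), 1024 * 1024);
}
```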
//! [channels]: crate::sync::mpsc
//! [`join`]: JoinHandle::join
//! [`Result`]: crate::result::Result
//! [`Ok`]: crate::result::Result::Ok
//! [`Err`]: crate::result::Result::Err
//! [`thread::current`]: current
//! [`thread::Result`]: Result
//! [`unpark`]: Thread::unpark
//! [`thread::park_timeout`]: park_timeout
//! [`Cell`]: crate::cell::Cell
//! [`RefCell`]: crate::cell::RefCell
//! [`with`]: LocalKey::with
#![stable(feature = "rust1", since = "1.0.0")]
#![deny(unsafe_op_in_unsafe_fn)]

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests;
use crate::cell::UnsafeCell;
use crate::ffi::{CStr, CString};
use crate::num::NonZeroU64;
use crate::panicking;
use crate::sync::Arc;
use crate::sys::thread as imp;
use crate::sys_common::mutex;
use crate::sys_common::thread;
use crate::sys_common::thread_info;
use crate::sys_common::thread_parker::Parker;
use crate::sys_common::{AsInner, IntoInner};
use crate::time::Duration;
////////////////////////////////////////////////////////////////////////////////
// Thread-local storage
////////////////////////////////////////////////////////////////////////////////
#[unstable(feature = "available_concurrency", issue = "74479")]
mod available_concurrency;

#[stable(feature = "rust1", since = "1.0.0")]
pub use self::local::{AccessError, LocalKey};

#[unstable(feature = "available_concurrency", issue = "74479")]
pub use available_concurrency::available_concurrency;
// The types used by the thread_local! macro to access TLS keys. Note that there
// are two types, the "OS" type and the "fast" type. The OS thread-local key
// type is accessed via platform-specific API calls and is slow, while the fast
// key type is accessed via code generated by LLVM, where TLS keys are set up
// by the ELF linker. Note that the OS TLS type is always available: on macOS
// the standard library is compiled with support for older platform versions
// where fast TLS was not available; end-user code is compiled with fast TLS
// where available, but both are needed.
#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(target_thread_local)]
pub use self::local::fast::Key as __FastLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
pub use self::local::os::Key as __OsLocalKeyInner;
#[unstable(feature = "libstd_thread_internals", issue = "none")]
#[cfg(all(target_arch = "wasm32", not(target_feature = "atomics")))]
pub use self::local::statik::Key as __StaticLocalKeyInner;
////////////////////////////////////////////////////////////////////////////////
// Builder
////////////////////////////////////////////////////////////////////////////////
/// Thread factory, which can be used in order to configure the properties of
/// a new thread.
///
/// Methods can be chained on it in order to configure it.
///
/// The two configurations available are:
///
/// - [`name`]: specifies an [associated name for the thread][naming-threads]
/// - [`stack_size`]: specifies the [desired stack size for the thread][stack-size]
///
/// The [`spawn`] method will take ownership of the builder and create an
/// [`io::Result`] to the thread handle with the given configuration.
///
/// The [`thread::spawn`] free function uses a `Builder` with default
/// configuration and [`unwrap`]s its return value.
///
/// You may want to use [`spawn`] instead of [`thread::spawn`], when you want
/// to recover from a failure to launch a thread: the free function will
/// panic where the `Builder` method will return an [`io::Result`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
///
/// [`stack_size`]: Builder::stack_size
/// [`name`]: Builder::name
/// [`spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
/// [`io::Result`]: crate::io::Result
/// [`unwrap`]: crate::result::Result::unwrap
/// [naming-threads]: ./index.html#naming-threads
/// [stack-size]: ./index.html#stack-size
#[stable(feature = "rust1", since = "1.0.0")]
#[derive(Debug)]
pub struct Builder {
    // A name for the thread-to-be, for identification in panic messages
    name: Option<String>,
    // The size of the stack for the spawned thread in bytes
    stack_size: Option<usize>,
}
/// Generates the base configuration for spawning a thread, from which
/// configuration methods can be chained.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new()
///     .name("foo".into())
///     .stack_size(32 * 1024);
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new() -> Builder {
    Builder { name: None, stack_size: None }
}
/// Names the thread-to-be. Currently the name is used for identification
/// only in panic messages.
///
/// The name must not contain null bytes (`\0`).
///
/// For more information about named threads, see
/// [this module-level documentation][naming-threads].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new()
///     .name("foo".into());
///
/// let handler = builder.spawn(|| {
///     assert_eq!(thread::current().name(), Some("foo"))
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
///
/// [naming-threads]: ./index.html#naming-threads
#[stable(feature = "rust1", since = "1.0.0")]
pub fn name(mut self, name: String) -> Builder {
    self.name = Some(name);
    self
}
/// Sets the size of the stack (in bytes) for the new thread.
///
/// The actual stack size may be greater than this value if
/// the platform specifies a minimal stack size.
///
/// For more information about the stack size for threads, see
/// [this module-level documentation][stack-size].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new().stack_size(32 * 1024);
/// ```
///
/// [stack-size]: ./index.html#stack-size
#[stable(feature = "rust1", since = "1.0.0")]
pub fn stack_size(mut self, size: usize) -> Builder {
    self.stack_size = Some(size);
    self
}
/// Spawns a new thread by taking ownership of the `Builder`, and returns an
/// [`io::Result`] to its [`JoinHandle`].
///
/// The spawned thread may outlive the caller (unless the caller thread
/// is the main thread; the whole process is terminated when the main
/// thread finishes). The join handle can be used to block on
/// termination of the child thread, including recovering its panics.
///
/// For more complete documentation, see [`thread::spawn`][`spawn`].
///
/// # Errors
///
/// Unlike the [`spawn`] free function, this method yields an
/// [`io::Result`] to capture any failure to create the thread at
/// the OS level.
///
/// [`io::Result`]: crate::io::Result
///
/// # Panics
///
/// Panics if a thread name was set and it contained null bytes.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let handler = builder.spawn(|| {
///     // thread code
/// }).unwrap();
///
/// handler.join().unwrap();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(self, f: F) -> io::Result<JoinHandle<T>>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    unsafe { self.spawn_unchecked(f) }
}
/// Spawns a new thread without any lifetime restrictions by taking ownership
/// of the `Builder`, and returns an [`io::Result`] to its [`JoinHandle`].
///
/// The spawned thread may outlive the caller (unless the caller thread
/// is the main thread; the whole process is terminated when the main
/// thread finishes). The join handle can be used to block on
/// termination of the child thread, including recovering its panics.
///
/// This method is identical to [`thread::Builder::spawn`][`Builder::spawn`],
/// except for the relaxed lifetime bounds, which render it unsafe.
/// For more complete documentation, see [`thread::spawn`][`spawn`].
///
/// # Errors
///
/// Unlike the [`spawn`] free function, this method yields an
/// [`io::Result`] to capture any failure to create the thread at
/// the OS level.
///
/// # Panics
///
/// Panics if a thread name was set and it contained null bytes.
///
/// # Safety
///
/// The caller has to ensure that no references in the supplied thread closure
/// or its return type can outlive the spawned thread's lifetime. This can be
/// guaranteed in two ways:
///
/// - ensure that [`join`][`JoinHandle::join`] is called before any referenced
///   objects are dropped
/// - use only types with `'static` lifetime bounds, i.e., those with no or only
///   `'static` references (both [`thread::Builder::spawn`][`Builder::spawn`]
///   and [`thread::spawn`][`spawn`] enforce this property statically)
///
/// # Examples
///
/// ```
/// #![feature(thread_spawn_unchecked)]
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let x = 1;
/// let thread_x = &x;
///
/// let handler = unsafe {
///     builder.spawn_unchecked(move || {
///         println!("x = {}", *thread_x);
///     }).unwrap()
/// };
///
/// // caller has to ensure `join()` is called, otherwise
/// // it is possible to access freed memory if `x` gets
/// // dropped before the thread closure is executed!
/// handler.join().unwrap();
/// ```
///
/// [`io::Result`]: crate::io::Result
#[unstable(feature = "thread_spawn_unchecked", issue = "55132")]
pub unsafe fn spawn_unchecked<'a, F, T>(self, f: F) -> io::Result<JoinHandle<T>>
where
    F: FnOnce() -> T,
    F: Send + 'a,
    T: Send + 'a,
{
    let Builder { name, stack_size } = self;

    let stack_size = stack_size.unwrap_or_else(thread::min_stack);

    let my_thread = Thread::new(name);
    let their_thread = my_thread.clone();

    let my_packet: Arc<UnsafeCell<Option<Result<T>>>> = Arc::new(UnsafeCell::new(None));
    let their_packet = my_packet.clone();

    let output_capture = crate::io::set_output_capture(None);
    crate::io::set_output_capture(output_capture.clone());

    let main = move || {
        if let Some(name) = their_thread.cname() {
            imp::Thread::set_name(name);
        }

        crate::io::set_output_capture(output_capture);

        // SAFETY: the stack guard passed is the one for the current thread.
        // This means the current thread's stack and the new thread's stack
        // are properly set and protected from each other.
        thread_info::set(unsafe { imp::guard::current() }, their_thread);
        let try_result = panic::catch_unwind(panic::AssertUnwindSafe(|| {
            crate::sys_common::backtrace::__rust_begin_short_backtrace(f)
        }));
        // SAFETY: `their_packet` has been built just above and moved by the
        // closure (it is an Arc<...>), and `my_packet` will be stored in the
        // same `JoinInner` as this closure, meaning the mutation will be
        // safe (it does not modify a value far away).
        unsafe { *their_packet.get() = Some(try_result) };
    };

    Ok(JoinHandle(JoinInner {
        // `imp::Thread::new` takes a closure with a `'static` lifetime, since it's passed
        // through FFI or otherwise used with low-level threading primitives that have no
        // notion of or way to enforce lifetimes.
        //
        // As mentioned in the `Safety` section of this function's documentation, the caller of
        // this function needs to guarantee that the passed-in lifetime is sufficiently long
        // for the lifetime of the thread.
        //
        // Similarly, the `sys` implementation must guarantee that no references to the closure
        // exist after the thread has terminated, which is signaled by `Thread::join`
        // returning.
        native: unsafe {
            Some(imp::Thread::new(
                stack_size,
                mem::transmute::<Box<dyn FnOnce() + 'a>, Box<dyn FnOnce() + 'static>>(
                    Box::new(main),
                ),
            )?)
        },
        thread: my_thread,
        packet: Packet(my_packet),
    }))
}
////////////////////////////////////////////////////////////////////////////////
// Free functions
////////////////////////////////////////////////////////////////////////////////
/// Spawns a new thread, returning a [`JoinHandle`] for it.
///
/// The join handle will implicitly *detach* the child thread upon being
/// dropped. In this case, the child thread may outlive the parent (unless
/// the parent thread is the main thread; the whole process is terminated when
/// the main thread finishes). Additionally, the join handle provides a [`join`]
/// method that can be used to join the child thread. If the child thread
/// panics, [`join`] will return an [`Err`] containing the argument given to
/// [`panic!`].
///
/// This will create a thread using default parameters of [`Builder`]; if you
/// want to specify the stack size or the name of the thread, use the
/// [`Builder`] API instead.
///
/// As you can see in the signature of `spawn` there are two constraints on
/// both the closure given to `spawn` and its return value; let's explain them:
///
/// - The `'static` constraint means that the closure and its return value
///   must have a lifetime of the whole program execution. The reason for this
///   is that threads can `detach` and outlive the lifetime they have been
///   created in.
///   Indeed if the thread, and by extension its return value, can outlive their
///   caller, we need to make sure that they will be valid afterwards, and since
///   we *can't* know when it will return we need to have them valid as long as
///   possible, that is until the end of the program, hence the `'static`
///   lifetime.
/// - The [`Send`] constraint is because the closure will need to be passed
///   *by value* from the thread where it is spawned to the new thread. Its
///   return value will need to be passed from the new thread to the thread
///   where it is `join`ed.
///   As a reminder, the [`Send`] marker trait expresses that it is safe to be
///   passed from thread to thread. [`Sync`] expresses that it is safe to have a
///   reference be passed from thread to thread.
///
/// # Panics
///
/// Panics if the OS fails to create a thread; use [`Builder::spawn`]
/// to recover from such errors.
///
/// # Examples
///
/// Creating a thread.
///
/// ```
/// use std::thread;
///
/// let handler = thread::spawn(|| {
///     // thread code
/// });
///
/// handler.join().unwrap();
/// ```
///
/// As mentioned in the module documentation, threads are usually made to
/// communicate using [`channels`]; here is how it usually looks.
///
/// This example also shows how to use `move`, in order to give ownership
/// of values to a thread.
///
/// ```
/// use std::thread;
/// use std::sync::mpsc::channel;
///
/// let (tx, rx) = channel();
///
/// let sender = thread::spawn(move || {
///     tx.send("Hello, thread".to_owned())
///         .expect("Unable to send on channel");
/// });
///
/// let receiver = thread::spawn(move || {
///     let value = rx.recv().expect("Unable to receive from channel");
///     println!("{}", value);
/// });
///
/// sender.join().expect("The sender thread has panicked");
/// receiver.join().expect("The receiver thread has panicked");
/// ```
///
/// A thread can also return a value through its [`JoinHandle`]; you can use
/// this to make asynchronous computations (futures might be more appropriate
/// though).
///
/// ```
/// use std::thread;
///
/// let computation = thread::spawn(|| {
///     // Some expensive computation.
///     42
/// });
///
/// let result = computation.join().unwrap();
/// println!("{}", result);
/// ```
///
/// [`channels`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Err`]: crate::result::Result::Err
#[stable(feature = "rust1", since = "1.0.0")]
pub fn spawn<F, T>(f: F) -> JoinHandle<T>
where
    F: FnOnce() -> T,
    F: Send + 'static,
    T: Send + 'static,
{
    Builder::new().spawn(f).expect("failed to spawn thread")
}
/// Gets a handle to the thread that invokes it.
///
/// # Examples
///
/// Getting a handle to the current thread with `thread::current()`:
///
/// ```
/// use std::thread;
///
/// let handler = thread::Builder::new()
///     .name("named thread".into())
///     .spawn(|| {
///         let handle = thread::current();
///         assert_eq!(handle.name(), Some("named thread"));
///     })
///     .unwrap();
///
/// handler.join().unwrap();
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub fn current() -> Thread {
    thread_info::current_thread().expect(
        "use of std::thread::current() is not possible \
         after the thread's local data has been destroyed",
    )
}
/// Cooperatively gives up a timeslice to the OS scheduler.
///
/// This is used when the programmer knows that the thread will have nothing
/// to do for some time, and thus avoids wasting computing time.
///
/// For example, when polling on a resource, it is common to check that it is
/// available, and if not to yield in order to avoid busy waiting.
///
/// Thus the pattern of `yield`ing after a failed poll is rather common when
/// implementing low-level shared resources or synchronization primitives.
///
/// However, programmers will usually prefer to use [`channel`]s, [`Condvar`]s,
/// [`Mutex`]es or [`join`] for their synchronization routines, as they avoid
/// thinking about thread scheduling.
///
/// Note that [`channel`]s for example are implemented using this primitive.
/// Indeed, when you call `send` or `recv`, which are blocking, they will yield
/// if the channel is not available.
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// thread::yield_now();
/// ```
///
/// [`channel`]: crate::sync::mpsc
/// [`join`]: JoinHandle::join
/// [`Condvar`]: crate::sync::Condvar
/// [`Mutex`]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn yield_now() {
    imp::Thread::yield_now()
}
/// Determines whether the current thread is unwinding because of a panic.
///
/// A common use of this feature is to poison shared resources when writing
/// unsafe code, by checking `panicking` when `drop` is called.
///
/// This is usually not needed when writing safe code, as [`Mutex`es][Mutex]
/// already poison themselves when a thread panics while holding the lock.
///
/// This can also be used in multithreaded applications, in order to send a
/// message to other threads warning that a thread has panicked (e.g., for
/// monitoring purposes).
///
/// # Examples
///
/// ```should_panic
/// use std::thread;
///
/// struct SomeStruct;
///
/// impl Drop for SomeStruct {
///     fn drop(&mut self) {
///         if thread::panicking() {
///             println!("dropped while unwinding");
///         } else {
///             println!("dropped while not unwinding");
///         }
///     }
/// }
///
/// {
///     let a = SomeStruct;
/// }
///
/// {
///     let b = SomeStruct;
///     panic!()
/// }
/// ```
///
/// [Mutex]: crate::sync::Mutex
#[stable(feature = "rust1", since = "1.0.0")]
pub fn panicking() -> bool {
    panicking::panicking()
}
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
///
/// # Examples
///
/// ```no_run
/// use std::thread;
///
/// // Let's sleep for 2 seconds:
/// thread::sleep_ms(2000);
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(since = "1.6.0", reason = "replaced by `std::thread::sleep`")]
pub fn sleep_ms(ms: u32) {
    sleep(Duration::from_millis(ms as u64))
}
/// Puts the current thread to sleep for at least the specified amount of time.
///
/// The thread may sleep longer than the duration specified due to scheduling
/// specifics or platform-dependent functionality. It will never sleep less.
///
/// This function is blocking, and should not be used in `async` functions.
///
/// # Platform-specific behavior
///
/// On Unix platforms, the underlying syscall may be interrupted by a
/// spurious wakeup or signal handler. To ensure the sleep occurs for at least
/// the specified duration, this function may invoke that system call multiple
/// times.
/// Platforms which do not support nanosecond precision for sleeping will
/// have `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// Currently, specifying a zero duration on Unix platforms returns immediately
/// without invoking the underlying [`nanosleep`] syscall, whereas on Windows
/// platforms the underlying [`Sleep`] syscall is always invoked.
/// If the intention is to yield the current time-slice you may want to use
/// [`yield_now`] instead.
///
/// [`nanosleep`]: https://linux.die.net/man/2/nanosleep
/// [`Sleep`]: https://docs.microsoft.com/en-us/windows/win32/api/synchapi/nf-synchapi-sleep
///
/// # Examples
///
/// ```no_run
/// use std::{thread, time};
///
/// let ten_millis = time::Duration::from_millis(10);
/// let now = time::Instant::now();
///
/// thread::sleep(ten_millis);
///
/// assert!(now.elapsed() >= ten_millis);
/// ```
#[stable(feature = "thread_sleep", since = "1.4.0")]
pub fn sleep(dur: Duration) {
    imp::Thread::sleep(dur)
}
/// Blocks unless or until the current thread's token is made available.
///
/// A call to `park` does not guarantee that the thread will remain parked
/// forever, and callers should be prepared for this possibility.
///
/// # park and unpark
///
/// Every thread is equipped with some basic low-level blocking support, via the
/// [`thread::park`][`park`] function and [`thread::Thread::unpark`][`unpark`]
/// method. [`park`] blocks the current thread, which can then be resumed from
/// another thread by calling the [`unpark`] method on the blocked thread's
/// handle.
///
/// Conceptually, each [`Thread`] handle has an associated token, which is
/// initially not present:
///
/// * The [`thread::park`][`park`] function blocks the current thread unless or
///   until the token is available for its thread handle, at which point it
///   atomically consumes the token. It may also return *spuriously*, without
///   consuming the token. [`thread::park_timeout`] does the same, but allows
///   specifying a maximum time to block the thread for.
///
/// * The [`unpark`] method on a [`Thread`] atomically makes the token available
///   if it wasn't already. Because the token is initially absent, [`unpark`]
///   followed by [`park`] will result in the second call returning immediately.
///
/// In other words, each [`Thread`] acts a bit like a spinlock that can be
/// locked and unlocked using `park` and `unpark`.
///
/// Notice that being unblocked does not imply any synchronization with someone
/// that unparked this thread; it could also be spurious.
/// For example, it would be a valid, but inefficient, implementation to make both [`park`] and
/// [`unpark`] return immediately without doing anything.
///
/// The API is typically used by acquiring a handle to the current thread,
/// placing that handle in a shared data structure so that other threads can
/// find it, and then `park`ing in a loop. When some desired condition is met, another
/// thread calls [`unpark`] on the handle.
///
/// The motivation for this design is twofold:
///
/// * It avoids the need to allocate mutexes and condvars when building new
///   synchronization primitives; the threads already provide basic
///   blocking/signaling.
///
/// * It can be implemented very efficiently on many platforms.
///
/// # Examples
///
/// ```
/// use std::thread;
/// use std::sync::{Arc, atomic::{Ordering, AtomicBool}};
/// use std::time::Duration;
///
/// let flag = Arc::new(AtomicBool::new(false));
/// let flag2 = Arc::clone(&flag);
///
/// let parked_thread = thread::spawn(move || {
///     // We want to wait until the flag is set. We *could* just spin, but using
///     // park/unpark is more efficient.
///     while !flag2.load(Ordering::Acquire) {
///         println!("Parking thread");
///         thread::park();
///         // We *could* get here spuriously, i.e., way before the 10ms below are over!
///         // But that is no problem, we are in a loop until the flag is set anyway.
///         println!("Thread unparked");
///     }
///     println!("Flag received");
/// });
///
/// // Let some time pass for the thread to be spawned.
/// thread::sleep(Duration::from_millis(10));
///
/// // Set the flag, and let the thread wake up.
/// // There is no race condition here, if `unpark`
/// // happens first, `park` will return immediately.
/// // Hence there is no risk of a deadlock.
/// flag.store(true, Ordering::Release);
/// println!("Unpark the thread");
/// parked_thread.thread().unpark();
///
/// parked_thread.join().unwrap();
/// ```
///
/// [`unpark`]: Thread::unpark
/// [`thread::park_timeout`]: park_timeout
#[stable(feature = "rust1", since = "1.0.0")]
pub fn park() {
    // SAFETY: park is called on the parker owned by this thread.
    unsafe {
        current().inner.parker.park();
    }
}
/// Use [`park_timeout`].
///
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`] except
/// that the thread will be blocked for roughly no longer than `ms`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that may not cause the maximum
/// amount of time waited to be precisely `ms` long.
///
/// See the [park documentation][`park`] for more detail.
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(since = "1.6.0", reason = "replaced by `std::thread::park_timeout`")]
pub fn park_timeout_ms(ms: u32) {
    park_timeout(Duration::from_millis(ms as u64))
}
/// Blocks unless or until the current thread's token is made available or
/// the specified duration has been reached (may wake spuriously).
///
/// The semantics of this function are equivalent to [`park`][park] except
/// that the thread will be blocked for roughly no longer than `dur`. This
/// method should not be used for precise timing due to anomalies such as
/// preemption or platform differences that may not cause the maximum
/// amount of time waited to be precisely `dur` long.
///
/// See the [park documentation][park] for more details.
///
/// # Platform-specific behavior
///
/// Platforms which do not support nanosecond precision for sleeping will have
/// `dur` rounded up to the nearest granularity of time they can sleep for.
///
/// # Examples
///
/// Waiting for the complete expiration of the timeout:
///
/// ```rust,no_run
/// use std::thread::park_timeout;
/// use std::time::{Instant, Duration};
///
/// let timeout = Duration::from_secs(2);
/// let beginning_park = Instant::now();
///
/// let mut timeout_remaining = timeout;
/// loop {
///     park_timeout(timeout_remaining);
///     let elapsed = beginning_park.elapsed();
///     if elapsed >= timeout {
///         break;
///     }
///     println!("restarting park_timeout after {:?}", elapsed);
///     timeout_remaining = timeout - elapsed;
/// }
/// ```
#[stable(feature = "park_timeout", since = "1.4.0")]
pub fn park_timeout(dur: Duration) {
    // SAFETY: park_timeout is called on the parker owned by this thread.
    unsafe {
        current().inner.parker.park_timeout(dur);
    }
}
////////////////////////////////////////////////////////////////////////////////
// ThreadId
////////////////////////////////////////////////////////////////////////////////
/// A unique identifier for a running thread.
///
/// A `ThreadId` is an opaque object that has a unique value for each thread
/// that creates one. `ThreadId`s are not guaranteed to correspond to a thread's
/// system-designated identifier. A `ThreadId` can be retrieved from the [`id`]
/// method on a [`Thread`].
///
/// # Examples
///
/// ```
/// use std::thread;
///
/// let other_thread = thread::spawn(|| {
///     thread::current().id()
/// });
///
/// let other_thread_id = other_thread.join().unwrap();
/// assert!(thread::current().id() != other_thread_id);
/// ```
///
/// [`id`]: Thread::id
#[stable(feature = "thread_id", since = "1.19.0")]
#[derive(Eq, PartialEq, Clone, Copy, Hash, Debug)]
pub struct ThreadId(NonZeroU64);
impl ThreadId {
    // Generate a new unique thread ID.
    fn new() -> ThreadId {
        // It is UB to attempt to acquire this mutex reentrantly!
        static GUARD: mutex::StaticMutex = mutex::StaticMutex::new();
        static mut COUNTER: u64 = 1;

        unsafe {
            let _guard = GUARD.lock();

            // If we somehow use up all our bits, panic so that we're not
            // covering up subtle bugs of IDs being reused.
            if COUNTER == u64::MAX {
                panic!("failed to generate unique thread ID: bitspace exhausted");
            }

            let id = COUNTER;
            COUNTER += 1;

            ThreadId(NonZeroU64::new(id).unwrap())
        }
    }

    /// This returns a numeric identifier for the thread identified by this
    /// `ThreadId`.
    ///
    /// As noted in the documentation for the type itself, it is essentially an
    /// opaque ID, but is guaranteed to be unique for each thread. The returned
    /// value is entirely opaque -- only equality testing is stable. Note that
    /// it is not guaranteed which values new threads will return, and this may
    /// change across Rust versions.
    #[unstable(feature = "thread_id_value", issue = "67939")]
    pub fn as_u64(&self) -> NonZeroU64 {
        self.0
    }
}
////////////////////////////////////////////////////////////////////////////////
// Thread
////////////////////////////////////////////////////////////////////////////////
/// The internal representation of a `Thread` handle
struct Inner {
    name: Option<CString>, // Guaranteed to be UTF-8
    id: ThreadId,
    parker: Parker,
}
#[stable(feature = "rust1", since = "1.0.0")]
/// A handle to a thread.
///
/// Threads are represented via the `Thread` type, which you can get in one of
/// two ways:
///
/// * By spawning a new thread, e.g., using the [`thread::spawn`][`spawn`]
///   function, and calling [`thread`][`JoinHandle::thread`] on the
///   [`JoinHandle`].
/// * By requesting the current thread, using the [`thread::current`] function.
///
/// The [`thread::current`] function is available even for threads not spawned
/// by the APIs of this module.
///
/// There is usually no need to create a `Thread` struct yourself; one
/// should instead use a function like `spawn` to create new threads. See the
/// docs of [`Builder`] and [`spawn`] for more details.
///
/// [`thread::current`]: current
pub struct Thread {
    inner: Arc<Inner>,
}
impl Thread {
    // Used only internally to construct a thread object without spawning.
    // Panics if the name contains nuls.
    pub(crate) fn new(name: Option<String>) -> Thread {
        let cname =
            name.map(|n| CString::new(n).expect("thread name may not contain interior null bytes"));
        Thread { inner: Arc::new(Inner { name: cname, id: ThreadId::new(), parker: Parker::new() }) }
    }
    /// Atomically makes the handle's token available if it is not already.
    ///
    /// Every thread is equipped with some basic low-level blocking support, via
    /// the [`park`][park] function and the `unpark()` method. These can be
    /// used as a more CPU-efficient implementation of a spinlock.
    ///
    /// See the [park documentation][park] for more details.
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    /// use std::time::Duration;
    ///
    /// let parked_thread = thread::Builder::new()
    ///     .spawn(|| {
    ///         println!("Parking thread");
    ///         thread::park();
    ///         println!("Thread unparked");
    ///     })
    ///     .unwrap();
    ///
    /// // Let some time pass for the thread to be spawned.
    /// thread::sleep(Duration::from_millis(10));
    ///
    /// println!("Unpark the thread");
    /// parked_thread.thread().unpark();
    ///
    /// parked_thread.join().unwrap();
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    #[inline]
    pub fn unpark(&self) {
        self.inner.parker.unpark();
    }
    /// Gets the thread's unique identifier.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let other_thread = thread::spawn(|| {
    ///     thread::current().id()
    /// });
    ///
    /// let other_thread_id = other_thread.join().unwrap();
    /// assert!(thread::current().id() != other_thread_id);
    /// ```
    #[stable(feature = "thread_id", since = "1.19.0")]
    pub fn id(&self) -> ThreadId {
        self.inner.id
    }
    /// Gets the thread's name.
    ///
    /// For more information about named threads, see
    /// [this module-level documentation][naming-threads].
    ///
    /// # Examples
    ///
    /// Threads by default have no name specified:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let handler = builder.spawn(|| {
    ///     assert!(thread::current().name().is_none());
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// Thread with a specified name:
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new()
    ///     .name("foo".into());
    ///
    /// let handler = builder.spawn(|| {
    ///     assert_eq!(thread::current().name(), Some("foo"))
    /// }).unwrap();
    ///
    /// handler.join().unwrap();
    /// ```
    ///
    /// [naming-threads]: ./index.html#naming-threads
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn name(&self) -> Option<&str> {
        self.cname().map(|s| unsafe { str::from_utf8_unchecked(s.to_bytes()) })
    }
    fn cname(&self) -> Option<&CStr> {
        self.inner.name.as_deref()
    }
}
#[stable(feature = "rust1", since = "1.0.0")]
impl fmt::Debug for Thread {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.debug_struct("Thread").field("id", &self.id()).field("name", &self.name()).finish()
    }
}
////////////////////////////////////////////////////////////////////////////////
// JoinHandle
////////////////////////////////////////////////////////////////////////////////
/// A specialized [`Result`] type for threads.
///
/// Indicates the manner in which a thread exited.
///
/// The value contained in the `Result::Err` variant
/// is the value the thread panicked with;
/// that is, the argument the `panic!` macro was called with.
/// Unlike with normal errors, this value doesn't implement
/// the [`Error`](crate::error::Error) trait.
///
/// Thus, a sensible way to handle a thread panic is to either:
///
/// 1. propagate the panic with [`std::panic::resume_unwind`]
/// 2. or in case the thread is intended to be a subsystem boundary
///    that is supposed to isolate system-level failures,
///    match on the `Err` variant and handle the panic in an appropriate way
///
/// A thread that completes without panicking is considered to exit successfully.
/// # Examples
///
/// Matching on the result of a joined thread:
///
/// ```no_run
/// use std::{fs, thread, panic};
///
/// fn copy_in_thread() -> thread::Result<()> {
///     thread::spawn(|| {
///         fs::copy("foo.txt", "bar.txt").unwrap();
///     }).join()
/// }
///
/// fn main() {
///     match copy_in_thread() {
///         Ok(_) => println!("copy succeeded"),
///         Err(e) => panic::resume_unwind(e),
///     }
/// }
/// ```
///
/// [`Result`]: crate::result::Result
/// [`std::panic::resume_unwind`]: crate::panic::resume_unwind
#[stable(feature = "rust1", since = "1.0.0")]
pub type Result<T> = crate::result::Result<T, Box<dyn Any + Send + 'static>>;
// This packet is used to communicate the return value between the child thread
// and the parent thread. Memory is shared through the `Arc` within and there's
// no need for a mutex here because synchronization happens with `join()` (the
// parent thread never reads this packet until the child has exited).
//
// This packet itself is then stored into a `JoinInner` which in turn is placed
// in `JoinHandle` and `JoinGuard`. Due to the usage of `UnsafeCell` we need to
// manually worry about impls like Send and Sync. The type `T` should
// already always be Send (otherwise the thread could not have been created) and
// this type is inherently Sync because no methods take &self. Regardless,
// however, we add inheriting impls for Send/Sync to this type to ensure it's
// Send/Sync and that future modifications will still appropriately classify it.
struct Packet<T>(Arc<UnsafeCell<Option<Result<T>>>>);

unsafe impl<T: Send> Send for Packet<T> {}
unsafe impl<T: Sync> Sync for Packet<T> {}
/// Inner representation for JoinHandle
struct JoinInner<T> {
    native: Option<imp::Thread>,
    thread: Thread,
    packet: Packet<T>,
}
impl<T> JoinInner<T> {
    fn join(&mut self) -> Result<T> {
        self.native.take().unwrap().join();
        unsafe { (*self.packet.0.get()).take().unwrap() }
    }
}
/// An owned permission to join on a thread (block on its termination).
///
/// A `JoinHandle` *detaches* the associated thread when it is dropped, which
/// means that there is no longer any handle to the thread and no way to `join`
/// on it.
///
/// Due to platform restrictions, it is not possible to [`Clone`] this
/// handle: the ability to join a thread is a uniquely-owned permission.
///
/// This `struct` is created by the [`thread::spawn`] function and the
/// [`thread::Builder::spawn`] method.
/// # Examples
///
/// Creation from [`thread::spawn`]:
///
/// ```
/// use std::thread;
///
/// let join_handle: thread::JoinHandle<_> = thread::spawn(|| {
///     // some work here
/// });
/// ```
///
/// Creation from [`thread::Builder::spawn`]:
///
/// ```
/// use std::thread;
///
/// let builder = thread::Builder::new();
///
/// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
///     // some work here
/// }).unwrap();
/// ```
/// A thread being detached and outliving the thread that spawned it:
///
/// ```no_run
/// use std::thread;
/// use std::time::Duration;
///
/// let original_thread = thread::spawn(|| {
///     let _detached_thread = thread::spawn(|| {
///         // Here we sleep to make sure that the first thread returns before.
///         thread::sleep(Duration::from_millis(10));
///         // This will be called, even though the JoinHandle is dropped.
///         println!("♫ Still alive ♫");
///     });
/// });
///
/// original_thread.join().expect("The thread being joined has panicked");
/// println!("Original thread is joined.");
///
/// // We make sure that the new thread has time to run, before the main
/// // thread returns.
///
/// thread::sleep(Duration::from_millis(1000));
/// ```
///
/// [`thread::Builder::spawn`]: Builder::spawn
/// [`thread::spawn`]: spawn
#[stable(feature = "rust1", since = "1.0.0")]
pub struct JoinHandle<T>(JoinInner<T>);
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Send for JoinHandle<T> {}
#[stable(feature = "joinhandle_impl_send_sync", since = "1.29.0")]
unsafe impl<T> Sync for JoinHandle<T> {}
impl<T> JoinHandle<T> {
    /// Extracts a handle to the underlying thread.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    ///
    /// let thread = join_handle.thread();
    /// println!("thread id: {:?}", thread.id());
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn thread(&self) -> &Thread {
        &self.0.thread
    }
    /// Waits for the associated thread to finish.
    ///
    /// In terms of [atomic memory orderings], the completion of the associated
    /// thread synchronizes with this function returning. In other words, all
    /// operations performed by that thread are ordered before all
    /// operations that happen after `join` returns.
    ///
    /// If the child thread panics, [`Err`] is returned with the parameter given
    /// to [`panic!`].
    ///
    /// [`Err`]: crate::result::Result::Err
    /// [atomic memory orderings]: crate::sync::atomic
    ///
    /// # Panics
    ///
    /// This function may panic on some platforms if a thread attempts to join
    /// itself or otherwise may create a deadlock with joining threads.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::thread;
    ///
    /// let builder = thread::Builder::new();
    ///
    /// let join_handle: thread::JoinHandle<_> = builder.spawn(|| {
    ///     // some work here
    /// }).unwrap();
    /// join_handle.join().expect("Couldn't join on the associated thread");
    /// ```
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn join(mut self) -> Result<T> {
        self.0.join()
    }
}
impl<T> AsInner<imp::Thread> for JoinHandle<T> {
    fn as_inner(&self) -> &imp::Thread {
        self.0.native.as_ref().unwrap()
    }
}
impl<T> IntoInner<imp::Thread> for JoinHandle<T> {
    fn into_inner(self) -> imp::Thread {
        self.0.native.unwrap()
    }
}
#[stable(feature = "std_debug", since = "1.16.0")]
impl<T> fmt::Debug for JoinHandle<T> {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.pad("JoinHandle { .. }")
    }
}
fn _assert_sync_and_send() {
    fn _assert_both<T: Send + Sync>() {}
    _assert_both::<JoinHandle<()>>();
    _assert_both::<Thread>();
}