1//! A "once initialization" primitive
2//!
3//! This primitive is meant to be used to run one-time initialization. An
4//! example use case would be for initializing an FFI library.
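//!
//! A minimal usage sketch (the `init_library` function name here is purely
//! illustrative, not an API of this module):
//!
//! ```
//! use std::sync::Once;
//!
//! static INIT: Once = Once::new();
//!
//! fn init_library() {
//!     INIT.call_once(|| {
//!         // one-time setup goes here, e.g. initializing the FFI library
//!     });
//! }
//! ```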
// A "once" is a relatively simple primitive, and it's typically also provided
// by the OS (see `pthread_once` or `InitOnceExecuteOnce`). The OS primitives,
// however, tend to have surprising restrictions, such as the Unix one not
// allowing an argument to be passed to the function.
//
// As a result, we end up implementing it ourselves in the standard library.
// This also gives us the opportunity to optimize the implementation a bit which
// should help the fast path on call sites. Consequently, let's explain how this
// primitive works now!
//
// So to recap, the guarantees of a Once are that it will call the
// initialization closure at most once, and it will never return until the one
// that's running has finished running. This means that we need some form of
// blocking here while the custom callback is running at the very least.
// Additionally, we add on the restriction of **poisoning**. Whenever an
// initialization closure panics, the Once enters a "poisoned" state which means
// that all future calls will immediately panic as well.
//
// So to implement this, one might first reach for a `Mutex`, but those cannot
// be put into a `static`. It also gets a lot harder with poisoning to figure
// out when the mutex needs to be deallocated because it's not after the closure
// finishes, but after the first successful closure finishes.
//
// All in all, this is instead implemented with atomics and lock-free
// operations! Whee! Each `Once` has one word of atomic state, and this state is
// CAS'd on to determine what to do. There are four possible states of a `Once`:
//
// * Incomplete - no initialization has run yet, and no thread is currently
//                using the Once.
// * Poisoned - some thread has previously attempted to initialize the Once, but
//              it panicked, so the Once is now poisoned. There are no other
//              threads currently accessing this Once.
// * Running - some thread is currently attempting to run initialization. It may
//             succeed, so all future threads need to wait for it to finish.
//             Note that this state is accompanied with a payload, described
//             below.
// * Complete - initialization has completed and all future calls should finish
//              immediately.
//
// With 4 states we need 2 bits to encode this, and we use the remaining bits
// in the word we have allocated as a queue of threads waiting for the thread
// responsible for entering the RUNNING state. This queue is just a linked list
// of Waiter nodes which is monotonically increasing in size. Each node is
// allocated on the stack, and whenever the running closure finishes it will
// consume the entire queue and notify all waiters they should try again.
//
// You'll find a few more details in the implementation, but that's the gist of
// it! A small sketch of how the state word is decoded follows.
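//
// As an illustrative sketch (not code the implementation uses verbatim), the
// word is split using the `STATE_MASK` constant defined further down:
//
//     let state = state_and_queue & STATE_MASK;                     // low two bits: the state
//     let queue = (state_and_queue & !STATE_MASK) as *const Waiter; // head of the waiter list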
//
// Atomic orderings:
// When running `Once` we deal with multiple atomics:
// `Once.state_and_queue` and an unknown number of `Waiter.signaled`.
// * `state_and_queue` is used (1) as a state flag, (2) for synchronizing the
//   result of the `Once`, and (3) for synchronizing `Waiter` nodes.
//   - At the end of the `call_inner` function we have to make sure the result
//     of the `Once` is acquired. So every load which can be the only one to
//     load COMPLETED must have at least Acquire ordering, which means all
//     three of them.
//   - `WaiterQueue::Drop` is the only place that may store COMPLETED, and
//     must do so with Release ordering to make the result available.
//   - `wait` inserts `Waiter` nodes as a pointer in `state_and_queue`, and
//     needs to make the nodes available with Release ordering. The load in
//     its `compare_and_swap` can be Relaxed because it only has to compare
//     the atomic, not to read other data.
//   - `WaiterQueue::Drop` must see the `Waiter` nodes, so it must load
//     `state_and_queue` with Acquire ordering.
//   - There is just one store where `state_and_queue` is used only as a
//     state flag, without having to synchronize data: switching the state
//     from INCOMPLETE to RUNNING in `call_inner`. This store can be Relaxed,
//     but the read has to be Acquire because of the requirements mentioned
//     above.
// * `Waiter.signaled` is both used as a flag, and to protect a field with
//   interior mutability in `Waiter`. `Waiter.thread` is changed in
//   `WaiterQueue::Drop` which then sets `signaled` with Release ordering.
//   After `wait` loads `signaled` with Acquire and sees it is true, it needs to
//   see the changes to drop the `Waiter` struct correctly.
// * There is one place where the two atomics `Once.state_and_queue` and
//   `Waiter.signaled` come together, and might be reordered by the compiler or
//   processor. Because both use Acquire ordering such a reordering is not
//   allowed, so no need for SeqCst.

use crate::cell::Cell;
use crate::fmt;
use crate::marker;
use crate::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
use crate::thread::{self, Thread};

/// A synchronization primitive which can be used to run a one-time global
/// initialization. Useful for one-time initialization for FFI or related
/// functionality. This type can only be constructed with the [`Once::new`]
/// constructor.
///
/// [`Once::new`]: struct.Once.html#method.new
///
/// # Examples
///
/// ```
/// use std::sync::Once;
///
/// static START: Once = Once::new();
///
/// START.call_once(|| {
///     // run initialization here
/// });
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
pub struct Once {
    // `state_and_queue` is actually a pointer to a `Waiter` with extra state
    // bits, so we add the `PhantomData` appropriately.
    state_and_queue: AtomicUsize,
    _marker: marker::PhantomData<*const Waiter>,
}

// The `PhantomData` of a raw pointer removes these two auto traits, but we
// enforce both below in the implementation so this should be safe to add.
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Sync for Once {}
#[stable(feature = "rust1", since = "1.0.0")]
unsafe impl Send for Once {}

/// State yielded to [`call_once_force`]’s closure parameter. The state can be
/// used to query the poison status of the [`Once`].
///
/// [`call_once_force`]: struct.Once.html#method.call_once_force
/// [`Once`]: struct.Once.html
#[unstable(feature = "once_poison", issue = "33577")]
#[derive(Debug)]
pub struct OnceState {
    poisoned: bool,
}

/// Initialization value for static [`Once`] values.
///
/// [`Once`]: struct.Once.html
///
/// # Examples
///
/// ```
/// use std::sync::{Once, ONCE_INIT};
///
/// static START: Once = ONCE_INIT;
/// ```
#[stable(feature = "rust1", since = "1.0.0")]
#[rustc_deprecated(
    since = "1.38.0",
    reason = "the `new` function is now preferred",
    suggestion = "Once::new()"
)]
pub const ONCE_INIT: Once = Once::new();

// Four states that a Once can be in, encoded into the lower bits of
// `state_and_queue` in the Once structure.
const INCOMPLETE: usize = 0x0;
const POISONED: usize = 0x1;
const RUNNING: usize = 0x2;
const COMPLETE: usize = 0x3;

// Mask to learn about the state. All other bits are the queue of waiters if
// this is in the RUNNING state.
const STATE_MASK: usize = 0x3;

// Representation of a node in the linked list of waiters, used while in the
// RUNNING state.
// Note: `Waiter` can't hold a mutable pointer to the next thread, because then
// `wait` would both hand out a mutable reference to its `Waiter` node, and keep
// a shared reference to check `signaled`. Instead we hold shared references and
// use interior mutability.
#[repr(align(4))] // Ensure the two lower bits are free to use as state bits.
struct Waiter {
    thread: Cell<Option<Thread>>,
    signaled: AtomicBool,
    next: *const Waiter,
}

// Head of a linked list of waiters.
// Every node is a struct on the stack of a waiting thread.
// Will wake up the waiters when it gets dropped, i.e. also on panic.
struct WaiterQueue<'a> {
    state_and_queue: &'a AtomicUsize,
    set_state_on_drop_to: usize,
}

impl Once {
    /// Creates a new `Once` value.
    #[stable(feature = "once_new", since = "1.2.0")]
    #[cfg_attr(not(bootstrap), rustc_const_stable(feature = "const_once_new", since = "1.32.0"))]
    pub const fn new() -> Once {
        Once { state_and_queue: AtomicUsize::new(INCOMPLETE), _marker: marker::PhantomData }
    }

    /// Performs an initialization routine once and only once. The given closure
    /// will be executed if this is the first time `call_once` has been called,
    /// and otherwise the routine will *not* be invoked.
    ///
    /// This method will block the calling thread if another initialization
    /// routine is currently running.
    ///
    /// When this function returns, it is guaranteed that some initialization
    /// has run and completed (it may not be the closure specified). It is also
    /// guaranteed that any memory writes performed by the executed closure can
    /// be reliably observed by other threads at this point (there is a
    /// happens-before relation between the closure and code executing after the
    /// return).
    ///
    /// If the given closure recursively invokes `call_once` on the same `Once`
    /// instance, the exact behavior is not specified; allowed outcomes are
    /// a panic or a deadlock.
    ///
    /// # Examples
    ///
    /// ```
    /// use std::sync::Once;
    ///
    /// static mut VAL: usize = 0;
    /// static INIT: Once = Once::new();
    ///
    /// // Accessing a `static mut` is unsafe much of the time, but if we do so
    /// // in a synchronized fashion (e.g., write once or read all) then we're
    /// // good to go!
    /// //
    /// // This function will only call `expensive_computation` once, and will
    /// // otherwise always return the value returned from the first invocation.
    /// fn get_cached_val() -> usize {
    ///     unsafe {
    ///         INIT.call_once(|| {
    ///             VAL = expensive_computation();
    ///         });
    ///         VAL
    ///     }
    /// }
    ///
    /// fn expensive_computation() -> usize {
    ///     // ...
    ///     # 2
    /// }
    /// ```
    ///
    /// # Panics
    ///
    /// The closure `f` will only be executed once if this is called
    /// concurrently amongst many threads. If that closure panics, however, then
    /// it will *poison* this `Once` instance, causing all future invocations of
    /// `call_once` to also panic.
    ///
    /// This is similar to [poisoning with mutexes][poison].
    ///
    /// [poison]: struct.Mutex.html#poisoning
    #[stable(feature = "rust1", since = "1.0.0")]
    pub fn call_once<F>(&self, f: F)
    where
        F: FnOnce(),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(false, &mut |_| f.take().unwrap()());
    }

    /// Performs the same function as [`call_once`] except it ignores poisoning.
    ///
    /// Unlike [`call_once`], if this `Once` has been poisoned (i.e., a previous
    /// call to `call_once` or `call_once_force` caused a panic), calling
    /// `call_once_force` will still invoke the closure `f` and will _not_
    /// result in an immediate panic. If `f` panics, the `Once` will remain
    /// in a poison state. If `f` does _not_ panic, the `Once` will no
    /// longer be in a poison state and all future calls to `call_once` or
    /// `call_once_force` will be no-ops.
    ///
    /// The closure `f` is yielded a [`OnceState`] structure which can be used
    /// to query the poison status of the `Once`.
    ///
    /// [`call_once`]: struct.Once.html#method.call_once
    /// [`OnceState`]: struct.OnceState.html
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // poisoning propagates
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| {});
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// // call_once_force will still run and reset the poisoned state
    /// INIT.call_once_force(|state| {
    ///     assert!(state.poisoned());
    /// });
    ///
    /// // once any success happens, we stop propagating the poison
    /// INIT.call_once(|| {});
    /// ```
    #[unstable(feature = "once_poison", issue = "33577")]
    pub fn call_once_force<F>(&self, f: F)
    where
        F: FnOnce(&OnceState),
    {
        // Fast path check
        if self.is_completed() {
            return;
        }

        let mut f = Some(f);
        self.call_inner(true, &mut |p| f.take().unwrap()(&OnceState { poisoned: p }));
    }

    /// Returns `true` if some `call_once` call has completed
    /// successfully. Specifically, `is_completed` will return false in
    /// the following situations:
    /// * `call_once` was not called at all,
    /// * `call_once` was called, but has not yet completed,
    /// * the `Once` instance is poisoned
    ///
    /// It is also possible that immediately after `is_completed`
    /// returns false, some other thread finishes executing
    /// `call_once`.
    ///
    /// # Examples
    ///
    /// ```
    /// #![feature(once_is_completed)]
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// INIT.call_once(|| {
    ///     assert_eq!(INIT.is_completed(), false);
    /// });
    /// assert_eq!(INIT.is_completed(), true);
    /// ```
    ///
    /// ```
    /// #![feature(once_is_completed)]
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// assert_eq!(INIT.is_completed(), false);
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    /// assert_eq!(INIT.is_completed(), false);
    /// ```
    #[unstable(feature = "once_is_completed", issue = "54890")]
    #[inline]
    pub fn is_completed(&self) -> bool {
        // An `Acquire` load is enough because that makes all the initialization
        // operations visible to us, and, this being a fast path, weaker
        // ordering helps with performance. This `Acquire` synchronizes with
        // `Release` operations on the slow path.
        self.state_and_queue.load(Ordering::Acquire) == COMPLETE
    }

    // This is a non-generic function to reduce the monomorphization cost of
    // using `call_once` (this isn't exactly a trivial or small implementation).
    //
    // Additionally, this is tagged with `#[cold]` as it should indeed be cold
    // and it helps let LLVM know that calls to this function should be off the
    // fast path. Essentially, this should help generate more straight line code
    // in LLVM.
    //
    // Finally, this takes an `FnMut` instead of a `FnOnce` because there's
    // currently no way to take an `FnOnce` and call it via virtual dispatch
    // without some allocation overhead.
    #[cold]
    fn call_inner(&self, ignore_poisoning: bool, init: &mut dyn FnMut(bool)) {
        let mut state_and_queue = self.state_and_queue.load(Ordering::Acquire);
        loop {
            match state_and_queue {
                COMPLETE => break,
                POISONED if !ignore_poisoning => {
                    // Panic to propagate the poison.
                    panic!("Once instance has previously been poisoned");
                }
                POISONED | INCOMPLETE => {
                    // Try to register this thread as the one RUNNING.
                    let old = self.state_and_queue.compare_and_swap(
                        state_and_queue,
                        RUNNING,
                        Ordering::Acquire,
                    );
                    if old != state_and_queue {
                        state_and_queue = old;
                        continue;
                    }
                    // `waiter_queue` will manage other waiting threads, and
                    // wake them up on drop.
                    let mut waiter_queue = WaiterQueue {
                        state_and_queue: &self.state_and_queue,
                        set_state_on_drop_to: POISONED,
                    };
                    // Run the initialization function, letting it know if we're
                    // poisoned or not.
                    init(state_and_queue == POISONED);
                    waiter_queue.set_state_on_drop_to = COMPLETE;
                    break;
                }
                _ => {
                    // All other values must be RUNNING with possibly a
                    // pointer to the waiter queue in the more significant bits.
                    assert!(state_and_queue & STATE_MASK == RUNNING);
                    wait(&self.state_and_queue, state_and_queue);
                    state_and_queue = self.state_and_queue.load(Ordering::Acquire);
                }
            }
        }
    }
}

fn wait(state_and_queue: &AtomicUsize, mut current_state: usize) {
    // Note: the following code was carefully written to avoid creating a
    // mutable reference to `node` that gets aliased.
    loop {
        // Don't queue this thread if the status is no longer running,
        // otherwise we will not be woken up.
        if current_state & STATE_MASK != RUNNING {
            return;
        }

        // Create the node for our current thread.
        let node = Waiter {
            thread: Cell::new(Some(thread::current())),
            signaled: AtomicBool::new(false),
            next: (current_state & !STATE_MASK) as *const Waiter,
        };
        let me = &node as *const Waiter as usize;

        // Try to slide in the node at the head of the linked list, making sure
        // that another thread didn't just replace the head of the linked list.
        let old = state_and_queue.compare_and_swap(current_state, me | RUNNING, Ordering::Release);
        if old != current_state {
            current_state = old;
            continue;
        }

        // We have enqueued ourselves, now let's wait.
        // It is important not to return before being signaled, otherwise we
        // would drop our `Waiter` node and leave a hole in the linked list
        // (and a dangling reference). Guard against spurious wakeups by
        // reparking ourselves until we are signaled.
        while !node.signaled.load(Ordering::Acquire) {
            // If the managing thread happens to signal and unpark us before we
            // can park ourselves, the result could be this thread never gets
            // unparked. Luckily `park` comes with the guarantee that if it got
            // an `unpark` just before on an unparked thread it does not park.
            thread::park();
        }
        break;
    }
}

#[stable(feature = "std_debug", since = "1.16.0")]
impl fmt::Debug for Once {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.pad("Once { .. }")
    }
}

impl Drop for WaiterQueue<'_> {
    fn drop(&mut self) {
        // Swap out our state with however we finished.
        let state_and_queue =
            self.state_and_queue.swap(self.set_state_on_drop_to, Ordering::AcqRel);

        // We should only ever see an old state which was RUNNING.
        assert_eq!(state_and_queue & STATE_MASK, RUNNING);

        // Walk the entire linked list of waiters and wake them up (in LIFO
        // order, last to register is first to wake up).
        unsafe {
            // Right after setting `node.signaled = true` the other thread may
            // free `node` if there happens to be a spurious wakeup.
            // So we have to take out the `thread` field and copy the pointer to
            // `next` first.
            let mut queue = (state_and_queue & !STATE_MASK) as *const Waiter;
            while !queue.is_null() {
                let next = (*queue).next;
                let thread = (*queue).thread.replace(None).unwrap();
                (*queue).signaled.store(true, Ordering::Release);
                // ^- FIXME (maybe): This is another case of issue #55005
                //    `store()` has a potentially dangling ref to `signaled`.
                queue = next;
                thread.unpark();
            }
        }
    }
}

impl OnceState {
    /// Returns `true` if the associated [`Once`] was poisoned prior to the
    /// invocation of the closure passed to [`call_once_force`].
    ///
    /// [`call_once_force`]: struct.Once.html#method.call_once_force
    /// [`Once`]: struct.Once.html
    ///
    /// # Examples
    ///
    /// A poisoned `Once`:
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    /// use std::thread;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// // poison the once
    /// let handle = thread::spawn(|| {
    ///     INIT.call_once(|| panic!());
    /// });
    /// assert!(handle.join().is_err());
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(state.poisoned());
    /// });
    /// ```
    ///
    /// An unpoisoned `Once`:
    ///
    /// ```
    /// #![feature(once_poison)]
    ///
    /// use std::sync::Once;
    ///
    /// static INIT: Once = Once::new();
    ///
    /// INIT.call_once_force(|state| {
    ///     assert!(!state.poisoned());
    /// });
    /// ```
    #[unstable(feature = "once_poison", issue = "33577")]
    pub fn poisoned(&self) -> bool {
        self.poisoned
    }
}

#[cfg(all(test, not(target_os = "emscripten")))]
mod tests {
    use super::Once;
    use crate::panic;
    use crate::sync::mpsc::channel;
    use crate::thread;

    #[test]
    fn smoke_once() {
        static O: Once = Once::new();
        let mut a = 0;
        O.call_once(|| a += 1);
        assert_eq!(a, 1);
        O.call_once(|| a += 1);
        assert_eq!(a, 1);
    }

    #[test]
    fn stampede_once() {
        static O: Once = Once::new();
        static mut RUN: bool = false;

        let (tx, rx) = channel();
        for _ in 0..10 {
            let tx = tx.clone();
            thread::spawn(move || {
                for _ in 0..4 {
                    thread::yield_now()
                }
                unsafe {
                    O.call_once(|| {
                        assert!(!RUN);
                        RUN = true;
                    });
                    assert!(RUN);
                }
                tx.send(()).unwrap();
            });
        }

        unsafe {
            O.call_once(|| {
                assert!(!RUN);
                RUN = true;
            });
            assert!(RUN);
        }

        for _ in 0..10 {
            rx.recv().unwrap();
        }
    }

    #[test]
    fn poison_bad() {
        static O: Once = Once::new();

        // poison the once
        let t = panic::catch_unwind(|| {
            O.call_once(|| panic!());
        });
        assert!(t.is_err());

        // poisoning propagates
        let t = panic::catch_unwind(|| {
            O.call_once(|| {});
        });
        assert!(t.is_err());

        // we can subvert poisoning, however
        let mut called = false;
        O.call_once_force(|p| {
            called = true;
            assert!(p.poisoned())
        });
        assert!(called);

        // once any success happens, we stop propagating the poison
        O.call_once(|| {});
    }

    #[test]
    fn wait_for_force_to_finish() {
        static O: Once = Once::new();

        // poison the once
        let t = panic::catch_unwind(|| {
            O.call_once(|| panic!());
        });
        assert!(t.is_err());

        // make sure someone's waiting inside the once via a force
        let (tx1, rx1) = channel();
        let (tx2, rx2) = channel();
        let t1 = thread::spawn(move || {
            O.call_once_force(|p| {
                assert!(p.poisoned());
                tx1.send(()).unwrap();
                rx2.recv().unwrap();
            });
        });

        rx1.recv().unwrap();

        // put another waiter on the once
        let t2 = thread::spawn(|| {
            let mut called = false;
            O.call_once(|| {
                called = true;
            });
            assert!(!called);
        });

        tx2.send(()).unwrap();

        assert!(t1.join().is_ok());
        assert!(t2.join().is_ok());
    }
}