## Shared-State Concurrency
Message passing is a fine way of handling concurrency, but it’s not the only
one. Consider this part of the slogan from the Go language documentation again:
“do not communicate by sharing memory.”

What would communicating by sharing memory look like? In addition, why would
message-passing enthusiasts caution against using it?
In a way, channels in any programming language are similar to single ownership,
because once you transfer a value down a channel, you should no longer use that
value. Shared-memory concurrency is like multiple ownership: multiple threads
can access the same memory location at the same time. As you saw in Chapter 15,
where smart pointers made multiple ownership possible, multiple ownership can
add complexity because these different owners need managing. Rust’s type system
and ownership rules greatly assist in getting this management correct. As an
example, let’s look at mutexes, one of the more common concurrency primitives
for shared memory.
### Using Mutexes to Allow Access to Data from One Thread at a Time
*Mutex* is an abbreviation for *mutual exclusion*, as in, a mutex allows only
one thread to access some data at any given time. To access the data in a
mutex, a thread must first signal that it wants access by asking to acquire the
mutex’s *lock*. The lock is a data structure that is part of the mutex that
keeps track of who currently has exclusive access to the data. Therefore, the
mutex is described as *guarding* the data it holds via the locking system.
Mutexes have a reputation for being difficult to use because you have to
remember two rules:

* You must attempt to acquire the lock before using the data.
* When you’re done with the data that the mutex guards, you must unlock the
  data so other threads can acquire the lock.
For a real-world metaphor for a mutex, imagine a panel discussion at a
conference with only one microphone. Before a panelist can speak, they have to
ask or signal that they want to use the microphone. When they get the
microphone, they can talk for as long as they want to and then hand the
microphone to the next panelist who requests to speak. If a panelist forgets to
hand the microphone off when they’re finished with it, no one else is able to
speak. If management of the shared microphone goes wrong, the panel won’t work
as planned!
Management of mutexes can be incredibly tricky to get right, which is why so
many people are enthusiastic about channels. However, thanks to Rust’s type
system and ownership rules, you can’t get locking and unlocking wrong.
#### The API of `Mutex<T>`
As an example of how to use a mutex, let’s start by using a mutex in a
single-threaded context, as shown in Listing 16-12:
<span class="filename">Filename: src/main.rs</span>
```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);

    {
        let mut num = m.lock().unwrap();
        *num = 6;
    }

    println!("m = {:?}", m);
}
```
<span class="caption">Listing 16-12: Exploring the API of `Mutex<T>` in a
single-threaded context for simplicity</span>
As with many types, we create a `Mutex<T>` using the associated function `new`.
To access the data inside the mutex, we use the `lock` method to acquire the
lock. This call will block the current thread so it can’t do any work until
it’s our turn to have the lock.
The call to `lock` would fail if another thread holding the lock panicked. In
that case, no one would ever be able to get the lock, so we’ve chosen to
`unwrap` and have this thread panic if we’re in that situation.
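If blocking is not acceptable, the standard library also provides a `try_lock`
method, which returns immediately with an error instead of waiting when the
lock is already held. Here is a minimal sketch (the `try_increment` helper is
hypothetical, invented for illustration):

```rust
use std::sync::Mutex;

// Hypothetical helper: tries to grab the lock without blocking and
// increments the value only if the lock was free.
fn try_increment(m: &Mutex<i32>) -> bool {
    match m.try_lock() {
        Ok(mut num) => {
            *num += 1;
            true // we got the lock and updated the value
        }
        Err(_) => false, // the lock is held elsewhere; give up for now
    }
}

fn main() {
    let m = Mutex::new(5);

    // No one holds the lock, so this attempt succeeds.
    assert!(try_increment(&m));

    // While we hold the guard ourselves, a second attempt fails
    // instead of blocking.
    let guard = m.lock().unwrap();
    assert!(!try_increment(&m));
    drop(guard);
}
```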
After we’ve acquired the lock, we can treat the return value, named `num` in
this case, as a mutable reference to the data inside. The type system ensures
that we acquire a lock before using the value in `m`: `Mutex<i32>` is not an
`i32`, so we *must* acquire the lock to be able to use the `i32` value. We
can’t forget; the type system won’t let us access the inner `i32` otherwise.
As you might suspect, `Mutex<T>` is a smart pointer. More accurately, the call
to `lock` *returns* a smart pointer called `MutexGuard`, wrapped in a
`LockResult` that we handled with the call to `unwrap`. The `MutexGuard` smart
pointer implements `Deref` to point at our inner data; the smart pointer also
has a `Drop` implementation that releases the lock automatically when a
`MutexGuard` goes out of scope, which happens at the end of the inner scope in
Listing 16-12. As a result, we don’t risk forgetting to release the lock and
blocking the mutex from being used by other threads because the lock release
happens automatically.
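Because the release is tied to `Drop`, you can also release a lock before the
end of a scope by dropping the guard explicitly with the standard `drop`
function. A small sketch:

```rust
use std::sync::Mutex;

fn main() {
    let m = Mutex::new(5);

    let mut num = m.lock().unwrap();
    *num = 6;
    // Explicitly dropping the guard releases the lock right here,
    // without needing an inner scope.
    drop(num);

    // The lock is free again, so a second `lock` call succeeds.
    assert_eq!(*m.lock().unwrap(), 6);
}
```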
After dropping the lock, we can print the mutex value and see that we were able
to change the inner `i32` to 6.
#### Sharing a `Mutex<T>` Between Multiple Threads
Now, let’s try to share a value between multiple threads using `Mutex<T>`.
We’ll spin up 10 threads and have them each increment a counter value by 1, so
the counter goes from 0 to 10. The next example in Listing 16-13 will have
a compiler error, and we’ll use that error to learn more about using
`Mutex<T>` and how Rust helps us use it correctly.
<span class="filename">Filename: src/main.rs</span>
```rust,ignore,does_not_compile
use std::sync::Mutex;
use std::thread;

fn main() {
    let counter = Mutex::new(0);
    let mut handles = vec![];

    for _ in 0..10 {
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();

            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```
<span class="caption">Listing 16-13: Ten threads each increment a counter
guarded by a `Mutex<T>`</span>
We create a `counter` variable to hold an `i32` inside a `Mutex<T>`, as we
did in Listing 16-12. Next, we create 10 threads by iterating over a range
of numbers. We use `thread::spawn` and give all the threads the same closure,
one that moves the counter into the thread, acquires a lock on the `Mutex<T>`
by calling the `lock` method, and then adds 1 to the value in the mutex. When a
thread finishes running its closure, `num` will go out of scope and release the
lock so another thread can acquire it.
In the main thread, we collect all the join handles. Then, as we did in Listing
16-2, we call `join` on each handle to make sure all the threads finish. At
that point, the main thread will acquire the lock and print the result of this
program.
We hinted that this example wouldn’t compile. Now let’s find out why!
```text
error[E0382]: use of moved value: `counter`
   |
9  |         let handle = thread::spawn(move || {
   |                                    ^^^^^^^ value moved into closure here,
in previous iteration of loop
10 |             let mut num = counter.lock().unwrap();
   |                           ------- use occurs due to use in closure
   |
   = note: move occurs because `counter` has type `std::sync::Mutex<i32>`,
which does not implement the `Copy` trait
```
The error message states that the `counter` value was moved in the previous
iteration of the loop. Rust is telling us that we can’t move ownership of
`counter` into multiple threads. Let’s fix the compiler error with a
multiple-ownership method we discussed in Chapter 15.
#### Multiple Ownership with Multiple Threads
In Chapter 15, we gave a value multiple owners by using the smart pointer
`Rc<T>` to create a reference-counted value. Let’s do the same here and see
what happens. We’ll wrap the `Mutex<T>` in `Rc<T>` in Listing 16-14 and clone
the `Rc<T>` before moving ownership to the thread, keeping the `for` loop and
the `move` keyword with the closure.
<span class="filename">Filename: src/main.rs</span>
```rust,ignore,does_not_compile
use std::rc::Rc;
use std::sync::Mutex;
use std::thread;

fn main() {
    let counter = Rc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Rc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();

            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```
<span class="caption">Listing 16-14: Attempting to use `Rc<T>` to allow
multiple threads to own the `Mutex<T>`</span>
Once again, we compile and get... different errors! The compiler is teaching us
a lot!
```text
error[E0277]: `std::rc::Rc<std::sync::Mutex<i32>>` cannot be sent between threads safely
  --> src/main.rs:11:22
   |
11 |     let handle = thread::spawn(move || {
   |                  ^^^^^^^^^^^^^ `std::rc::Rc<std::sync::Mutex<i32>>`
cannot be sent between threads safely
   |
   = help: within `[closure@src/main.rs:11:36: 14:10
counter:std::rc::Rc<std::sync::Mutex<i32>>]`, the trait `std::marker::Send`
is not implemented for `std::rc::Rc<std::sync::Mutex<i32>>`
   = note: required because it appears within the type
`[closure@src/main.rs:11:36: 14:10 counter:std::rc::Rc<std::sync::Mutex<i32>>]`
   = note: required by `std::thread::spawn`
```
Wow, that error message is very wordy! Here’s the important part to focus
on: `` `Rc<Mutex<i32>>` cannot be sent between threads safely ``. The compiler
is also telling us the reason why: ``the trait `Send` is not implemented for
`Rc<Mutex<i32>>` ``. We’ll talk about `Send` in the next section: it’s one of
the traits that ensures the types we use with threads are meant for use in
concurrent situations.
Unfortunately, `Rc<T>` is not safe to share across threads. When `Rc<T>`
manages the reference count, it adds to the count for each call to `clone` and
subtracts from the count when each clone is dropped. But it doesn’t use any
concurrency primitives to make sure that changes to the count can’t be
interrupted by another thread. This could lead to wrong counts—subtle bugs that
could in turn lead to memory leaks or a value being dropped before we’re done
with it. What we need is a type exactly like `Rc<T>` but one that makes changes
to the reference count in a thread-safe way.
#### Atomic Reference Counting with `Arc<T>`
Fortunately, `Arc<T>` *is* a type like `Rc<T>` that is safe to use in
concurrent situations. The *a* stands for *atomic*, meaning it’s an *atomically
reference counted* type. Atomics are an additional kind of concurrency
primitive that we won’t cover in detail here: see the standard library
documentation for `std::sync::atomic` for more details. At this point, you just
need to know that atomics work like primitive types but are safe to share
across threads.
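To give a brief flavor of these types (we won’t use them further here), the
sketch below has 10 threads increment a shared `AtomicUsize` with its
`fetch_add` method, with no mutex involved:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::thread;

fn main() {
    // An atomic counter can be shared and updated without a lock.
    let counter = Arc::new(AtomicUsize::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            // `fetch_add` performs the read-modify-write as one
            // indivisible operation.
            counter.fetch_add(1, Ordering::SeqCst);
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    assert_eq!(counter.load(Ordering::SeqCst), 10);
}
```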
You might then wonder why all primitive types aren’t atomic and why standard
library types aren’t implemented to use `Arc<T>` by default. The reason is that
thread safety comes with a performance penalty that you only want to pay when
you really need to. If you’re just performing operations on values within a
single thread, your code can run faster if it doesn’t have to enforce the
guarantees atomics provide.
Let’s return to our example: `Arc<T>` and `Rc<T>` have the same API, so we fix
our program by changing the `use` line, the call to `new`, and the call to
`clone`. The code in Listing 16-15 will finally compile and run:
<span class="filename">Filename: src/main.rs</span>
```rust
use std::sync::{Mutex, Arc};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = vec![];

    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter.lock().unwrap();

            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Result: {}", *counter.lock().unwrap());
}
```
<span class="caption">Listing 16-15: Using an `Arc<T>` to wrap the `Mutex<T>`
to be able to share ownership across multiple threads</span>
This code will print the following:

```text
Result: 10
```
We did it! We counted from 0 to 10, which may not seem very impressive, but it
did teach us a lot about `Mutex<T>` and thread safety. You could also use this
program’s structure to do more complicated operations than just incrementing a
counter. Using this strategy, you can divide a calculation into independent
parts, split those parts across threads, and then use a `Mutex<T>` to have each
thread update the final result with its part.
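As a sketch of that strategy (the `parallel_sum` function and its fixed
chunking are hypothetical, invented for illustration), here is a version that
splits a summation across four threads, each adding its partial result to a
shared total:

```rust
use std::sync::{Mutex, Arc};
use std::thread;

// Hypothetical example: sum the numbers 1 through 100 by giving each of
// four threads a 25-number chunk of the range.
fn parallel_sum() -> i64 {
    let total = Arc::new(Mutex::new(0i64));
    let mut handles = vec![];

    for chunk_start in [1i64, 26, 51, 76] {
        let total = Arc::clone(&total);
        handles.push(thread::spawn(move || {
            // Compute the partial sum without holding the lock...
            let partial: i64 = (chunk_start..chunk_start + 25).sum();
            // ...and hold the lock only long enough to add the result.
            *total.lock().unwrap() += partial;
        }));
    }

    for handle in handles {
        handle.join().unwrap();
    }

    let result = *total.lock().unwrap();
    result
}

fn main() {
    // 1 + 2 + ... + 100
    assert_eq!(parallel_sum(), 5050);
}
```

Keeping the expensive computation outside the locked region is the design
choice worth noting here: the mutex guards only the brief final update, so the
threads spend almost no time waiting on one another.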
### Similarities Between `RefCell<T>`/`Rc<T>` and `Mutex<T>`/`Arc<T>`
You might have noticed that `counter` is immutable but we could get a mutable
reference to the value inside it; this means `Mutex<T>` provides interior
mutability, as the `Cell` family does. In the same way we used `RefCell<T>` in
Chapter 15 to allow us to mutate contents inside an `Rc<T>`, we use `Mutex<T>`
to mutate contents inside an `Arc<T>`.
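A minimal sketch can make the parallel concrete: the single-threaded pairing
below does with `Rc<RefCell<T>>` what Listing 16-15 did with `Arc<Mutex<T>>`:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared ownership plus interior mutability, single-threaded:
    // `Rc` plays the role of `Arc`, `RefCell` the role of `Mutex`.
    let value = Rc::new(RefCell::new(0));
    let clone = Rc::clone(&value);

    // `borrow_mut` is the analogue of `lock`: it hands back a guard
    // through which we can mutate the inner value.
    *clone.borrow_mut() += 1;

    assert_eq!(*value.borrow(), 1);
}
```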
Another detail to note is that Rust can’t protect you from all kinds of logic
errors when you use `Mutex<T>`. Recall in Chapter 15 that using `Rc<T>` came
with the risk of creating reference cycles, where two `Rc<T>` values refer to
each other, causing memory leaks. Similarly, `Mutex<T>` comes with the risk of
creating *deadlocks*. These occur when an operation needs to lock two resources
and two threads have each acquired one of the locks, causing them to wait for
each other forever. If you’re interested in deadlocks, try creating a Rust
program that has a deadlock; then research deadlock mitigation strategies for
mutexes in any language and have a go at implementing them in Rust. The
standard library API documentation for `Mutex<T>` and `MutexGuard` offers
useful information.
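One common mitigation strategy you might arrive at is to always acquire
multiple locks in a fixed global order. The sketch below is hypothetical (the
`Account` type and `transfer` function are invented for illustration): because
every transfer locks the lower-numbered account first, two opposite transfers
can never each hold one lock while waiting for the other.

```rust
use std::sync::Mutex;

// Hypothetical account guarded by its own mutex.
struct Account {
    id: u32,
    balance: Mutex<i64>,
}

fn transfer(from: &Account, to: &Account, amount: i64) {
    // Always lock the account with the smaller id first, regardless of
    // transfer direction, so all threads agree on the lock order.
    let (first, second) = if from.id < to.id { (from, to) } else { (to, from) };
    let mut first_balance = first.balance.lock().unwrap();
    let mut second_balance = second.balance.lock().unwrap();

    // Work out which guard belongs to `from` and which to `to`.
    if first.id == from.id {
        *first_balance -= amount;
        *second_balance += amount;
    } else {
        *second_balance -= amount;
        *first_balance += amount;
    }
}

fn main() {
    let a = Account { id: 1, balance: Mutex::new(100) };
    let b = Account { id: 2, balance: Mutex::new(0) };

    // Transfers in both directions take the locks in the same order.
    transfer(&a, &b, 40);
    transfer(&b, &a, 10);

    assert_eq!(*a.balance.lock().unwrap(), 70);
    assert_eq!(*b.balance.lock().unwrap(), 30);
}
```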
We’ll round out this chapter by talking about the `Send` and `Sync` traits and
how we can use them with custom types.