% Concurrency

Concurrency and parallelism are incredibly important topics in computer
science, and are also a hot topic in industry today. Computers are gaining more
and more cores, yet many programmers aren't prepared to fully utilize them.

Rust's memory safety features also apply to its concurrency story. Even
concurrent Rust programs must be memory safe, having no data races. Rust's type
system is up to the task, and gives you powerful ways to reason about
concurrent code at compile time.

Before we talk about the concurrency features that come with Rust, it's important
to understand something: Rust is low-level enough that the vast majority of
this is provided by the standard library, not by the language. This means that
if you don't like some aspect of the way Rust handles concurrency, you can
implement an alternative way of doing things.
[mio](https://github.com/carllerche/mio) is a real-world example of this
principle in action.

## Background: `Send` and `Sync`

Concurrency is difficult to reason about. In Rust, we have a strong, static
type system to help us reason about our code. As such, Rust gives us two traits
to help us make sense of code that can possibly be concurrent.

### `Send`

The first trait we're going to talk about is
[`Send`](../std/marker/trait.Send.html). When a type `T` implements `Send`, it
indicates that something of this type is able to have ownership transferred
safely between threads.

This is important for enforcing certain restrictions. For example, if we have a
channel connecting two threads, we would want to be able to send some data
down the channel and to the other thread. Therefore, we'd ensure that `Send` was
implemented for that type.

Conversely, if we were wrapping a library with [FFI][ffi] that isn't
threadsafe, we wouldn't want to implement `Send`, and so the compiler will help
us enforce that it can't leave the current thread.

[ffi]: ffi.html

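To make this concrete, here's a minimal sketch of how `Send` shows up as a
bound in an API. The `process_on_worker` function is made up for illustration;
the point is that it only accepts types whose ownership can safely be moved to
another thread:

```rust
use std::thread;

// Hypothetical helper: the `T: Send` bound means this function only
// accepts types whose ownership can be transferred to another thread.
fn process_on_worker<T: Send + 'static>(value: T) {
    thread::spawn(move || {
        // `value` now lives on the new thread.
        drop(value);
    });
}

fn main() {
    process_on_worker(vec![1, 2, 3]); // `Vec<i32>` is `Send`
}
```

A type that isn't `Send`, such as a handle from a non-threadsafe C library,
would be rejected by this bound at compile time.
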
### `Sync`

The second of these traits is called [`Sync`](../std/marker/trait.Sync.html).
When a type `T` implements `Sync`, it indicates that something
of this type has no possibility of introducing memory unsafety when used from
multiple threads concurrently through shared references. This implies that
types which don't have [interior mutability](mutability.html) are inherently
`Sync`, which includes simple primitive types (like `u8`) and aggregate types
containing them.

For sharing references across threads, Rust provides a wrapper type called
`Arc<T>`. `Arc<T>` implements `Send` and `Sync` if and only if `T` implements
both `Send` and `Sync`. For example, an object of type `Arc<RefCell<U>>` cannot
be transferred across threads because
[`RefCell`](choosing-your-guarantees.html#refcellt) does not implement
`Sync`; consequently, `Arc<RefCell<U>>` would not implement `Send`.

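One way to see `Sync` in action is a small helper function that only
type-checks for `Sync` types. This is a sketch, not part of the standard
library:

```rust
// Hypothetical compile-time check: this function can only be
// instantiated with types that implement `Sync`.
fn assert_sync<T: Sync>() {}

fn main() {
    assert_sync::<u8>();       // simple primitives are `Sync`
    assert_sync::<Vec<u8>>();  // and so are aggregates built from them

    // Uncommenting the next line would fail to compile, because
    // `RefCell<T>` has interior mutability and is not `Sync`:
    // assert_sync::<std::cell::RefCell<u8>>();
}
```
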
These two traits allow you to use the type system to make strong guarantees
about the properties of your code under concurrency. Before we demonstrate
why, we need to learn how to create a concurrent Rust program in the first
place!

## Threads

Rust's standard library provides a library for threads, which allows you to
run Rust code in parallel. Here's a basic example of using `std::thread`:

```rust
use std::thread;

fn main() {
    thread::spawn(|| {
        println!("Hello from a thread!");
    });
}
```

The `thread::spawn()` function accepts a [closure](closures.html), which is executed in a
new thread. It returns a handle to the thread, which can be used to
wait for the child thread to finish and extract its result:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        "Hello from a thread!"
    });

    println!("{}", handle.join().unwrap());
}
```

Many languages have the ability to execute threads, but doing so is wildly
unsafe. There are entire books about how to prevent errors that occur from
shared mutable state. Rust helps out with its type system here as well, by
preventing data races at compile time. Let's talk about how you actually share
things between threads.

## Safe Shared Mutable State

Due to Rust's type system, we have a concept that sounds like a lie: "safe
shared mutable state." Many programmers agree that shared mutable state is
very, very bad.

Someone once said this:

> Shared mutable state is the root of all evil. Most languages attempt to deal
> with this problem through the 'mutable' part, but Rust deals with it by
> solving the 'shared' part.

The same [ownership system](ownership.html) that helps prevent using pointers
incorrectly also helps rule out data races, one of the worst kinds of
concurrency bugs.

As an example, here is a Rust program that would have a data race in many
languages. It will not compile:

```ignore
use std::thread;

fn main() {
    let mut data = vec![1, 2, 3];

    for i in 0..3 {
        thread::spawn(move || {
            data[i] += 1;
        });
    }

    thread::sleep_ms(50);
}
```

This gives us an error:

```text
8:17 error: capture of moved value: `data`
        data[i] += 1;
        ^~~~
```

Rust knows this wouldn't be safe! If we had a reference to `data` in each
thread, and each thread took ownership of that reference, we'd have three
owners!

So, we need some type that lets us have more than one reference to a value and
that we can share between threads; that is, it must implement `Sync`.

We'll use `Arc<T>`, Rust's standard atomic reference count type, which
wraps a value up with some extra runtime bookkeeping that allows us to
share ownership of the value between multiple references at the same time.

The bookkeeping consists of a count of how many of these references exist to
the value, hence the "reference count" part of the name.

The "atomic" part means that `Arc<T>` can safely be accessed from multiple
threads: the internal count is updated using indivisible (atomic) operations,
which cannot cause data races.

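Sharing a value read-only through `Arc<T>` looks like this. This is a minimal
sketch with made-up variable names; both handles point at the same vector:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // One vector, with two handles to it: `numbers` stays on the main
    // thread, `child_numbers` is moved into the spawned thread.
    let numbers = Arc::new(vec![1, 2, 3]);
    let child_numbers = numbers.clone();

    let handle = thread::spawn(move || {
        println!("child sees: {:?}", child_numbers);
    });

    println!("main sees: {:?}", numbers);
    handle.join().unwrap();
}
```

With that in mind, here's the earlier example again, this time wrapping the
vector in an `Arc<T>`:
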
```ignore
use std::thread;
use std::sync::Arc;

fn main() {
    let mut data = Arc::new(vec![1, 2, 3]);

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            data[i] += 1;
        });
    }

    thread::sleep_ms(50);
}
```

We now call `clone()` on our `Arc<T>`, which increases the internal count.
This handle is then moved into the new thread.

And... still gives us an error.

```text
<anon>:11:24 error: cannot borrow immutable borrowed content as mutable
<anon>:11                     data[i] += 1;
                              ^~~~
```

`Arc<T>` assumes one more property about its contents to ensure that it is safe
to share across threads: it assumes its contents are `Sync`. This is true for
our value if it's immutable, but we want to be able to mutate it, so we need
something else to persuade the borrow checker we know what we're doing.

It looks like we need some type that allows us to safely mutate a shared value,
for example a type that can ensure that only one thread at a time is able to
mutate the value inside it.

For that, we can use the `Mutex<T>` type!

Here's the working version:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(vec![1, 2, 3]));

    for i in 0..3 {
        let data = data.clone();
        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            data[i] += 1;
        });
    }

    thread::sleep_ms(50);
}
```

Note that the value of `i` is bound (copied) to the closure and not shared
among the threads.

Also note that the [`lock`](../std/sync/struct.Mutex.html#method.lock) method of
[`Mutex`](../std/sync/struct.Mutex.html) has this signature:

```ignore
fn lock(&self) -> LockResult<MutexGuard<T>>
```

and because `Send` is not implemented for `MutexGuard<T>`, the guard cannot
cross thread boundaries, ensuring thread-locality of lock acquire and release.

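Relatedly, the lock is held only for as long as the `MutexGuard` is alive: it
is released when the guard goes out of scope. Here's a small single-threaded
sketch just to show the scoping:

```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(0);

    {
        let mut guard = data.lock().unwrap();
        *guard += 1;
        // The lock is released here, when `guard` goes out of scope.
    }

    // Locking again works because the previous guard has been dropped.
    println!("{}", *data.lock().unwrap());
}
```
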
Let's examine the body of the thread more closely:

```rust
# use std::sync::{Arc, Mutex};
# use std::thread;
# fn main() {
# let data = Arc::new(Mutex::new(vec![1, 2, 3]));
# for i in 0..3 {
# let data = data.clone();
thread::spawn(move || {
    let mut data = data.lock().unwrap();
    data[i] += 1;
});
# }
# thread::sleep_ms(50);
# }
```

First, we call `lock()`, which acquires the mutex's lock. Because this may fail,
it returns a `Result<T, E>`, and because this is just an example, we `unwrap()`
it to get a reference to the data. Real code would have more robust error handling
here. We're then free to mutate it, since we have the lock.

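For instance, a more defensive version might handle a poisoned lock, which is
what `lock()` returns when another thread panicked while holding it. This is a
sketch of one way to do that, not the only one:

```rust
use std::sync::Mutex;

fn main() {
    let data = Mutex::new(vec![1, 2, 3]);

    match data.lock() {
        Ok(mut guard) => guard[0] += 1,
        Err(poisoned) => {
            // A poisoned lock still lets us reach the data if we
            // decide the partial state is acceptable.
            let mut guard = poisoned.into_inner();
            guard[0] += 1;
        }
    }
}
```
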
Lastly, while the threads are running, we wait on a short timer. But
this is not ideal: we may have picked a reasonable amount of time to
wait but it's more likely we'll either be waiting longer than
necessary or not long enough, depending on just how much time the
threads actually take to finish computing when the program runs.

A more precise alternative to the timer would be to use one of the
mechanisms provided by the Rust standard library for synchronizing
threads with each other. Let's talk about one of them: channels.

## Channels

Here's a version of our code that uses channels for synchronization, rather
than waiting for a specific time:

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::sync::mpsc;

fn main() {
    let data = Arc::new(Mutex::new(0));

    let (tx, rx) = mpsc::channel();

    for _ in 0..10 {
        let (data, tx) = (data.clone(), tx.clone());

        thread::spawn(move || {
            let mut data = data.lock().unwrap();
            *data += 1;

            tx.send(());
        });
    }

    for _ in 0..10 {
        rx.recv();
    }
}
```

We use the `mpsc::channel()` function to construct a new channel. We just `send`
a simple `()` down the channel, and then wait for ten of them to come back.

While this channel is just sending a generic signal, we can send any data that
is `Send` over the channel!

```rust
use std::thread;
use std::sync::mpsc;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..10 {
        let tx = tx.clone();

        thread::spawn(move || {
            let answer = i * i;

            tx.send(answer);
        });
    }

    for _ in 0..10 {
        println!("{}", rx.recv().unwrap());
    }
}
```

Here we create 10 threads, asking each to calculate the square of a number (`i`
at the time of `spawn()`), and then `send()` back the answer over the channel.

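The same pattern works for richer payloads. Here's a sketch that sends a small
made-up struct instead of a number; any struct built from `Send` fields is
itself `Send`:

```rust
use std::sync::mpsc;
use std::thread;

// A hypothetical message type, used only for illustration.
struct Measurement {
    id: u32,
    value: f64,
}

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        tx.send(Measurement { id: 1, value: 2.5 }).unwrap();
    });

    let m = rx.recv().unwrap();
    println!("measurement {} = {}", m.id, m.value);
}
```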

## Panics

A `panic!` will crash the currently executing thread. You can use Rust's
threads as a simple isolation mechanism:

```rust
use std::thread;

let handle = thread::spawn(move || {
    panic!("oops!");
});

let result = handle.join();

assert!(result.is_err());
```

The handle's `join()` method gives us a `Result` back, which allows us to check
whether the thread panicked or not.
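
If you want more than a pass/fail check, the error side of that `Result`
carries the panic payload. Here's a sketch that prints the message; the
`downcast_ref::<&str>()` call assumes the panic was raised with a string
literal:

```rust
use std::thread;

fn main() {
    let handle = thread::spawn(move || {
        panic!("oops!");
    });

    if let Err(payload) = handle.join() {
        // A panic!("...") with a string literal stores a &str payload.
        if let Some(message) = payload.downcast_ref::<&str>() {
            println!("thread panicked with: {}", message);
        }
    }
}
```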