What is RCU? -- "Read, Copy, Update"

Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.      What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.      What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.      RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.      The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
        2010 Big API Table           http://lwn.net/Articles/419086/
5.      The RCU API, 2014 Edition    http://lwn.net/Articles/609904/
        2014 Big API Table           http://lwn.net/Articles/609973/


What is RCU?

RCU is a synchronization mechanism, added to the Linux kernel during
the 2.5 development effort, that is optimized for read-mostly
situations. Although RCU is actually quite simple once you understand
it, getting there can sometimes be a challenge. Part of the problem is
that most of the past descriptions of RCU have been written with the
mistaken assumption that there is "one true way" to describe RCU.
Instead, the experience has been that different people must take
different paths to arrive at an understanding of RCU. This document
provides several different paths, as follows:

1.      RCU OVERVIEW
2.      WHAT IS RCU'S CORE API?
3.      WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.      WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.      WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.      ANALOGY WITH READER-WRITER LOCKING
7.      FULL LIST OF RCU APIs
8.      ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point. People who prefer to start with an API that they can then
experiment with should focus on Section 2. People who prefer to start
with example uses should focus on Sections 3 and 4. People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code. People who reason best by analogy should
focus on Section 6. Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning. If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway. ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases. The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference. The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase. Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish. Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.      Remove pointers to a data structure, so that subsequent
        readers cannot gain a reference to it.

b.      Wait for all previous readers to complete their RCU read-side
        critical sections.

c.      At this point, there cannot be any readers who hold references
        to the data structure, so it now may safely be reclaimed
        (e.g., kfree()d).
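
For concreteness, here is a minimal sketch of these three steps,
assuming an RCU-protected linked list whose element p has already been
located under the update-side lock (the names are illustrative only):

        list_del_rcu(&p->list); /* (a) Unlink: new readers cannot find p. */
        synchronize_rcu();      /* (b) Wait for pre-existing readers. */
        kfree(p);               /* (c) Reclaim: no reader can still hold p. */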

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all. In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers. In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers. Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache). Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.      rcu_read_lock()
b.      rcu_read_unlock()
c.      synchronize_rcu() / call_rcu()
d.      rcu_assign_pointer()
e.      rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.
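
As an illustration of that last point, here is a simplified sketch,
patterned after the approach used in kernel/rcu/update.c, of how
synchronize_rcu() can be expressed in terms of call_rcu() and a
completion (debug checks omitted):

        struct rcu_synchronize {
                struct rcu_head head;
                struct completion completion;
        };

        /* Grace-period callback: wake the task blocked in synchronize_rcu(). */
        static void wakeme_after_rcu(struct rcu_head *head)
        {
                struct rcu_synchronize *rcu;

                rcu = container_of(head, struct rcu_synchronize, head);
                complete(&rcu->completion);
        }

        void synchronize_rcu(void)
        {
                struct rcu_synchronize rcu;

                init_completion(&rcu.completion);
                call_rcu(&rcu.head, wakeme_after_rcu);  /* Register callback. */
                wait_for_completion(&rcu.completion);   /* Block for grace period. */
        }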

The five core RCU APIs are described below; the other 18 will be
enumerated later. See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()

        void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section. It is illegal
        to block while in an RCU read-side critical section, though
        kernels built with CONFIG_PREEMPT_RCU can preempt RCU
        read-side critical sections. Any RCU-protected data structure
        accessed during an RCU read-side critical section is guaranteed to
        remain unreclaimed for the full duration of that critical section.
        Reference counts may be used in conjunction with RCU to maintain
        longer-term references to data structures.

rcu_read_unlock()

        void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section. Note that RCU
        read-side critical sections may be nested and/or overlapping.
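
        For example, a reader might access an RCU-protected global
        pointer as follows, where gp and do_something_with() are
        hypothetical names used purely for illustration:

                rcu_read_lock();                /* Enter critical section. */
                p = rcu_dereference(gp);        /* Fetch RCU-protected pointer. */
                if (p)
                        do_something_with(p->data);
                rcu_read_unlock();              /* Exit; p may now be stale. */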

synchronize_rcu()

        void synchronize_rcu(void);

        Marks the end of updater code and the beginning of reclaimer
        code. It does this by blocking until all pre-existing RCU
        read-side critical sections on all CPUs have completed.
        Note that synchronize_rcu() will -not- necessarily wait for
        any subsequent RCU read-side critical sections to complete.
        For example, consider the following sequence of events:

                 CPU 0                  CPU 1                 CPU 2
             -----------------       -------------------     ---------------
         1.  rcu_read_lock()
         2.                          enters synchronize_rcu()
         3.                                                   rcu_read_lock()
         4.  rcu_read_unlock()
         5.                          exits synchronize_rcu()
         6.                                                   rcu_read_unlock()

        To reiterate, synchronize_rcu() waits only for ongoing RCU
        read-side critical sections to complete, not necessarily for
        any that begin after synchronize_rcu() is invoked.

        Of course, synchronize_rcu() does not necessarily return
        -immediately- after the last pre-existing RCU read-side critical
        section completes. For one thing, there might well be scheduling
        delays. For another thing, many RCU implementations process
        requests in batches in order to improve efficiency, which can
        further delay synchronize_rcu().

        Since synchronize_rcu() is the API that must figure out when
        readers are done, its implementation is key to RCU. For RCU
        to be useful in all but the most read-intensive situations,
        synchronize_rcu()'s overhead must also be quite small.

        The call_rcu() API is a callback form of synchronize_rcu(),
        and is described in more detail in a later section. Instead of
        blocking, it registers a function and argument which are invoked
        after all ongoing RCU read-side critical sections have completed.
        This callback variant is particularly useful in situations where
        it is illegal to block or where update-side performance is
        critically important.

        However, the call_rcu() API should not be used lightly, as use
        of the synchronize_rcu() API generally results in simpler code.
        In addition, the synchronize_rcu() API has the nice property
        of automatically limiting update rate should grace periods
        be delayed. This property results in system resilience in the
        face of denial-of-service attacks. Code using call_rcu() should
        limit update rate in order to gain this same sort of resilience.
        See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

        void rcu_assign_pointer(p, typeof(p) v);

        Yes, rcu_assign_pointer() -is- implemented as a macro, though it
        would be cool to be able to declare a function in this manner.
        (Compiler experts will no doubt disagree.)

        The updater uses this function to assign a new value to an
        RCU-protected pointer, in order to safely communicate the change
        in value from the updater to the reader. This macro does not
        evaluate to an rvalue, but it does execute any memory-barrier
        instructions required for a given CPU architecture.

        Perhaps just as important, it serves to document (1) which
        pointers are protected by RCU and (2) the point at which a
        given structure becomes accessible to other CPUs. That said,
        rcu_assign_pointer() is most frequently used indirectly, via
        the _rcu list-manipulation primitives such as list_add_rcu().
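
        For example, the classic initialize-then-publish pattern might
        look as follows, with gp and struct foo being illustrative
        assumptions:

                p = kmalloc(sizeof(*p), GFP_KERNEL);
                if (!p)
                        return -ENOMEM;
                p->a = 1;                       /* Fully initialize... */
                p->b = 2;
                rcu_assign_pointer(gp, p);      /* ...and only then publish. */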

rcu_dereference()

        typeof(p) rcu_dereference(p);

        Like rcu_assign_pointer(), rcu_dereference() must be implemented
        as a macro.

        The reader uses rcu_dereference() to fetch an RCU-protected
        pointer, which returns a value that may then be safely
        dereferenced. Note that rcu_dereference() does not actually
        dereference the pointer; instead, it protects the pointer for
        later dereferencing. It also executes any needed memory-barrier
        instructions for a given CPU architecture. Currently, only Alpha
        needs memory barriers within rcu_dereference() -- on other CPUs,
        it compiles to nothing, not even a compiler directive.

        Common coding practice uses rcu_dereference() to copy an
        RCU-protected pointer to a local variable, then dereferences
        this local variable, for example as follows:

                p = rcu_dereference(head.next);
                return p->data;

        However, in this case, one could just as easily combine these
        into one statement:

                return rcu_dereference(head.next)->data;

        If you are going to be fetching multiple fields from the
        RCU-protected structure, using the local variable is of
        course preferred. Repeated rcu_dereference() calls look
        ugly, do not guarantee that the same pointer will be returned
        if an update happened while in the critical section, and incur
        unnecessary overhead on Alpha CPUs.

        Note that the value returned by rcu_dereference() is valid
        only within the enclosing RCU read-side critical section [1].
        For example, the following is -not- legal:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                rcu_read_unlock();
                x = p->address; /* BUG!!! */
                rcu_read_lock();
                y = p->data;    /* BUG!!! */
                rcu_read_unlock();

        Holding a reference from one RCU read-side critical section
        to another is just as illegal as holding a reference from
        one lock-based critical section to another! Similarly,
        using a reference outside of the critical section in which
        it was acquired is just as illegal as doing so with normal
        locking.
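
        A legal version keeps every use of the fetched pointer within a
        single enclosing critical section:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                x = p->address; /* OK: still within the critical section. */
                y = p->data;    /* OK. */
                rcu_read_unlock();      /* p must not be used after this. */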

        As with rcu_assign_pointer(), an important function of
        rcu_dereference() is to document which pointers are protected by
        RCU, in particular, flagging a pointer that is subject to changing
        at any time, including immediately after the rcu_dereference().
        And, again like rcu_assign_pointer(), rcu_dereference() is
        typically used indirectly, via the _rcu list-manipulation
        primitives, such as list_for_each_entry_rcu().

        [1] The variant rcu_dereference_protected() can be used outside
        of an RCU read-side critical section as long as the usage is
        protected by locks acquired by the update-side code. This variant
        avoids the lockdep warning that would happen when using (for
        example) rcu_dereference() without rcu_read_lock() protection.
        Using rcu_dereference_protected() also has the advantage
        of permitting compiler optimizations that rcu_dereference()
        must prohibit. The rcu_dereference_protected() variant takes
        a lockdep expression to indicate which locks must be acquired
        by the caller. If the indicated protection is not provided,
        a lockdep splat is emitted. See RCU/Design/Requirements/Requirements.html
        and the API's code comments for more details and example usage.
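
        For example, update-side code holding a hypothetical foo_lock
        might fetch the pointer as follows:

                p = rcu_dereference_protected(gp, lockdep_is_held(&foo_lock));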

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


    rcu_assign_pointer()
                            +--------+
    +---------------------->| reader |---------+
    |                       +--------+         |
    |                           |              |
    |                           |              | Protect:
    |                           |              | rcu_read_lock()
    |                           |              | rcu_read_unlock()
    |        rcu_dereference()  |              |
    +---------+                 |              |
    | updater |<----------------+              |
    +---------+                                V
    |                                    +-----------+
    +----------------------------------->| reclaimer |
                                         +-----------+
                                           Defer:
                                           synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked. Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are at least three flavors of RCU usage in the Linux kernel. The
diagram above shows the most common one. On the updater side, the
rcu_assign_pointer(), synchronize_rcu() and call_rcu() primitives used
are the same for all three flavors. However, for protection (on the
reader side), the primitives used vary depending on the flavor:

a.      rcu_read_lock() / rcu_read_unlock()
        rcu_dereference()

b.      rcu_read_lock_bh() / rcu_read_unlock_bh()
        local_bh_disable() / local_bh_enable()
        rcu_dereference_bh()

c.      rcu_read_lock_sched() / rcu_read_unlock_sched()
        preempt_disable() / preempt_enable()
        local_irq_save() / local_irq_restore()
        hardirq enter / hardirq exit
        NMI enter / NMI exit
        rcu_dereference_sched()

These three flavors are used as follows:

a.      RCU applied to normal data structures.

b.      RCU applied to networking data structures that may be subjected
        to remote denial-of-service attacks.

c.      RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a). The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.
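
As a sketch of flavor (b), a networking-style reader might look as
follows, where gp and process() are hypothetical names and process()
must not block:

        rcu_read_lock_bh();             /* Also disables softirq. */
        p = rcu_dereference_bh(gp);     /* _bh flavor of rcu_dereference(). */
        if (p)
                process(p);
        rcu_read_unlock_bh();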


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure. More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

        struct foo {
                int a;
                char b;
                long c;
        };
        DEFINE_SPINLOCK(foo_mutex);

        struct foo __rcu *gbl_foo;

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a". Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses synchronize_rcu() to ensure that any readers that might
         * have references to the old structure complete before freeing
         * the old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                synchronize_rcu();
                kfree(old_fp);
        }

        /*
         * Return the value of field "a" of the current gbl_foo
         * structure. Use rcu_read_lock() and rcu_read_unlock()
         * to ensure that the structure does not get deleted out
         * from under us, and use rcu_dereference() to ensure that
         * we see the initialized version of the structure (important
         * for DEC Alpha and for people reading the code).
         */
        int foo_get_a(void)
        {
                int retval;

                rcu_read_lock();
                retval = rcu_dereference(gbl_foo)->a;
                rcu_read_unlock();
                return retval;
        }

So, to sum up:

o       Use rcu_read_lock() and rcu_read_unlock() to guard RCU
        read-side critical sections.

o       Within an RCU read-side critical section, use rcu_dereference()
        to dereference RCU-protected pointers.

o       Use some solid scheme (such as locks or semaphores) to
        keep concurrent updates from interfering with each other.

o       Use rcu_assign_pointer() to update an RCU-protected pointer.
        This primitive protects concurrent readers from the updater,
        -not- concurrent updates from each other! You therefore still
        need to use locking (or something similar) to keep concurrent
        rcu_assign_pointer() primitives from interfering with each other.

o       Use synchronize_rcu() -after- removing a data element from an
        RCU-protected data structure, but -before- reclaiming/freeing
        the data element, in order to wait for the completion of all
        RCU read-side critical sections that might be referencing that
        data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.


4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

        void call_rcu(struct rcu_head *head,
                      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block. The foo struct needs to
have an rcu_head structure added, perhaps as follows:

        struct foo {
                int a;
                char b;
                long c;
                struct rcu_head rcu;
        };

The foo_update_a() function might then be written as follows:

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a". Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses call_rcu() to ensure that any readers that might have
         * references to the old structure complete before freeing the
         * old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                call_rcu(&old_fp->rcu, foo_reclaim);
        }

The foo_reclaim() function might appear as follows:

        void foo_reclaim(struct rcu_head *rp)
        {
                struct foo *fp = container_of(rp, struct foo, rcu);

                foo_cleanup(fp->a);

                kfree(fp);
        }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
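
Conceptually, a simplified version of container_of() might be written
as follows (the kernel's real macro adds type checking):

        #define container_of(ptr, type, member) \
                ((type *)((char *)(ptr) - offsetof(type, member)))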

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element. It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o       Use call_rcu() -after- removing a data element from an
        RCU-protected data structure in order to register a callback
        function that will be invoked after the completion of all RCU
        read-side critical sections that might be referencing that
        data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

        kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel. This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU. Both are way too simple for real-world use,
lacking both functionality and performance. However, they are useful
in getting a feel for how RCU works. See kernel/rcu/update.c for a
production-quality implementation, and see:

        http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation. The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives. Its overhead makes it a non-starter for
real-life use, as does its lack of scalability. It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another. It also assumes recursive
reader-writer locks: If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

        static DEFINE_RWLOCK(rcu_gp_mutex);

        void rcu_read_lock(void)
        {
                read_lock(&rcu_gp_mutex);
        }

        void rcu_read_unlock(void)
        {
                read_unlock(&rcu_gp_mutex);
        }

        void synchronize_rcu(void)
        {
                write_lock(&rcu_gp_mutex);
                smp_mb__after_spinlock();
                write_unlock(&rcu_gp_mutex);
        }

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much. But here are simplified versions anyway. And whatever you do,
don't forget about them when submitting patches making use of RCU!]

        #define rcu_assign_pointer(p, v) \
        ({ \
                smp_store_release(&(p), (v)); \
        })

        #define rcu_dereference(p) \
        ({ \
                typeof(p) _________p1 = READ_ONCE(p); \
                (_________p1); \
        })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock. The synchronize_rcu()
primitive write-acquires this same lock, then releases it. This means
that once synchronize_rcu() exits, all RCU read-side critical sections
that were in progress before synchronize_rcu() was called are guaranteed
to have completed -- there is no way that synchronize_rcu() would have
been able to write-acquire the lock otherwise. The smp_mb__after_spinlock()
promotes synchronize_rcu() to a full memory barrier in compliance with
the "Memory-Barrier Guarantees" listed in:

        Documentation/RCU/Design/Requirements/Requirements.html.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired. Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU). The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? How could this deadlock be avoided?
646 | ||
647 | 5B. "TOY" EXAMPLE #2: CLASSIC RCU | |
648 | ||
649 | This section presents a "toy" RCU implementation that is based on | |
650 | "classic RCU". It is also short on performance (but only for updates) and | |
651 | on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT | |
652 | kernels. The definitions of rcu_dereference() and rcu_assign_pointer() | |
653 | are the same as those shown in the preceding section, so they are omitted. | |
654 | ||
655 | void rcu_read_lock(void) { } | |
656 | ||
657 | void rcu_read_unlock(void) { } | |
658 | ||
659 | void synchronize_rcu(void) | |
660 | { | |
661 | int cpu; | |
662 | ||
3c30a752 | 663 | for_each_possible_cpu(cpu) |
dd81eca8 PM |
664 | run_on(cpu); |
665 | } | |
666 | ||
667 | Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing. | |
668 | This is the great strength of classic RCU in a non-preemptive kernel: | |
669 | read-side overhead is precisely zero, at least on non-Alpha CPUs. | |
670 | And there is absolutely no way that rcu_read_lock() can possibly | |
671 | participate in a deadlock cycle! | |
672 | ||
673 | The implementation of synchronize_rcu() simply schedules itself on each | |
674 | CPU in turn. The run_on() primitive can be implemented straightforwardly | |
675 | in terms of the sched_setaffinity() primitive. Of course, a somewhat less | |
676 | "toy" implementation would restore the affinity upon completion rather | |
677 | than just leaving all tasks running on the last CPU, but when I said | |
678 | "toy", I meant -toy-! | |

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section. Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu(). Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking. The following unified
diff shows how closely related RCU and reader-writer locking can be.

 @@ -5,5 +5,5 @@ struct el {
        int data;
        /* Other data fields */
 };
-rwlock_t listmutex;
+spinlock_t listmutex;
 struct el head;

 @@ -13,15 +14,15 @@
        struct list_head *lp;
        struct el *p;

-       read_lock(&listmutex);
-       list_for_each_entry(p, head, lp) {
+       rcu_read_lock();
+       list_for_each_entry_rcu(p, head, lp) {
                if (p->key == key) {
                        *result = p->data;
-                       read_unlock(&listmutex);
+                       rcu_read_unlock();
                        return 1;
                }
        }
-       read_unlock(&listmutex);
+       rcu_read_unlock();
        return 0;
 }

 @@ -29,15 +30,16 @@
 {
        struct el *p;

-       write_lock(&listmutex);
+       spin_lock(&listmutex);
        list_for_each_entry(p, head, lp) {
                if (p->key == key) {
-                       list_del(&p->list);
-                       write_unlock(&listmutex);
+                       list_del_rcu(&p->list);
+                       spin_unlock(&listmutex);
+                       synchronize_rcu();
                        kfree(p);
                        return 1;
                }
        }
-       write_unlock(&listmutex);
+       spin_unlock(&listmutex);
        return 0;
 }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock(&listmutex);             6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock(&listmutex);      10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock(&listmutex);          14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small. Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves
from a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently. In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.
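
For example, an update that moves an element from one list to another
might be sketched as follows (the lists and locking are illustrative
assumptions):

        /* Update-side lock held. */
        list_del_rcu(&p->list);
        list_add_rcu(&p->list, &other_list);

A concurrent RCU reader might see the element on neither list, or might
even be carried from the first list into the second in mid-traversal,
so this pattern requires special care -- for example, inserting a copy
of the element rather than moving it.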

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block. If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook. Here is the list, by category.

RCU list traversal:

        list_entry_rcu
        list_first_entry_rcu
        list_next_rcu
        list_for_each_entry_rcu
        list_for_each_entry_continue_rcu
        list_for_each_entry_from_rcu
        hlist_first_rcu
        hlist_next_rcu
        hlist_pprev_rcu
        hlist_for_each_entry_rcu
        hlist_for_each_entry_rcu_bh
        hlist_for_each_entry_from_rcu
        hlist_for_each_entry_continue_rcu
        hlist_for_each_entry_continue_rcu_bh
        hlist_nulls_first_rcu
        hlist_nulls_for_each_entry_rcu
        hlist_bl_first_rcu
        hlist_bl_for_each_entry_rcu

RCU pointer/list update:

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_add_behind_rcu
        hlist_add_before_rcu
        hlist_add_head_rcu
        hlist_del_rcu
        hlist_del_init_rcu
        hlist_replace_rcu
        list_splice_init_rcu
        hlist_nulls_del_init_rcu
        hlist_nulls_del_rcu
        hlist_nulls_add_head_rcu
        hlist_bl_add_head_rcu
        hlist_bl_del_init_rcu
        hlist_bl_del_rcu
        hlist_bl_set_first_rcu

RCU:    Critical sections       Grace period            Barrier

        rcu_read_lock           synchronize_net         rcu_barrier
        rcu_read_unlock         synchronize_rcu
        rcu_dereference         synchronize_rcu_expedited
        rcu_read_lock_held      call_rcu
        rcu_dereference_check   kfree_rcu
        rcu_dereference_protected

bh:     Critical sections       Grace period            Barrier

        rcu_read_lock_bh        call_rcu                rcu_barrier
        rcu_read_unlock_bh      synchronize_rcu
        [local_bh_disable]      synchronize_rcu_expedited
        [and friends]
        rcu_dereference_bh
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held

sched:  Critical sections       Grace period            Barrier

        rcu_read_lock_sched     call_rcu                rcu_barrier
        rcu_read_unlock_sched   synchronize_rcu
        [preempt_disable]       synchronize_rcu_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
        rcu_dereference_sched
        rcu_dereference_sched_check
        rcu_dereference_sched_protected
        rcu_read_lock_sched_held


SRCU:   Critical sections       Grace period            Barrier

        srcu_read_lock          call_srcu               srcu_barrier
        srcu_read_unlock        synchronize_srcu
        srcu_dereference        synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held

SRCU:   Initialization/cleanup
        DEFINE_SRCU
        DEFINE_STATIC_SRCU
        init_srcu_struct
        cleanup_srcu_struct

All: lockdep-checked RCU-protected pointer access

        rcu_access_pointer
        rcu_dereference_raw
        RCU_LOCKDEP_WARN
        rcu_sleep_check
        RCU_NONIDLE

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use? The following
list can be helpful:

a.      Will readers need to block? If so, you need SRCU.

b.      What about the -rt patchset? If readers would need to block
        in a non-rt kernel, you need SRCU. If readers would block
        in a -rt kernel, but not in a non-rt kernel, SRCU is not
        necessary. (The -rt patchset turns spinlocks into sleeplocks,
        hence this distinction.)

c.      Do you need to treat NMI handlers, hardirq handlers,
        and code segments with preemption disabled (whether
        via preempt_disable(), local_irq_save(), local_bh_disable(),
        or some other mechanism) as if they were explicit RCU readers?
        If so, RCU-sched is the only choice that will work for you.

d.      Do you need RCU grace periods to complete even in the face
        of softirq monopolization of one or more of the CPUs? For
        example, is your code subject to network-based denial-of-service
        attacks? If so, you should disable softirq across your readers,
        for example, by using rcu_read_lock_bh().

e.      Is your workload too update-intensive for normal use of
        RCU, but inappropriate for other synchronization mechanisms?
        If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
        named SLAB_DESTROY_BY_RCU). But please be careful!

f.      Do you need read-side critical sections that are respected
        even though they are in the middle of the idle loop, during
        user-mode execution, or on an offlined CPU? If so, SRCU is the
        only choice that will work for you.

g.      Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.



8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive? How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel? [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.      CPU 0 acquires some unrelated lock, call it
                        "problematic_lock", disabling irq via
                        spin_lock_irqsave().

                2.      CPU 1 enters synchronize_rcu(), write-acquiring
                        rcu_gp_mutex.

                3.      CPU 0 enters rcu_read_lock(), but must wait
                        because CPU 1 holds rcu_gp_mutex.

                4.      CPU 1 is interrupted, and the irq handler
                        attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks. In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu(). To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex. Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet). The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table. Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach. Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex. But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected. ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections. It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency? Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory). In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted. Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something. And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.