Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.	What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.	What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.	RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.	The RCU API, 2010 Edition    http://lwn.net/Articles/418853/
	2010 Big API Table           http://lwn.net/Articles/419086/
5.	The RCU API, 2014 Edition    http://lwn.net/Articles/609904/
	2014 Big API Table           http://lwn.net/Articles/609973/

What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand it,
getting there can sometimes be a challenge.  Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

1.	RCU OVERVIEW
2.	WHAT IS RCU'S CORE API?
3.	WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.	WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.	WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.	ANALOGY WITH READER-WRITER LOCKING
7.	FULL LIST OF RCU APIs
8.	ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)


1.  RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt
any readers concurrently referencing those data items, the reclamation
phase must not start until readers no longer hold references to those
data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following:

a.	Remove pointers to a data structure, so that subsequent
	readers cannot gain a reference to it.

b.	Wait for all previous readers to complete their RCU read-side
	critical sections.

c.	At this point, there cannot be any readers who hold references
	to the data structure, so it now may safely be reclaimed
	(e.g., kfree()d).
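
In terms of the core API introduced in Section 2 below, a minimal
sketch of this sequence might look as follows, with gptr standing in
for some RCU-protected global pointer and with locking omitted:

	p = gptr;
	rcu_assign_pointer(gptr, NULL);	/* Step (a): unpublish the pointer. */
	synchronize_rcu();		/* Step (b): wait for pre-existing readers. */
	kfree(p);			/* Step (c): reclaim. */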

Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.


2.  WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.	rcu_read_lock()
b.	rcu_read_unlock()
c.	synchronize_rcu() / call_rcu()
d.	rcu_assign_pointer()
e.	rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the others will be
enumerated later.  See the kernel docbook documentation for more info,
or look directly at the function header comments.

rcu_read_lock()

	void rcu_read_lock(void);

	Used by a reader to inform the reclaimer that the reader is
	entering an RCU read-side critical section.  It is illegal
	to block while in an RCU read-side critical section, though
	kernels built with CONFIG_PREEMPT_RCU can preempt RCU
	read-side critical sections.  Any RCU-protected data structure
	accessed during an RCU read-side critical section is guaranteed to
	remain unreclaimed for the full duration of that critical section.
	Reference counts may be used in conjunction with RCU to maintain
	longer-term references to data structures.
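
	For example, one sketch of the reference-count pattern mentioned
	above, assuming a hypothetical gp pointer and refcnt field, and
	an updater that also frees only via that reference count:

		rcu_read_lock();
		p = rcu_dereference(gp);
		if (p)
			atomic_inc(&p->refcnt);	/* Legal: p cannot yet be freed. */
		rcu_read_unlock();
		/* p may now be used even outside the critical
		   section, until the reference is dropped. */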

rcu_read_unlock()

	void rcu_read_unlock(void);

	Used by a reader to inform the reclaimer that the reader is
	exiting an RCU read-side critical section.  Note that RCU
	read-side critical sections may be nested and/or overlapping.
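
	For example, the following sketch of nested read-side critical
	sections is perfectly legal, with the do_something() calls
	being illustrative placeholders:

		rcu_read_lock();	/* Outermost critical section. */
		do_something();
		rcu_read_lock();	/* Nested critical section. */
		do_something_else();
		rcu_read_unlock();	/* Still inside the outer section. */
		rcu_read_unlock();	/* Outermost section ends here. */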

synchronize_rcu()

	void synchronize_rcu(void);

	Marks the end of updater code and the beginning of reclaimer
	code.  It does this by blocking until all pre-existing RCU
	read-side critical sections on all CPUs have completed.
	Note that synchronize_rcu() will -not- necessarily wait for
	any subsequent RCU read-side critical sections to complete.
	For example, consider the following sequence of events:

	         CPU 0                  CPU 1                 CPU 2
	     -----------------    -------------------------    ---------------
	 1.  rcu_read_lock()
	 2.                       enters synchronize_rcu()
	 3.                                                    rcu_read_lock()
	 4.  rcu_read_unlock()
	 5.                       exits synchronize_rcu()
	 6.                                                    rcu_read_unlock()

	To reiterate, synchronize_rcu() waits only for ongoing RCU
	read-side critical sections to complete, not necessarily for
	any that begin after synchronize_rcu() is invoked.

	Of course, synchronize_rcu() does not necessarily return
	-immediately- after the last pre-existing RCU read-side critical
	section completes.  For one thing, there might well be scheduling
	delays.  For another thing, many RCU implementations process
	requests in batches in order to improve efficiencies, which can
	further delay synchronize_rcu().

	Since synchronize_rcu() is the API that must figure out when
	readers are done, its implementation is key to RCU.  For RCU
	to be useful in all but the most read-intensive situations,
	synchronize_rcu()'s overhead must also be quite small.

	The call_rcu() API is a callback form of synchronize_rcu(),
	and is described in more detail in a later section.  Instead of
	blocking, it registers a function and argument which are invoked
	after all ongoing RCU read-side critical sections have completed.
	This callback variant is particularly useful in situations where
	it is illegal to block or where update-side performance is
	critically important.

	However, the call_rcu() API should not be used lightly, as use
	of the synchronize_rcu() API generally results in simpler code.
	In addition, the synchronize_rcu() API has the nice property
	of automatically limiting update rate should grace periods
	be delayed.  This property results in system resilience in the
	face of denial-of-service attacks.  Code using call_rcu() should
	limit update rate in order to gain this same sort of resilience.
	See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

	typeof(p) rcu_assign_pointer(p, typeof(p) v);

	Yes, rcu_assign_pointer() -is- implemented as a macro, though it
	would be cool to be able to declare a function in this manner.
	(Compiler experts will no doubt disagree.)

	The updater uses this function to assign a new value to an
	RCU-protected pointer, in order to safely communicate the change
	in value from the updater to the reader.  This function returns
	the new value, and also executes any memory-barrier instructions
	required for a given CPU architecture.

	Perhaps just as important, it serves to document (1) which
	pointers are protected by RCU and (2) the point at which a
	given structure becomes accessible to other CPUs.  That said,
	rcu_assign_pointer() is most frequently used indirectly, via
	the _rcu list-manipulation primitives such as list_add_rcu().
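
	For example, a minimal sketch of direct use, publishing a
	newly initialized structure via an RCU-protected global
	pointer (gp and the fields are illustrative):

		p = kmalloc(sizeof(*p), GFP_KERNEL);
		p->a = 1;
		p->b = 2;
		rcu_assign_pointer(gp, p);

	The memory-barrier semantics of rcu_assign_pointer() guarantee
	that readers dereferencing gp see the initialized fields, never
	the uninitialized contents of the freshly allocated structure.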

rcu_dereference()

	typeof(p) rcu_dereference(p);

	Like rcu_assign_pointer(), rcu_dereference() must be implemented
	as a macro.

	The reader uses rcu_dereference() to fetch an RCU-protected
	pointer, which returns a value that may then be safely
	dereferenced.  Note that rcu_dereference() does not actually
	dereference the pointer; instead, it protects the pointer for
	later dereferencing.  It also executes any needed memory-barrier
	instructions for a given CPU architecture.  Currently, only Alpha
	needs memory barriers within rcu_dereference() -- on other CPUs,
	it compiles to nothing, not even a compiler directive.

	Common coding practice uses rcu_dereference() to copy an
	RCU-protected pointer to a local variable, then dereferences
	this local variable, for example as follows:

		p = rcu_dereference(head.next);
		return p->data;

	However, in this case, one could just as easily combine these
	into one statement:

		return rcu_dereference(head.next)->data;

	If you are going to be fetching multiple fields from the
	RCU-protected structure, using the local variable is of
	course preferred.  Repeated rcu_dereference() calls look
	ugly, do not guarantee that the same pointer will be returned
	if an update happened while in the critical section, and incur
	unnecessary overhead on Alpha CPUs.

	Note that the value returned by rcu_dereference() is valid
	only within the enclosing RCU read-side critical section.
	For example, the following is -not- legal:

		rcu_read_lock();
		p = rcu_dereference(head.next);
		rcu_read_unlock();
		x = p->address;	/* BUG!!! */
		rcu_read_lock();
		y = p->data;	/* BUG!!! */
		rcu_read_unlock();

	Holding a reference from one RCU read-side critical section
	to another is just as illegal as holding a reference from
	one lock-based critical section to another!  Similarly,
	using a reference outside of the critical section in which
	it was acquired is just as illegal as doing so with normal
	locking.

	As with rcu_assign_pointer(), an important function of
	rcu_dereference() is to document which pointers are protected by
	RCU, in particular, flagging a pointer that is subject to changing
	at any time, including immediately after the rcu_dereference().
	And, again like rcu_assign_pointer(), rcu_dereference() is
	typically used indirectly, via the _rcu list-manipulation
	primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.

	rcu_assign_pointer()
	                            +--------+
	    +---------------------->| reader |---------+
	    |                       +--------+         |
	    |                           |              |
	    |                           |              | Protect:
	    |                           |              | rcu_read_lock()
	    |                           |              | rcu_read_unlock()
	    |        rcu_dereference()  |              |
	+---------+                     |              |
	| updater |<--------------------+              |
	+---------+                                    V
	    |                                    +-----------+
	    +----------------------------------->| reclaimer |
	                                         +-----------+
	      Defer:
	      synchronize_rcu() & call_rcu()

The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

	Defer			Protect

a.	synchronize_rcu()	rcu_read_lock() / rcu_read_unlock()
	call_rcu()		rcu_dereference()

b.	synchronize_rcu_bh()	rcu_read_lock_bh() / rcu_read_unlock_bh()
	call_rcu_bh()		rcu_dereference_bh()

c.	synchronize_sched()	rcu_read_lock_sched() / rcu_read_unlock_sched()
	call_rcu_sched()	preempt_disable() / preempt_enable()
				local_irq_save() / local_irq_restore()
				hardirq enter / hardirq exit
				NMI enter / NMI exit
				rcu_dereference_sched()

These three mechanisms are used as follows:

a.	RCU applied to normal data structures.

b.	RCU applied to networking data structures that may be subjected
	to remote denial-of-service attacks.

c.	RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.
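
For example, a minimal sketch of the read side of mechanism (b), with
gp and do_something_with() as illustrative names:

	rcu_read_lock_bh();
	p = rcu_dereference_bh(gp);
	if (p)
		do_something_with(p->a);
	rcu_read_unlock_bh();

The corresponding updater would then use call_rcu_bh() or
synchronize_rcu_bh() to defer reclamation.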


3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

	struct foo {
		int a;
		char b;
		long c;
	};
	DEFINE_SPINLOCK(foo_mutex);

	struct foo __rcu *gbl_foo;

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses synchronize_rcu() to ensure that any readers that might
	 * have references to the old structure complete before freeing
	 * the old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		synchronize_rcu();
		kfree(old_fp);
	}

	/*
	 * Return the value of field "a" of the current gbl_foo
	 * structure.  Use rcu_read_lock() and rcu_read_unlock()
	 * to ensure that the structure does not get deleted out
	 * from under us, and use rcu_dereference() to ensure that
	 * we see the initialized version of the structure (important
	 * for DEC Alpha and for people reading the code).
	 */
	int foo_get_a(void)
	{
		int retval;

		rcu_read_lock();
		retval = rcu_dereference(gbl_foo)->a;
		rcu_read_unlock();
		return retval;
	}

So, to sum up:

o	Use rcu_read_lock() and rcu_read_unlock() to guard RCU
	read-side critical sections.

o	Within an RCU read-side critical section, use rcu_dereference()
	to dereference RCU-protected pointers.

o	Use some solid scheme (such as locks or semaphores) to
	keep concurrent updates from interfering with each other.

o	Use rcu_assign_pointer() to update an RCU-protected pointer.
	This primitive protects concurrent readers from the updater,
	-not- concurrent updates from each other!  You therefore still
	need to use locking (or something similar) to keep concurrent
	rcu_assign_pointer() primitives from interfering with each other.

o	Use synchronize_rcu() -after- removing a data element from an
	RCU-protected data structure, but -before- reclaiming/freeing
	the data element, in order to wait for the completion of all
	RCU read-side critical sections that might be referencing that
	data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.

4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

	void call_rcu(struct rcu_head *head,
		      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

	struct foo {
		int a;
		char b;
		long c;
		struct rcu_head rcu;
	};

The foo_update_a() function might then be written as follows:

	/*
	 * Create a new struct foo that is the same as the one currently
	 * pointed to by gbl_foo, except that field "a" is replaced
	 * with "new_a".  Points gbl_foo to the new structure, and
	 * frees up the old structure after a grace period.
	 *
	 * Uses rcu_assign_pointer() to ensure that concurrent readers
	 * see the initialized version of the new structure.
	 *
	 * Uses call_rcu() to ensure that any readers that might have
	 * references to the old structure complete before freeing the
	 * old structure.
	 */
	void foo_update_a(int new_a)
	{
		struct foo *new_fp;
		struct foo *old_fp;

		new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
		spin_lock(&foo_mutex);
		old_fp = rcu_dereference_protected(gbl_foo, lockdep_is_held(&foo_mutex));
		*new_fp = *old_fp;
		new_fp->a = new_a;
		rcu_assign_pointer(gbl_foo, new_fp);
		spin_unlock(&foo_mutex);
		call_rcu(&old_fp->rcu, foo_reclaim);
	}

The foo_reclaim() function might appear as follows:

	void foo_reclaim(struct rcu_head *rp)
	{
		struct foo *fp = container_of(rp, struct foo, rcu);

		foo_cleanup(fp->a);

		kfree(fp);
	}

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.
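
For reference, a minimal sketch of what container_of() computes (the
kernel's real macro adds type checking around this pointer arithmetic):

	#define container_of(ptr, type, member) \
		((type *)((char *)(ptr) - offsetof(type, member)))

Thus foo_reclaim() recovers the enclosing struct foo by subtracting the
offset of the rcu field from the rcu_head pointer that RCU passes to
the callback.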

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o	Use call_rcu() -after- removing a data element from an
	RCU-protected data structure in order to register a callback
	function that will be invoked after the completion of all RCU
	read-side critical sections that might be referencing that
	data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

	kfree_rcu(old_fp, rcu);

Again, see checklist.txt for additional rules governing the use of RCU.


5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

	http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.

5A.  "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.  It also assumes recursive
reader-writer locks:  If you try this with non-recursive locks, and
you allow nested rcu_read_lock() calls, you can deadlock.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

	static DEFINE_RWLOCK(rcu_gp_mutex);

	void rcu_read_lock(void)
	{
		read_lock(&rcu_gp_mutex);
	}

	void rcu_read_unlock(void)
	{
		read_unlock(&rcu_gp_mutex);
	}

	void synchronize_rcu(void)
	{
		write_lock(&rcu_gp_mutex);
		write_unlock(&rcu_gp_mutex);
	}

[You can ignore rcu_assign_pointer() and rcu_dereference() without missing
much.  But here are simplified versions anyway.  And whatever you do,
don't forget about them when submitting patches making use of RCU!]

	#define rcu_assign_pointer(p, v) \
	({ \
		smp_store_release(&(p), (v)); \
	})

	#define rcu_dereference(p) \
	({ \
		typeof(p) _________p1 = READ_ONCE(p); \
		(_________p1); \
	})

The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  How could this deadlock be avoided?

5B.  "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

	void rcu_read_lock(void) { }

	void rcu_read_unlock(void) { }

	void synchronize_rcu(void)
	{
		int cpu;

		for_each_possible_cpu(cpu)
			run_on(cpu);
	}

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Quick Quiz #3:	If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???


6.  ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

	@@ -5,5 +5,5 @@ struct el {
	 	int data;
	 	/* Other data fields */
	 };
	-rwlock_t listmutex;
	+spinlock_t listmutex;
	 struct el head;

	@@ -13,15 +14,15 @@
		struct list_head *lp;
		struct el *p;

	-	read_lock(&listmutex);
	-	list_for_each_entry(p, head, lp) {
	+	rcu_read_lock();
	+	list_for_each_entry_rcu(p, head, lp) {
			if (p->key == key) {
				*result = p->data;
	-			read_unlock(&listmutex);
	+			rcu_read_unlock();
				return 1;
			}
		}
	-	read_unlock(&listmutex);
	+	rcu_read_unlock();
		return 0;
	}

	@@ -29,15 +30,16 @@
	{
		struct el *p;

	-	write_lock(&listmutex);
	+	spin_lock(&listmutex);
		list_for_each_entry(p, head, lp) {
			if (p->key == key) {
	-			list_del(&p->list);
	-			write_unlock(&listmutex);
	+			list_del_rcu(&p->list);
	+			spin_unlock(&listmutex);
	+			synchronize_rcu();
				kfree(p);
				return 1;
			}
		}
	-	write_unlock(&listmutex);
	+	spin_unlock(&listmutex);
		return 0;
	}

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 rwlock_t listmutex;                  8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock(&listmutex);             6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock(&listmutex);      10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock(&listmutex);          14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().
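
For example, assuming that struct el gained a struct rcu_head field
named rcu (as struct foo did in Section 4), the blocking portion of the
RCU-based delete() above might instead be sketched as:

	list_del_rcu(&p->list);
	spin_unlock(&listmutex);
	kfree_rcu(p, rcu);	/* Reclaim deferred; no blocking here. */
	return 1;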


7.  FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

	list_entry_rcu
	list_first_entry_rcu
	list_next_rcu
	list_for_each_entry_rcu
	list_for_each_entry_continue_rcu
	hlist_first_rcu
	hlist_next_rcu
	hlist_pprev_rcu
	hlist_for_each_entry_rcu
	hlist_for_each_entry_rcu_bh
	hlist_for_each_entry_continue_rcu
	hlist_for_each_entry_continue_rcu_bh
	hlist_nulls_first_rcu
	hlist_nulls_for_each_entry_rcu
	hlist_bl_first_rcu
	hlist_bl_for_each_entry_rcu

RCU pointer/list update:

	rcu_assign_pointer
	list_add_rcu
	list_add_tail_rcu
	list_del_rcu
	list_replace_rcu
	hlist_add_behind_rcu
	hlist_add_before_rcu
	hlist_add_head_rcu
	hlist_del_rcu
	hlist_del_init_rcu
	hlist_replace_rcu
	list_splice_init_rcu
	hlist_nulls_del_init_rcu
	hlist_nulls_del_rcu
	hlist_nulls_add_head_rcu
	hlist_bl_add_head_rcu
	hlist_bl_del_init_rcu
	hlist_bl_del_rcu
	hlist_bl_set_first_rcu

RCU:	Critical sections	Grace period		Barrier

	rcu_read_lock		synchronize_net		rcu_barrier
	rcu_read_unlock		synchronize_rcu
	rcu_dereference		synchronize_rcu_expedited
	rcu_read_lock_held	call_rcu
	rcu_dereference_check	kfree_rcu
	rcu_dereference_protected

bh:	Critical sections	Grace period		Barrier

	rcu_read_lock_bh	call_rcu_bh		rcu_barrier_bh
	rcu_read_unlock_bh	synchronize_rcu_bh
	rcu_dereference_bh	synchronize_rcu_bh_expedited
	rcu_dereference_bh_check
	rcu_dereference_bh_protected
	rcu_read_lock_bh_held

sched:	Critical sections	Grace period		Barrier

	rcu_read_lock_sched	synchronize_sched	rcu_barrier_sched
	rcu_read_unlock_sched	call_rcu_sched
	[preempt_disable]	synchronize_sched_expedited
	[and friends]
	rcu_read_lock_sched_notrace
	rcu_read_unlock_sched_notrace
	rcu_dereference_sched
	rcu_dereference_sched_check
	rcu_dereference_sched_protected
	rcu_read_lock_sched_held


SRCU:	Critical sections	Grace period		Barrier

	srcu_read_lock		synchronize_srcu	srcu_barrier
	srcu_read_unlock	call_srcu
	srcu_dereference	synchronize_srcu_expedited
	srcu_dereference_check
	srcu_read_lock_held

SRCU:	Initialization/cleanup
	DEFINE_SRCU
	DEFINE_STATIC_SRCU
	init_srcu_struct
	cleanup_srcu_struct

All:  lockdep-checked RCU-protected pointer access

	rcu_access_pointer
	rcu_dereference_raw
	RCU_LOCKDEP_WARN
	rcu_sleep_check
	RCU_NONIDLE

See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.	Will readers need to block?  If so, you need SRCU.

b.	What about the -rt patchset?  If readers would need to block
	in a non-rt kernel, you need SRCU.  If readers would block
	in a -rt kernel, but not in a non-rt kernel, SRCU is not
	necessary.  (The -rt patchset turns spinlocks into sleeplocks,
	hence this distinction.)

c.	Do you need to treat NMI handlers, hardirq handlers,
	and code segments with preemption disabled (whether
	via preempt_disable(), local_irq_save(), local_bh_disable(),
	or some other mechanism) as if they were explicit RCU readers?
	If so, RCU-sched is the only choice that will work for you.

d.	Do you need RCU grace periods to complete even in the face
	of softirq monopolization of one or more of the CPUs?  For
	example, is your code subject to network-based denial-of-service
	attacks?  If so, you need RCU-bh.

e.	Is your workload too update-intensive for normal use of
	RCU, but inappropriate for other synchronization mechanisms?
	If so, consider SLAB_TYPESAFE_BY_RCU (which was originally
	named SLAB_DESTROY_BY_RCU).  But please be careful!

f.	Do you need read-side critical sections that are respected
	even though they are in the middle of the idle loop, during
	user-mode execution, or on an offlined CPU?  If so, SRCU is the
	only choice that will work for you.

g.	Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.
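
For completeness, a minimal sketch of the SRCU read side chosen by
options (a) and (f), with my_srcu assumed to have been set up via
DEFINE_SRCU(my_srcu):

	int idx;

	idx = srcu_read_lock(&my_srcu);
	/* Read-side critical section, which is permitted to block. */
	srcu_read_unlock(&my_srcu, idx);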


8.  ANSWERS TO QUICK QUIZZES

Quick Quiz #1:	Why is this argument naive?  How could a deadlock
		occur when using this algorithm in a real-world Linux
		kernel?  [Referring to the lock-based "toy" RCU
		algorithm.]

Answer:		Consider the following sequence of events:

		1.	CPU 0 acquires some unrelated lock, call it
			"problematic_lock", disabling irq via
			spin_lock_irqsave().

		2.	CPU 1 enters synchronize_rcu(), write-acquiring
			rcu_gp_mutex.

		3.	CPU 0 enters rcu_read_lock(), but must wait
			because CPU 1 holds rcu_gp_mutex.

		4.	CPU 1 is interrupted, and the irq handler
			attempts to acquire problematic_lock.

		The system is now deadlocked.

		One way to avoid this deadlock is to use an approach like
		that of CONFIG_PREEMPT_RT, where all normal spinlocks
		become blocking locks, and all irq handlers execute in
		the context of special tasks.  In this case, in step 4
		above, the irq handler would block, allowing CPU 1 to
		release rcu_gp_mutex, avoiding the deadlock.

		Even in the absence of deadlock, this RCU implementation
		allows latency to "bleed" from readers to other
		readers through synchronize_rcu().  To see this,
		consider task A in an RCU read-side critical section
		(thus read-holding rcu_gp_mutex), task B blocked
		attempting to write-acquire rcu_gp_mutex, and
		task C blocked in rcu_read_lock() attempting to
		read-acquire rcu_gp_mutex.  Task A's RCU read-side
		latency is holding up task C, albeit indirectly via
		task B.

		Realtime RCU implementations therefore use a counter-based
		approach where tasks in RCU read-side critical sections
		cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:	Give an example where Classic RCU's read-side
		overhead is -negative-.

Answer:		Imagine a single-CPU system with a non-CONFIG_PREEMPT
		kernel where a routing table is used by process-context
		code, but can be updated by irq-context code (for example,
		by an "ICMP REDIRECT" packet).  The usual way of handling
		this would be to have the process-context code disable
		interrupts while searching the routing table.  Use of
		RCU allows such interrupt-disabling to be dispensed with.
		Thus, without RCU, you pay the cost of disabling interrupts,
		and with RCU you don't.

		One can argue that the overhead of RCU in this
		case is negative with respect to the single-CPU
		interrupt-disabling approach.  Others might argue that
		the overhead of RCU is merely zero, and that replacing
		the positive overhead of the interrupt-disabling scheme
		with the zero-overhead RCU scheme does not constitute
		negative overhead.

		In real life, of course, things are more complex.  But
		even the theoretical possibility of negative overhead for
		a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:	If it is illegal to block in an RCU read-side
		critical section, what the heck do you do in
		PREEMPT_RT, where normal spinlocks can block???

Answer:		Just as PREEMPT_RT permits preemption of spinlock
		critical sections, it permits preemption of RCU
		read-side critical sections.  It also permits
		spinlocks blocking while in RCU read-side critical
		sections.

		Why the apparent inconsistency?  Because it is
		possible to use priority boosting to keep the RCU
		grace periods short if need be (for example, if running
		short of memory).  In contrast, if blocking waiting
		for (say) network reception, there is no way to know
		what should be boosted.  Especially given that the
		process we need to boost might well be a human being
		who just went out for a pizza or something.  And although
		a computer-operated cattle prod might arouse serious
		interest, it might also provoke serious objections.
		Besides, how does the computer know what pizza parlor
		the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.