Please note that the "What is RCU?" LWN series is an excellent place
to start learning about RCU:

1.  What is RCU, Fundamentally?  http://lwn.net/Articles/262464/
2.  What is RCU? Part 2: Usage   http://lwn.net/Articles/263130/
3.  RCU part 3: the RCU API      http://lwn.net/Articles/264090/
4.  The RCU API, 2010 Edition    http://lwn.net/Articles/418853/

What is RCU?

RCU is a synchronization mechanism that was added to the Linux kernel
during the 2.5 development effort and that is optimized for read-mostly
situations.  Although RCU is actually quite simple once you understand it,
getting there can sometimes be a challenge.  Part of the problem is that
most of the past descriptions of RCU have been written with the mistaken
assumption that there is "one true way" to describe RCU.  Instead,
the experience has been that different people must take different paths
to arrive at an understanding of RCU.  This document provides several
different paths, as follows:

1.  RCU OVERVIEW
2.  WHAT IS RCU'S CORE API?
3.  WHAT ARE SOME EXAMPLE USES OF CORE RCU API?
4.  WHAT IF MY UPDATING THREAD CANNOT BLOCK?
5.  WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?
6.  ANALOGY WITH READER-WRITER LOCKING
7.  FULL LIST OF RCU APIs
8.  ANSWERS TO QUICK QUIZZES

People who prefer starting with a conceptual overview should focus on
Section 1, though most readers will profit by reading this section at
some point.  People who prefer to start with an API that they can then
experiment with should focus on Section 2.  People who prefer to start
with example uses should focus on Sections 3 and 4.  People who need to
understand the RCU implementation should focus on Section 5, then dive
into the kernel source code.  People who reason best by analogy should
focus on Section 6.  Section 7 serves as an index to the docbook API
documentation, and Section 8 is the traditional answer key.

So, start with the section that makes the most sense to you and your
preferred method of learning.  If you need to know everything about
everything, feel free to read the whole thing -- but if you are really
that type of person, you have perused the source code and will therefore
never need this document anyway.  ;-)

1. RCU OVERVIEW

The basic idea behind RCU is to split updates into "removal" and
"reclamation" phases.  The removal phase removes references to data items
within a data structure (possibly by replacing them with references to
new versions of these data items), and can run concurrently with readers.
The reason that it is safe to run the removal phase concurrently with
readers is that the semantics of modern CPUs guarantee that readers will
see either the old or the new version of the data structure rather than
a partially updated reference.  The reclamation phase does the work of
reclaiming (e.g., freeing) the data items removed from the data structure
during the removal phase.  Because reclaiming data items can disrupt any
readers concurrently referencing those data items, the reclamation phase
must not start until readers no longer hold references to those data items.

Splitting the update into removal and reclamation phases permits the
updater to perform the removal phase immediately, and to defer the
reclamation phase until all readers active during the removal phase have
completed, either by blocking until they finish or by registering a
callback that is invoked after they finish.  Only readers that are active
during the removal phase need be considered, because any reader starting
after the removal phase will be unable to gain a reference to the removed
data items, and therefore cannot be disrupted by the reclamation phase.

So the typical RCU update sequence goes something like the following
(see also the minimal sketch after this list):

a.  Remove pointers to a data structure, so that subsequent
    readers cannot gain a reference to it.

b.  Wait for all previous readers to complete their RCU read-side
    critical sections.

c.  At this point, there cannot be any readers who hold references
    to the data structure, so it now may safely be reclaimed
    (e.g., kfree()d).

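The following is a minimal sketch of steps (a)-(c), assuming a hypothetical
RCU-protected global pointer "gptr" (of a hypothetical type struct foo *)
guarded on the update side by a hypothetical "gptr_lock"; the primitives
used here are introduced in Section 2:

        struct foo *old;

        spin_lock(&gptr_lock);
        old = gptr;
        rcu_assign_pointer(gptr, NULL); /* (a) remove the pointer */
        spin_unlock(&gptr_lock);
        synchronize_rcu();              /* (b) wait for pre-existing readers */
        kfree(old);                     /* (c) reclaim the structure */
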
Step (b) above is the key idea underlying RCU's deferred destruction.
The ability to wait until all readers are done allows RCU readers to
use much lighter-weight synchronization, in some cases, absolutely no
synchronization at all.  In contrast, in more conventional lock-based
schemes, readers must use heavy-weight synchronization in order to
prevent an updater from deleting the data structure out from under them.
This is because lock-based updaters typically update data items in place,
and must therefore exclude readers.  In contrast, RCU-based updaters
typically take advantage of the fact that writes to single aligned
pointers are atomic on modern CPUs, allowing atomic insertion, removal,
and replacement of data items in a linked structure without disrupting
readers.  Concurrent RCU readers can then continue accessing the old
versions, and can dispense with the atomic operations, memory barriers,
and communications cache misses that are so expensive on present-day
SMP computer systems, even in the absence of lock contention.

In the three-step procedure shown above, the updater is performing both
the removal and the reclamation step, but it is often helpful for an
entirely different thread to do the reclamation, as is in fact the case
in the Linux kernel's directory-entry cache (dcache).  Even if the same
thread performs both the update step (step (a) above) and the reclamation
step (step (c) above), it is often helpful to think of them separately.
For example, RCU readers and updaters need not communicate at all,
but RCU provides implicit low-overhead communication between readers
and reclaimers, namely, in step (b) above.

So how the heck can a reclaimer tell when a reader is done, given
that readers are not doing any sort of synchronization operations???
Read on to learn about how RCU's API makes this easy.

2. WHAT IS RCU'S CORE API?

The core RCU API is quite small:

a.  rcu_read_lock()
b.  rcu_read_unlock()
c.  synchronize_rcu() / call_rcu()
d.  rcu_assign_pointer()
e.  rcu_dereference()

There are many other members of the RCU API, but the rest can be
expressed in terms of these five, though most implementations instead
express synchronize_rcu() in terms of the call_rcu() callback API.

The five core RCU APIs are described below; the other members of the
API are enumerated later.  See the kernel docbook documentation for
more info, or look directly at the function header comments.

rcu_read_lock()

        void rcu_read_lock(void);

        Used by a reader to inform the reclaimer that the reader is
        entering an RCU read-side critical section.  It is illegal
        to block while in an RCU read-side critical section, though
        kernels built with CONFIG_TREE_PREEMPT_RCU can preempt RCU
        read-side critical sections.  Any RCU-protected data structure
        accessed during an RCU read-side critical section is guaranteed to
        remain unreclaimed for the full duration of that critical section.
        Reference counts may be used in conjunction with RCU to maintain
        longer-term references to data structures.

rcu_read_unlock()

        void rcu_read_unlock(void);

        Used by a reader to inform the reclaimer that the reader is
        exiting an RCU read-side critical section.  Note that RCU
        read-side critical sections may be nested and/or overlapping.

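        For example, a reader might use these two primitives as in the
        following sketch, which assumes a hypothetical RCU-protected
        pointer "gptr" and a hypothetical non-blocking helper
        do_something_with():

                rcu_read_lock();
                p = rcu_dereference(gptr);
                if (p)
                        do_something_with(p);   /* must not block */
                rcu_read_unlock();
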
synchronize_rcu()

        void synchronize_rcu(void);

        Marks the end of updater code and the beginning of reclaimer
        code.  It does this by blocking until all pre-existing RCU
        read-side critical sections on all CPUs have completed.
        Note that synchronize_rcu() will -not- necessarily wait for
        any subsequent RCU read-side critical sections to complete.
        For example, consider the following sequence of events:

              CPU 0                 CPU 1                        CPU 2
         -----------------     -------------------------     ---------------
         1.  rcu_read_lock()
         2.                    enters synchronize_rcu()
         3.                                                   rcu_read_lock()
         4.  rcu_read_unlock()
         5.                    exits synchronize_rcu()
         6.                                                   rcu_read_unlock()

        To reiterate, synchronize_rcu() waits only for ongoing RCU
        read-side critical sections to complete, not necessarily for
        any that begin after synchronize_rcu() is invoked.

        Of course, synchronize_rcu() does not necessarily return
        -immediately- after the last pre-existing RCU read-side critical
        section completes.  For one thing, there might well be scheduling
        delays.  For another thing, many RCU implementations process
        requests in batches in order to improve efficiency, which can
        further delay synchronize_rcu().

        Since synchronize_rcu() is the API that must figure out when
        readers are done, its implementation is key to RCU.  For RCU
        to be useful in all but the most read-intensive situations,
        synchronize_rcu()'s overhead must also be quite small.

        The call_rcu() API is a callback form of synchronize_rcu(),
        and is described in more detail in a later section.  Instead of
        blocking, it registers a function and argument which are invoked
        after all ongoing RCU read-side critical sections have completed.
        This callback variant is particularly useful in situations where
        it is illegal to block or where update-side performance is
        critically important.

        However, the call_rcu() API should not be used lightly, as use
        of the synchronize_rcu() API generally results in simpler code.
        In addition, the synchronize_rcu() API has the nice property
        of automatically limiting update rate should grace periods
        be delayed.  This property results in system resilience in the
        face of denial-of-service attacks.  Code using call_rcu() should
        limit update rate in order to gain this same sort of resilience.
        See checklist.txt for some approaches to limiting the update rate.

rcu_assign_pointer()

        typeof(p) rcu_assign_pointer(p, typeof(p) v);

        Yes, rcu_assign_pointer() -is- implemented as a macro, though it
        would be cool to be able to declare a function in this manner.
        (Compiler experts will no doubt disagree.)

        The updater uses this function to assign a new value to an
        RCU-protected pointer, in order to safely communicate the change
        in value from the updater to the reader.  This function returns
        the new value, and also executes any memory-barrier instructions
        required for a given CPU architecture.

        Perhaps just as important, it serves to document (1) which
        pointers are protected by RCU and (2) the point at which a
        given structure becomes accessible to other CPUs.  That said,
        rcu_assign_pointer() is most frequently used indirectly, via
        the _rcu list-manipulation primitives such as list_add_rcu().

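        As a sketch of direct use, assuming a hypothetical RCU-protected
        global pointer "gptr" to a hypothetical struct foo with fields
        "a" and "b", the updater fully initializes the new structure
        before publishing it:

                struct foo *p = kmalloc(sizeof(*p), GFP_KERNEL);

                if (!p)
                        return -ENOMEM;
                p->a = 1;                       /* initialize everything... */
                p->b = 2;
                rcu_assign_pointer(gptr, p);    /* ...then publish to readers */
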
rcu_dereference()

        typeof(p) rcu_dereference(p);

        Like rcu_assign_pointer(), rcu_dereference() must be implemented
        as a macro.

        The reader uses rcu_dereference() to fetch an RCU-protected
        pointer, which returns a value that may then be safely
        dereferenced.  Note that rcu_dereference() does not actually
        dereference the pointer; instead, it protects the pointer for
        later dereferencing.  It also executes any needed memory-barrier
        instructions for a given CPU architecture.  Currently, only Alpha
        needs memory barriers within rcu_dereference() -- on other CPUs,
        it compiles to nothing, not even a compiler directive.

        Common coding practice uses rcu_dereference() to copy an
        RCU-protected pointer to a local variable, then dereferences
        this local variable, for example as follows:

                p = rcu_dereference(head.next);
                return p->data;

        However, in this case, one could just as easily combine these
        into one statement:

                return rcu_dereference(head.next)->data;

        If you are going to be fetching multiple fields from the
        RCU-protected structure, using the local variable is of
        course preferred.  Repeated rcu_dereference() calls look
        ugly and incur unnecessary overhead on Alpha CPUs.

        Note that the value returned by rcu_dereference() is valid
        only within the enclosing RCU read-side critical section.
        For example, the following is -not- legal:

                rcu_read_lock();
                p = rcu_dereference(head.next);
                rcu_read_unlock();
                x = p->address; /* BUG!!! */
                rcu_read_lock();
                y = p->data;    /* BUG!!! */
                rcu_read_unlock();

        Holding a reference from one RCU read-side critical section
        to another is just as illegal as holding a reference from
        one lock-based critical section to another!  Similarly,
        using a reference outside of the critical section in which
        it was acquired is just as illegal as doing so with normal
        locking.

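        A legal variant keeps every use of the pointer within a single
        read-side critical section (an illustrative correction of the
        buggy code above):

                rcu_read_lock();
                p = rcu_dereference(head.next);
                x = p->address;
                y = p->data;    /* all uses of p stay inside one section */
                rcu_read_unlock();
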
        As with rcu_assign_pointer(), an important function of
        rcu_dereference() is to document which pointers are protected by
        RCU, in particular, flagging a pointer that is subject to changing
        at any time, including immediately after the rcu_dereference().
        And, again like rcu_assign_pointer(), rcu_dereference() is
        typically used indirectly, via the _rcu list-manipulation
        primitives, such as list_for_each_entry_rcu().

The following diagram shows how each API communicates among the
reader, updater, and reclaimer.


        rcu_assign_pointer()
                            +--------+
        +---------------------->| reader |---------+
        |                       +--------+         |
        |                           |              |
        |                           |              | Protect:
        |                           |              | rcu_read_lock()
        |                           |              | rcu_read_unlock()
        |        rcu_dereference()  |              |
    +---------+                     |              |
    | updater |<--------------------+              |
    +---------+                                     V
        |                                    +-----------+
        +----------------------------------->| reclaimer |
                                             +-----------+
          Defer:
          synchronize_rcu() & call_rcu()


The RCU infrastructure observes the time sequence of rcu_read_lock(),
rcu_read_unlock(), synchronize_rcu(), and call_rcu() invocations in
order to determine when (1) synchronize_rcu() invocations may return
to their callers and (2) call_rcu() callbacks may be invoked.  Efficient
implementations of the RCU infrastructure make heavy use of batching in
order to amortize their overhead over many uses of the corresponding APIs.

There are no fewer than three RCU mechanisms in the Linux kernel; the
diagram above shows the first one, which is by far the most commonly used.
The rcu_dereference() and rcu_assign_pointer() primitives are used for
all three mechanisms, but different defer and protect primitives are
used as follows:

   Defer                        Protect

a. synchronize_rcu()            rcu_read_lock() / rcu_read_unlock()
   call_rcu()                   rcu_dereference()

b. synchronize_rcu_bh()         rcu_read_lock_bh() / rcu_read_unlock_bh()
   call_rcu_bh()                rcu_dereference_bh()

c. synchronize_sched()          rcu_read_lock_sched() / rcu_read_unlock_sched()
   call_rcu_sched()             preempt_disable() / preempt_enable()
                                local_irq_save() / local_irq_restore()
                                hardirq enter / hardirq exit
                                NMI enter / NMI exit
                                rcu_dereference_sched()

These three mechanisms are used as follows:

a.  RCU applied to normal data structures.

b.  RCU applied to networking data structures that may be subjected
    to remote denial-of-service attacks.

c.  RCU applied to scheduler and interrupt/NMI-handler tasks.

Again, most uses will be of (a).  The (b) and (c) cases are important
for specialized uses, but are relatively uncommon.  A sketch of case (b)
appears below.

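For instance, a reader in case (b) might look like the following sketch,
assuming a hypothetical RCU-bh-protected pointer "gbl_net_data" and a
hypothetical non-blocking helper process_net_data():

        rcu_read_lock_bh();
        p = rcu_dereference_bh(gbl_net_data);
        if (p)
                process_net_data(p);    /* must not block */
        rcu_read_unlock_bh();
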

3. WHAT ARE SOME EXAMPLE USES OF CORE RCU API?

This section shows a simple use of the core RCU API to protect a
global pointer to a dynamically allocated structure.  More-typical
uses of RCU may be found in listRCU.txt, arrayRCU.txt, and NMI-RCU.txt.

        struct foo {
                int a;
                char b;
                long c;
        };
        DEFINE_SPINLOCK(foo_mutex);

        struct foo *gbl_foo;

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses synchronize_rcu() to ensure that any readers that might
         * have references to the old structure complete before freeing
         * the old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                synchronize_rcu();
                kfree(old_fp);
        }

        /*
         * Return the value of field "a" of the current gbl_foo
         * structure.  Use rcu_read_lock() and rcu_read_unlock()
         * to ensure that the structure does not get deleted out
         * from under us, and use rcu_dereference() to ensure that
         * we see the initialized version of the structure (important
         * for DEC Alpha and for people reading the code).
         */
        int foo_get_a(void)
        {
                int retval;

                rcu_read_lock();
                retval = rcu_dereference(gbl_foo)->a;
                rcu_read_unlock();
                return retval;
        }

So, to sum up:

o   Use rcu_read_lock() and rcu_read_unlock() to guard RCU
    read-side critical sections.

o   Within an RCU read-side critical section, use rcu_dereference()
    to dereference RCU-protected pointers.

o   Use some solid scheme (such as locks or semaphores) to
    keep concurrent updates from interfering with each other.

o   Use rcu_assign_pointer() to update an RCU-protected pointer.
    This primitive protects concurrent readers from the updater,
    -not- concurrent updates from each other!  You therefore still
    need to use locking (or something similar) to keep concurrent
    rcu_assign_pointer() primitives from interfering with each other.

o   Use synchronize_rcu() -after- removing a data element from an
    RCU-protected data structure, but -before- reclaiming/freeing
    the data element, in order to wait for the completion of all
    RCU read-side critical sections that might be referencing that
    data item.

See checklist.txt for additional rules to follow when using RCU.
And again, more-typical uses of RCU may be found in listRCU.txt,
arrayRCU.txt, and NMI-RCU.txt.

4. WHAT IF MY UPDATING THREAD CANNOT BLOCK?

In the example above, foo_update_a() blocks until a grace period elapses.
This is quite simple, but in some cases one cannot afford to wait so
long -- there might be other high-priority work to be done.

In such cases, one uses call_rcu() rather than synchronize_rcu().
The call_rcu() API is as follows:

        void call_rcu(struct rcu_head *head,
                      void (*func)(struct rcu_head *head));

This function invokes func(head) after a grace period has elapsed.
This invocation might happen from either softirq or process context,
so the function is not permitted to block.  The foo struct needs to
have an rcu_head structure added, perhaps as follows:

        struct foo {
                int a;
                char b;
                long c;
                struct rcu_head rcu;
        };

The foo_update_a() function might then be written as follows:

        /*
         * Create a new struct foo that is the same as the one currently
         * pointed to by gbl_foo, except that field "a" is replaced
         * with "new_a".  Points gbl_foo to the new structure, and
         * frees up the old structure after a grace period.
         *
         * Uses rcu_assign_pointer() to ensure that concurrent readers
         * see the initialized version of the new structure.
         *
         * Uses call_rcu() to ensure that any readers that might have
         * references to the old structure complete before freeing the
         * old structure.
         */
        void foo_update_a(int new_a)
        {
                struct foo *new_fp;
                struct foo *old_fp;

                new_fp = kmalloc(sizeof(*new_fp), GFP_KERNEL);
                spin_lock(&foo_mutex);
                old_fp = gbl_foo;
                *new_fp = *old_fp;
                new_fp->a = new_a;
                rcu_assign_pointer(gbl_foo, new_fp);
                spin_unlock(&foo_mutex);
                call_rcu(&old_fp->rcu, foo_reclaim);
        }

The foo_reclaim() function might appear as follows:

        void foo_reclaim(struct rcu_head *rp)
        {
                struct foo *fp = container_of(rp, struct foo, rcu);

                foo_cleanup(fp->a);

                kfree(fp);
        }

The container_of() primitive is a macro that, given a pointer into a
struct, the type of the struct, and the pointed-to field within the
struct, returns a pointer to the beginning of the struct.

The use of call_rcu() permits the caller of foo_update_a() to
immediately regain control, without needing to worry further about the
old version of the newly updated element.  It also clearly shows the
RCU distinction between updater, namely foo_update_a(), and reclaimer,
namely foo_reclaim().

The summary of advice is the same as for the previous section, except
that we are now using call_rcu() rather than synchronize_rcu():

o   Use call_rcu() -after- removing a data element from an
    RCU-protected data structure in order to register a callback
    function that will be invoked after the completion of all RCU
    read-side critical sections that might be referencing that
    data item.

If the callback for call_rcu() is not doing anything more than calling
kfree() on the structure, you can use kfree_rcu() instead of call_rcu()
to avoid having to write your own callback:

        kfree_rcu(old_fp, rcu);

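In the foo_update_a() example above, this would mean replacing the
call_rcu() line as follows (a sketch; kfree_rcu() takes the structure
pointer and the name of its rcu_head field):

        spin_unlock(&foo_mutex);
        kfree_rcu(old_fp, rcu);         /* frees old_fp after a grace period */
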
Again, see checklist.txt for additional rules governing the use of RCU.


5. WHAT ARE SOME SIMPLE IMPLEMENTATIONS OF RCU?

One of the nice things about RCU is that it has extremely simple "toy"
implementations that are a good first step towards understanding the
production-quality implementations in the Linux kernel.  This section
presents two such "toy" implementations of RCU, one that is implemented
in terms of familiar locking primitives, and another that more closely
resembles "classic" RCU.  Both are way too simple for real-world use,
lacking both functionality and performance.  However, they are useful
in getting a feel for how RCU works.  See kernel/rcupdate.c for a
production-quality implementation, and see:

        http://www.rdrop.com/users/paulmck/RCU

for papers describing the Linux kernel RCU implementation.  The OLS'01
and OLS'02 papers are a good introduction, and the dissertation provides
more details on the current implementation as of early 2004.


5A. "TOY" IMPLEMENTATION #1: LOCKING

This section presents a "toy" RCU implementation that is based on
familiar locking primitives.  Its overhead makes it a non-starter for
real-life use, as does its lack of scalability.  It is also unsuitable
for realtime use, since it allows scheduling latency to "bleed" from
one read-side critical section to another.

However, it is probably the easiest implementation to relate to, so is
a good starting point.

It is extremely simple:

        static DEFINE_RWLOCK(rcu_gp_mutex);

        void rcu_read_lock(void)
        {
                read_lock(&rcu_gp_mutex);
        }

        void rcu_read_unlock(void)
        {
                read_unlock(&rcu_gp_mutex);
        }

        void synchronize_rcu(void)
        {
                write_lock(&rcu_gp_mutex);
                write_unlock(&rcu_gp_mutex);
        }

[You can ignore rcu_assign_pointer() and rcu_dereference() without
missing much.  But here they are anyway.  And whatever you do, don't
forget about them when submitting patches making use of RCU!]

        #define rcu_assign_pointer(p, v) \
        ({ \
                smp_wmb(); \
                (p) = (v); \
        })

        #define rcu_dereference(p) \
        ({ \
                typeof(p) _________p1 = p; \
                smp_read_barrier_depends(); \
                (_________p1); \
        })


The rcu_read_lock() and rcu_read_unlock() primitives read-acquire
and release a global reader-writer lock.  The synchronize_rcu()
primitive write-acquires this same lock, then immediately releases
it.  This means that once synchronize_rcu() exits, all RCU read-side
critical sections that were in progress before synchronize_rcu() was
called are guaranteed to have completed -- there is no way that
synchronize_rcu() would have been able to write-acquire the lock
otherwise.

It is possible to nest rcu_read_lock(), since reader-writer locks may
be recursively acquired.  Note also that rcu_read_lock() is immune
from deadlock (an important property of RCU).  The reason for this is
that the only thing that can block rcu_read_lock() is a synchronize_rcu().
But synchronize_rcu() does not acquire any locks while holding rcu_gp_mutex,
so there can be no deadlock cycle.

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  How could this deadlock be avoided?


5B. "TOY" EXAMPLE #2: CLASSIC RCU

This section presents a "toy" RCU implementation that is based on
"classic RCU".  It is also short on performance (but only for updates) and
on features such as hotplug CPU and the ability to run in CONFIG_PREEMPT
kernels.  The definitions of rcu_dereference() and rcu_assign_pointer()
are the same as those shown in the preceding section, so they are omitted.

        void rcu_read_lock(void) { }

        void rcu_read_unlock(void) { }

        void synchronize_rcu(void)
        {
                int cpu;

                for_each_possible_cpu(cpu)
                        run_on(cpu);
        }

Note that rcu_read_lock() and rcu_read_unlock() do absolutely nothing.
This is the great strength of classic RCU in a non-preemptive kernel:
read-side overhead is precisely zero, at least on non-Alpha CPUs.
And there is absolutely no way that rcu_read_lock() can possibly
participate in a deadlock cycle!

The implementation of synchronize_rcu() simply schedules itself on each
CPU in turn.  The run_on() primitive can be implemented straightforwardly
in terms of the sched_setaffinity() primitive.  Of course, a somewhat less
"toy" implementation would restore the affinity upon completion rather
than just leaving all tasks running on the last CPU, but when I said
"toy", I meant -toy-!

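For instance, run_on() might be sketched as follows (an illustrative,
error-handling-free sketch that simply pins the current task to the
given CPU):

        static void run_on(int cpu)
        {
                /* Migrate the current task onto "cpu"; the resulting
                 * context switch on that CPU implies that all of its
                 * prior read-side critical sections have finished. */
                sched_setaffinity(current->pid, cpumask_of(cpu));
        }
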
So how the heck is this supposed to work???

Remember that it is illegal to block while in an RCU read-side critical
section.  Therefore, if a given CPU executes a context switch, we know
that it must have completed all preceding RCU read-side critical sections.
Once -all- CPUs have executed a context switch, then -all- preceding
RCU read-side critical sections will have completed.

So, suppose that we remove a data item from its structure and then invoke
synchronize_rcu().  Once synchronize_rcu() returns, we are guaranteed
that there are no RCU read-side critical sections holding a reference
to that data item, so we can safely reclaim it.

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???


6. ANALOGY WITH READER-WRITER LOCKING

Although RCU can be used in many different ways, a very common use of
RCU is analogous to reader-writer locking.  The following unified
diff shows how closely related RCU and reader-writer locking can be.

        @@ -13,15 +14,15 @@
                struct list_head *lp;
                struct el *p;

        -       read_lock();
        -       list_for_each_entry(p, head, lp) {
        +       rcu_read_lock();
        +       list_for_each_entry_rcu(p, head, lp) {
                        if (p->key == key) {
                                *result = p->data;
        -                       read_unlock();
        +                       rcu_read_unlock();
                                return 1;
                        }
                }
        -       read_unlock();
        +       rcu_read_unlock();
                return 0;
        }

        @@ -29,15 +30,16 @@
        {
                struct el *p;

        -       write_lock(&listmutex);
        +       spin_lock(&listmutex);
                list_for_each_entry(p, head, lp) {
                        if (p->key == key) {
        -                       list_del(&p->list);
        -                       write_unlock(&listmutex);
        +                       list_del_rcu(&p->list);
        +                       spin_unlock(&listmutex);
        +                       synchronize_rcu();
                                kfree(p);
                                return 1;
                        }
                }
        -       write_unlock(&listmutex);
        +       spin_unlock(&listmutex);
                return 0;
        }

Or, for those who prefer a side-by-side listing:

 1 struct el {                          1 struct el {
 2   struct list_head list;             2   struct list_head list;
 3   long key;                          3   long key;
 4   spinlock_t mutex;                  4   spinlock_t mutex;
 5   int data;                          5   int data;
 6   /* Other data fields */            6   /* Other data fields */
 7 };                                   7 };
 8 spinlock_t listmutex;                8 spinlock_t listmutex;
 9 struct el head;                      9 struct el head;

 1 int search(long key, int *result)    1 int search(long key, int *result)
 2 {                                    2 {
 3   struct list_head *lp;              3   struct list_head *lp;
 4   struct el *p;                      4   struct el *p;
 5                                      5
 6   read_lock();                       6   rcu_read_lock();
 7   list_for_each_entry(p, head, lp) { 7   list_for_each_entry_rcu(p, head, lp) {
 8     if (p->key == key) {             8     if (p->key == key) {
 9       *result = p->data;             9       *result = p->data;
10       read_unlock();                10       rcu_read_unlock();
11       return 1;                     11       return 1;
12     }                               12     }
13   }                                 13   }
14   read_unlock();                    14   rcu_read_unlock();
15   return 0;                         15   return 0;
16 }                                   16 }

 1 int delete(long key)                 1 int delete(long key)
 2 {                                    2 {
 3   struct el *p;                      3   struct el *p;
 4                                      4
 5   write_lock(&listmutex);            5   spin_lock(&listmutex);
 6   list_for_each_entry(p, head, lp) { 6   list_for_each_entry(p, head, lp) {
 7     if (p->key == key) {             7     if (p->key == key) {
 8       list_del(&p->list);            8       list_del_rcu(&p->list);
 9       write_unlock(&listmutex);      9       spin_unlock(&listmutex);
                                       10       synchronize_rcu();
10       kfree(p);                     11       kfree(p);
11       return 1;                     12       return 1;
12     }                               13     }
13   }                                 14   }
14   write_unlock(&listmutex);         15   spin_unlock(&listmutex);
15   return 0;                         16   return 0;
16 }                                   17 }

Either way, the differences are quite small.  Read-side locking moves
to rcu_read_lock() and rcu_read_unlock(), update-side locking moves from
a reader-writer lock to a simple spinlock, and a synchronize_rcu()
precedes the kfree().

However, there is one potential catch: the read-side and update-side
critical sections can now run concurrently.  In many cases, this will
not be a problem, but it is necessary to check carefully regardless.
For example, if multiple independent list updates must be seen as
a single atomic update, converting to RCU will require special care.

Also, the presence of synchronize_rcu() means that the RCU version of
delete() can now block.  If this is a problem, there is a callback-based
mechanism that never blocks, namely call_rcu() or kfree_rcu(), that can
be used in place of synchronize_rcu().


7. FULL LIST OF RCU APIs

The RCU APIs are documented in docbook-format header comments in the
Linux-kernel source code, but it helps to have a full list of the
APIs, since there does not appear to be a way to categorize them
in docbook.  Here is the list, by category.

RCU list traversal:

        list_entry_rcu
        list_first_entry_rcu
        list_next_rcu
        list_for_each_entry_rcu
        list_for_each_entry_continue_rcu
        hlist_first_rcu
        hlist_next_rcu
        hlist_pprev_rcu
        hlist_for_each_entry_rcu
        hlist_for_each_entry_rcu_bh
        hlist_for_each_entry_continue_rcu
        hlist_for_each_entry_continue_rcu_bh
        hlist_nulls_first_rcu
        hlist_nulls_for_each_entry_rcu
        hlist_bl_first_rcu
        hlist_bl_for_each_entry_rcu

RCU pointer/list update:

        rcu_assign_pointer
        list_add_rcu
        list_add_tail_rcu
        list_del_rcu
        list_replace_rcu
        hlist_add_behind_rcu
        hlist_add_before_rcu
        hlist_add_head_rcu
        hlist_del_rcu
        hlist_del_init_rcu
        hlist_replace_rcu
        list_splice_init_rcu
        hlist_nulls_del_init_rcu
        hlist_nulls_del_rcu
        hlist_nulls_add_head_rcu
        hlist_bl_add_head_rcu
        hlist_bl_del_init_rcu
        hlist_bl_del_rcu
        hlist_bl_set_first_rcu

RCU:    Critical sections       Grace period                Barrier

        rcu_read_lock           synchronize_net             rcu_barrier
        rcu_read_unlock         synchronize_rcu
        rcu_dereference         synchronize_rcu_expedited
        rcu_read_lock_held      call_rcu
        rcu_dereference_check   kfree_rcu
        rcu_dereference_protected

bh:     Critical sections       Grace period                Barrier

        rcu_read_lock_bh        call_rcu_bh                 rcu_barrier_bh
        rcu_read_unlock_bh      synchronize_rcu_bh
        rcu_dereference_bh      synchronize_rcu_bh_expedited
        rcu_dereference_bh_check
        rcu_dereference_bh_protected
        rcu_read_lock_bh_held

sched:  Critical sections       Grace period                Barrier

        rcu_read_lock_sched     synchronize_sched           rcu_barrier_sched
        rcu_read_unlock_sched   call_rcu_sched
        [preempt_disable]       synchronize_sched_expedited
        [and friends]
        rcu_read_lock_sched_notrace
        rcu_read_unlock_sched_notrace
        rcu_dereference_sched
        rcu_dereference_sched_check
        rcu_dereference_sched_protected
        rcu_read_lock_sched_held


SRCU:   Critical sections       Grace period                Barrier

        srcu_read_lock          synchronize_srcu            srcu_barrier
        srcu_read_unlock        call_srcu
        srcu_dereference        synchronize_srcu_expedited
        srcu_dereference_check
        srcu_read_lock_held

SRCU: Initialization/cleanup
        init_srcu_struct
        cleanup_srcu_struct

All: lockdep-checked RCU-protected pointer access

        rcu_access_index
        rcu_access_pointer
        rcu_dereference_index_check
        rcu_dereference_raw
        rcu_lockdep_assert
        rcu_sleep_check
        RCU_NONIDLE

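For example, rcu_dereference_check() lets code that might run either
within an RCU read-side critical section or while holding the update-side
lock state that assumption explicitly, so that lockdep can check it
(a sketch with a hypothetical pointer "gptr" and lock "gptr_lock"):

        p = rcu_dereference_check(gptr,
                                  lockdep_is_held(&gptr_lock));
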
See the comment headers in the source code (or the docbook generated
from them) for more information.

However, given that there are no fewer than four families of RCU APIs
in the Linux kernel, how do you choose which one to use?  The following
list can be helpful:

a.  Will readers need to block?  If so, you need SRCU.

b.  What about the -rt patchset?  If readers would need to block
    in a non-rt kernel, you need SRCU.  If readers would block
    in a -rt kernel, but not in a non-rt kernel, SRCU is not
    necessary.

c.  Do you need to treat NMI handlers, hardirq handlers,
    and code segments with preemption disabled (whether
    via preempt_disable(), local_irq_save(), local_bh_disable(),
    or some other mechanism) as if they were explicit RCU readers?
    If so, RCU-sched is the only choice that will work for you.

d.  Do you need RCU grace periods to complete even in the face
    of softirq monopolization of one or more of the CPUs?  For
    example, is your code subject to network-based denial-of-service
    attacks?  If so, you need RCU-bh.

e.  Is your workload too update-intensive for normal use of
    RCU, but inappropriate for other synchronization mechanisms?
    If so, consider SLAB_DESTROY_BY_RCU.  But please be careful!

f.  Do you need read-side critical sections that are respected
    even though they are in the middle of the idle loop, during
    user-mode execution, or on an offlined CPU?  If so, SRCU is the
    only choice that will work for you.

g.  Otherwise, use RCU.

Of course, this all assumes that you have determined that RCU is in fact
the right tool for your job.


8. ANSWERS TO QUICK QUIZZES

Quick Quiz #1:  Why is this argument naive?  How could a deadlock
                occur when using this algorithm in a real-world Linux
                kernel?  [Referring to the lock-based "toy" RCU
                algorithm.]

Answer:         Consider the following sequence of events:

                1.  CPU 0 acquires some unrelated lock, call it
                    "problematic_lock", disabling irq via
                    spin_lock_irqsave().

                2.  CPU 1 enters synchronize_rcu(), write-acquiring
                    rcu_gp_mutex.

                3.  CPU 0 enters rcu_read_lock(), but must wait
                    because CPU 1 holds rcu_gp_mutex.

                4.  CPU 1 is interrupted, and the irq handler
                    attempts to acquire problematic_lock.

                The system is now deadlocked.

                One way to avoid this deadlock is to use an approach like
                that of CONFIG_PREEMPT_RT, where all normal spinlocks
                become blocking locks, and all irq handlers execute in
                the context of special tasks.  In this case, in step 4
                above, the irq handler would block, allowing CPU 1 to
                release rcu_gp_mutex, avoiding the deadlock.

                Even in the absence of deadlock, this RCU implementation
                allows latency to "bleed" from readers to other
                readers through synchronize_rcu().  To see this,
                consider task A in an RCU read-side critical section
                (thus read-holding rcu_gp_mutex), task B blocked
                attempting to write-acquire rcu_gp_mutex, and
                task C blocked in rcu_read_lock() attempting to
                read-acquire rcu_gp_mutex.  Task A's RCU read-side
                latency is holding up task C, albeit indirectly via
                task B.

                Realtime RCU implementations therefore use a counter-based
                approach where tasks in RCU read-side critical sections
                cannot be blocked by tasks executing synchronize_rcu().

Quick Quiz #2:  Give an example where Classic RCU's read-side
                overhead is -negative-.

Answer:         Imagine a single-CPU system with a non-CONFIG_PREEMPT
                kernel where a routing table is used by process-context
                code, but can be updated by irq-context code (for example,
                by an "ICMP REDIRECT" packet).  The usual way of handling
                this would be to have the process-context code disable
                interrupts while searching the routing table.  Use of
                RCU allows such interrupt-disabling to be dispensed with.
                Thus, without RCU, you pay the cost of disabling interrupts,
                and with RCU you don't.

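                As an illustration (a sketch using hypothetical helpers
                search_route() and use_route(), and a hypothetical
                RCU-protected pointer "route_table"), the two lookup
                approaches might look like this:

                        /* Without RCU: exclude irq-context updaters. */
                        local_irq_save(flags);
                        rt = search_route(route_table, dst);
                        use_route(rt);
                        local_irq_restore(flags);

                        /* With RCU on a non-CONFIG_PREEMPT uniprocessor,
                         * the read-side primitives generate no code. */
                        rcu_read_lock();
                        rt = search_route(rcu_dereference(route_table), dst);
                        use_route(rt);
                        rcu_read_unlock();
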
                One can argue that the overhead of RCU in this
                case is negative with respect to the single-CPU
                interrupt-disabling approach.  Others might argue that
                the overhead of RCU is merely zero, and that replacing
                the positive overhead of the interrupt-disabling scheme
                with the zero-overhead RCU scheme does not constitute
                negative overhead.

                In real life, of course, things are more complex.  But
                even the theoretical possibility of negative overhead for
                a synchronization primitive is a bit unexpected.  ;-)

Quick Quiz #3:  If it is illegal to block in an RCU read-side
                critical section, what the heck do you do in
                PREEMPT_RT, where normal spinlocks can block???

Answer:         Just as PREEMPT_RT permits preemption of spinlock
                critical sections, it permits preemption of RCU
                read-side critical sections.  It also permits
                spinlocks blocking while in RCU read-side critical
                sections.

                Why the apparent inconsistency?  Because it is
                possible to use priority boosting to keep the RCU
                grace periods short if need be (for example, if running
                short of memory).  In contrast, if blocking waiting
                for (say) network reception, there is no way to know
                what should be boosted.  Especially given that the
                process we need to boost might well be a human being
                who just went out for a pizza or something.  And although
                a computer-operated cattle prod might arouse serious
                interest, it might also provoke serious objections.
                Besides, how does the computer know what pizza parlor
                the human being went to???


ACKNOWLEDGEMENTS

My thanks to the people who helped make this human-readable, including
Jon Walpole, Josh Triplett, Serge Hallyn, Suzanne Wood, and Alan Stern.


For more information, see http://www.rdrop.com/users/paulmck/RCU.