Review Checklist for RCU Patches


This document contains a checklist for producing and reviewing patches
that make use of RCU.  Violating any of the rules listed below will
result in the same sorts of problems that leaving out a locking primitive
would cause.  This list is based on experiences reviewing such patches
over a rather long period of time, but improvements are always welcome!

0.	Is RCU being applied to a read-mostly situation?  If the data
	structure is updated more than about 10% of the time, then you
	should strongly consider some other approach, unless detailed
	performance measurements show that RCU is nonetheless the right
	tool for the job.  Yes, RCU does reduce read-side overhead by
	increasing write-side overhead, which is exactly why normal uses
	of RCU will do much more reading than updating.

	Another exception is where performance is not an issue, and RCU
	provides a simpler implementation.  An example of this situation
	is the dynamic NMI code in the Linux 2.6 kernel, at least on
	architectures where NMIs are rare.

	Yet another exception is where the low real-time latency of RCU's
	read-side primitives is critically important.

	One final exception is where RCU readers are used to prevent
	the ABA problem (https://en.wikipedia.org/wiki/ABA_problem)
	for lockless updates.  This does result in the mildly
	counter-intuitive situation where rcu_read_lock() and
	rcu_read_unlock() are used to protect updates.  However, this
	approach provides the same potential simplifications that garbage
	collectors do.

1.	Does the update code have proper mutual exclusion?

	RCU does allow -readers- to run (almost) naked, but -writers- must
	still use some sort of mutual exclusion, such as:

	a.	locking,
	b.	atomic operations, or
	c.	restricting updates to a single task.

	If you choose #b, be prepared to describe how you have handled
	memory barriers on weakly ordered machines (pretty much all of
	them -- even x86 allows later loads to be reordered to precede
	earlier stores), and be prepared to explain why this added
	complexity is worthwhile.  If you choose #c, be prepared to
	explain how this single task does not become a major bottleneck
	on big multiprocessor machines (for example, if the task is
	updating information relating to itself that other tasks can
	read, there by definition can be no bottleneck).  Note that the
	definition of "large" has changed significantly:  Eight CPUs was
	"large" in the year 2000, but a hundred CPUs was unremarkable
	in 2017.  A minimal sketch of option #a appears below.

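	For example, here is a minimal sketch of option #a.  The global
	pointer gp, the lock my_lock, and struct foo are all hypothetical
	names used only for illustration, not a drop-in implementation:

		struct foo {
			int a;
		};

		static DEFINE_SPINLOCK(my_lock);
		static struct foo __rcu *gp;

		/* Option #a: updaters serialize with my_lock. */
		int update_foo(int new_a)
		{
			struct foo *newp, *oldp;

			newp = kmalloc(sizeof(*newp), GFP_KERNEL);
			if (!newp)
				return -ENOMEM;
			newp->a = new_a;
			spin_lock(&my_lock);	/* Exclude other updaters. */
			oldp = rcu_dereference_protected(gp,
					lockdep_is_held(&my_lock));
			rcu_assign_pointer(gp, newp);	/* Publish. */
			spin_unlock(&my_lock);
			if (oldp) {
				synchronize_rcu(); /* Wait for pre-existing readers. */
				kfree(oldp);
			}
			return 0;
		}
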
2.	Do the RCU read-side critical sections make proper use of
	rcu_read_lock() and friends?  These primitives are needed
	to prevent grace periods from ending prematurely, which
	could result in data being unceremoniously freed out from
	under your read-side code, which can greatly increase the
	actuarial risk of your kernel.

	As a rough rule of thumb, any dereference of an RCU-protected
	pointer must be covered by rcu_read_lock(), rcu_read_lock_bh(),
	rcu_read_lock_sched(), or by the appropriate update-side lock.
	Disabling of preemption can serve as rcu_read_lock_sched(), but
	is less readable.

	Letting RCU-protected pointers "leak" out of an RCU read-side
	critical section is every bit as bad as letting them leak out
	from under a lock.  Unless, of course, you have arranged some
	other means of protection, such as a lock or a reference count
	-before- letting them out of the RCU read-side critical section.

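	As an illustration, a matching reader for the hypothetical gp
	pointer sketched under item #1 might look like this; note that
	the pointer is never used after rcu_read_unlock():

		int read_foo_a(void)
		{
			struct foo *p;
			int ret = -1;

			rcu_read_lock();	/* Begin critical section. */
			p = rcu_dereference(gp);
			if (p)
				ret = p->a;	/* Use p only while locked. */
			rcu_read_unlock();	/* p must not "leak" past here. */
			return ret;
		}
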
3.	Does the update code tolerate concurrent accesses?

	The whole point of RCU is to permit readers to run without
	any locks or atomic operations.  This means that readers will
	be running while updates are in progress.  There are a number
	of ways to handle this concurrency, depending on the situation:

	a.	Use the RCU variants of the list and hlist update
		primitives to add, remove, and replace elements on
		an RCU-protected list.  Alternatively, use the other
		RCU-protected data structures that have been added to
		the Linux kernel.

		This is almost always the best approach.

	b.	Proceed as in (a) above, but also maintain per-element
		locks (that are acquired by both readers and writers)
		that guard per-element state.  Of course, fields that
		the readers refrain from accessing can be guarded by
		some other lock acquired only by updaters, if desired.

		This works quite well, also (see the sketch following
		this list).

	c.	Make updates appear atomic to readers.  For example,
		pointer updates to properly aligned fields will
		appear atomic, as will individual atomic primitives.
		Sequences of operations performed under a lock will -not-
		appear to be atomic to RCU readers, nor will sequences
		of multiple atomic primitives.

		This can work, but is starting to get a bit tricky.

	d.	Carefully order the updates and the reads so that
		readers see valid data at all phases of the update.
		This is often more difficult than it sounds, especially
		given modern CPUs' tendency to reorder memory references.
		One must usually liberally sprinkle memory barriers
		(smp_wmb(), smp_rmb(), smp_mb()) through the code,
		making it difficult to understand and to test.

		It is usually better to group the changing data into
		a separate structure, so that the change may be made
		to appear atomic by updating a pointer to reference
		a new structure containing updated values.

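	Here is a hedged sketch of approach (b) above, using a
	hypothetical struct myelem on an RCU-protected list; the
	per-element ->lock guards only ->state, while ->key is
	immutable after publication:

		struct myelem {
			struct list_head list;
			struct rcu_head rcu;	/* For deferred freeing. */
			spinlock_t lock;	/* Guards ->state. */
			int key;		/* Immutable once published. */
			int state;		/* Protected by ->lock. */
		};

		/* Returns ->state for the given key, or -ENOENT. */
		int get_state(struct list_head *head, int key)
		{
			struct myelem *p;
			int ret = -ENOENT;

			rcu_read_lock();	/* Protects the traversal. */
			list_for_each_entry_rcu(p, head, list) {
				if (p->key == key) {
					spin_lock(&p->lock);
					ret = p->state;
					spin_unlock(&p->lock);
					break;
				}
			}
			rcu_read_unlock();
			return ret;
		}
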
4.	Weakly ordered CPUs pose special challenges.  Almost all CPUs
	are weakly ordered -- even x86 CPUs allow later loads to be
	reordered to precede earlier stores.  RCU code must take all of
	the following measures to prevent memory-corruption problems:

	a.	Readers must maintain proper ordering of their memory
		accesses.  The rcu_dereference() primitive ensures that
		the CPU picks up the pointer before it picks up the data
		that the pointer points to.  This really is necessary
		on Alpha CPUs.  If you don't believe me, see:

			http://www.openvms.compaq.com/wizard/wiz_2637.html

		The rcu_dereference() primitive is also an excellent
		documentation aid, letting the person reading the
		code know exactly which pointers are protected by RCU.
		Please note that compilers can also reorder code, and
		they are becoming increasingly aggressive about doing
		just that.  The rcu_dereference() primitive therefore also
		prevents destructive compiler optimizations.  However,
		with a bit of devious creativity, it is possible to
		mishandle the return value from rcu_dereference().
		Please see rcu_dereference.txt in this directory for
		more information.

		The rcu_dereference() primitive is used by the
		various "_rcu()" list-traversal primitives, such
		as list_for_each_entry_rcu().  Note that it is
		perfectly legal (if redundant) for update-side code to
		use rcu_dereference() and the "_rcu()" list-traversal
		primitives.  This is particularly useful in code that
		is common to readers and updaters.  However, lockdep
		will complain if you invoke rcu_dereference() outside
		of an RCU read-side critical section.  See lockdep.txt
		to learn what to do about this.

		Of course, neither rcu_dereference() nor the "_rcu()"
		list-traversal primitives can substitute for a good
		concurrency design coordinating among multiple updaters.

	b.	If the list macros are being used, the list_add_tail_rcu()
		and list_add_rcu() primitives must be used in order
		to prevent weakly ordered machines from misordering
		structure initialization and pointer planting.
		Similarly, if the hlist macros are being used, the
		hlist_add_head_rcu() primitive is required.

	c.	If the list macros are being used, the list_del_rcu()
		primitive must be used to keep list_del()'s pointer
		poisoning from inflicting toxic effects on concurrent
		readers.  Similarly, if the hlist macros are being used,
		the hlist_del_rcu() primitive is required.

		The list_replace_rcu() and hlist_replace_rcu() primitives
		may be used to replace an old structure with a new one
		in their respective types of RCU-protected lists.

	d.	Rules similar to (4b) and (4c) apply to the "hlist_nulls"
		type of RCU-protected linked lists.

	e.	Updates must ensure that initialization of a given
		structure happens before pointers to that structure are
		publicized.  Use the rcu_assign_pointer() primitive
		when publicizing a pointer to a structure that can
		be traversed by an RCU read-side critical section.

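	Pulling rules (4b)-(4e) together, here is a hedged sketch of
	insertion and removal for the hypothetical struct myelem from
	item #3; myelem_free_cb() is the callback sketched under item
	#5 below, and list_lock is assumed to guard all list updates:

		int add_elem(struct list_head *head, spinlock_t *list_lock,
			     int key)
		{
			struct myelem *p = kmalloc(sizeof(*p), GFP_KERNEL);

			if (!p)
				return -ENOMEM;
			spin_lock_init(&p->lock);
			p->key = key;	/* Initialize before publishing! */
			p->state = 0;
			spin_lock(list_lock);
			list_add_rcu(&p->list, head); /* Orders init first. */
			spin_unlock(list_lock);
			return 0;
		}

		void del_elem(struct myelem *p, spinlock_t *list_lock)
		{
			spin_lock(list_lock);
			list_del_rcu(&p->list);	/* No pointer poisoning. */
			spin_unlock(list_lock);
			call_rcu(&p->rcu, myelem_free_cb); /* Deferred free. */
		}
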
5.	If call_rcu(), or a related primitive such as call_rcu_bh(),
	call_rcu_sched(), or call_srcu() is used, the callback function
	will be called from softirq context.  In particular, it cannot
	block.

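	For example, here is the hypothetical myelem_free_cb() used in
	the item #4 sketch; because it runs from softirq context, it
	confines itself to non-blocking operations:

		static void myelem_free_cb(struct rcu_head *rhp)
		{
			struct myelem *p =
				container_of(rhp, struct myelem, rcu);

			kfree(p); /* kfree() is legal in softirq context. */
		}
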
6.	Since synchronize_rcu() can block, it cannot be called from
	any sort of irq context.  The same rule applies for
	synchronize_rcu_bh(), synchronize_sched(), synchronize_srcu(),
	synchronize_rcu_expedited(), synchronize_rcu_bh_expedited(),
	synchronize_sched_expedited(), and synchronize_srcu_expedited().

	The expedited forms of these primitives have the same semantics
	as the non-expedited forms, but expediting is both expensive and
	(with the exception of synchronize_srcu_expedited()) unfriendly
	to real-time workloads.  Use of the expedited primitives should
	be restricted to rare configuration-change operations that would
	not normally be undertaken while a real-time workload is running.
	However, real-time workloads can use the rcupdate.rcu_normal
	kernel boot parameter to completely disable expedited grace
	periods, though this might have performance implications.

	In particular, if you find yourself invoking one of the expedited
	primitives repeatedly in a loop, please do everyone a favor:
	Restructure your code so that it batches the updates, allowing
	a single non-expedited primitive to cover the entire batch.
	This will very likely be faster than the loop containing the
	expedited primitive, and will be much easier on the rest of
	the system, especially on any real-time workloads running there.

7.	If the updater uses call_rcu() or synchronize_rcu(), then the
	corresponding readers must use rcu_read_lock() and
	rcu_read_unlock().  If the updater uses call_rcu_bh() or
	synchronize_rcu_bh(), then the corresponding readers must
	use rcu_read_lock_bh() and rcu_read_unlock_bh().  If the
	updater uses call_rcu_sched() or synchronize_sched(), then
	the corresponding readers must disable preemption, possibly
	by calling rcu_read_lock_sched() and rcu_read_unlock_sched().
	If the updater uses synchronize_srcu() or call_srcu(), then
	the corresponding readers must use srcu_read_lock() and
	srcu_read_unlock(), and with the same srcu_struct.  The rules
	for the expedited primitives are the same as for their
	non-expedited counterparts.  Mixing things up will result in
	confusion and broken kernels.

	One exception to this rule: rcu_read_lock() and rcu_read_unlock()
	may be substituted for rcu_read_lock_bh() and rcu_read_unlock_bh()
	in cases where local bottom halves are already known to be
	disabled, for example, in irq or softirq context.  Commenting
	such cases is a must, of course!  And the jury is still out on
	whether the increased speed is worth it.

8.	Although synchronize_rcu() is slower than is call_rcu(), it
	usually results in simpler code.  So, unless update performance is
	critically important, the updaters cannot block, or the latency of
	synchronize_rcu() is visible from userspace, synchronize_rcu()
	should be used in preference to call_rcu().  Furthermore,
	kfree_rcu() usually results in even simpler code than does
	synchronize_rcu() without synchronize_rcu()'s multi-millisecond
	latency.  So please take advantage of kfree_rcu()'s "fire and
	forget" memory-freeing capabilities where it applies.

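	For example, because the hypothetical myelem_free_cb() from the
	item #5 sketch does nothing but kfree() the element, the callback
	can be dispensed with entirely:

		void del_elem_simple(struct myelem *p, spinlock_t *list_lock)
		{
			spin_lock(list_lock);
			list_del_rcu(&p->list);
			spin_unlock(list_lock);
			kfree_rcu(p, rcu); /* Names the rcu_head field. */
		}
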
	An especially important property of the synchronize_rcu()
	primitive is that it automatically self-limits: if grace periods
	are delayed for whatever reason, then the synchronize_rcu()
	primitive will correspondingly delay updates.  In contrast,
	code using call_rcu() should explicitly limit update rate in
	cases where grace periods are delayed, as failing to do so can
	result in excessive realtime latencies or even OOM conditions.

	Ways of gaining this self-limiting property when using call_rcu()
	include:

	a.	Keeping a count of the number of data-structure elements
		used by the RCU-protected data structure, including
		those waiting for a grace period to elapse.  Enforce a
		limit on this number, stalling updates as needed to allow
		previously deferred frees to complete.  Alternatively,
		limit only the number awaiting deferred free rather than
		the total number of elements.

		One way to stall the updates is to acquire the update-side
		mutex.  (Don't try this with a spinlock -- other CPUs
		spinning on the lock could prevent the grace period
		from ever ending.)  Another way to stall the updates
		is for the updates to use a wrapper function around
		the memory allocator, so that this wrapper function
		simulates OOM when there is too much memory awaiting an
		RCU grace period.  There are of course many other
		variations on this theme, one of which is sketched
		following this list.

	b.	Limiting update rate.  For example, if updates occur only
		once per hour, then no explicit rate limiting is
		required, unless your system is already badly broken.
		Older versions of the dcache subsystem take this approach,
		guarding updates with a global lock, limiting their rate.

	c.	Trusted update -- if updates can only be done manually by
		superuser or some other trusted user, then it might not
		be necessary to automatically limit them.  The theory
		here is that superuser already has lots of ways to crash
		the machine.

	d.	Use call_rcu_bh() rather than call_rcu(), in order to take
		advantage of call_rcu_bh()'s faster grace periods.  (This
		is only a partial solution, though.)

	e.	Periodically invoke synchronize_rcu(), permitting a limited
		number of updates per grace period.

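	Here is a hedged sketch combining (a) and (e): a count of
	elements awaiting a grace period, with updates stalled via
	synchronize_rcu() when the count grows too large.  The names
	and the limit are illustrative only, and callers of
	defer_free() must be able to block:

		#define NR_DEFERRED_MAX	1000
		static atomic_t nr_deferred = ATOMIC_INIT(0);

		static void counted_free_cb(struct rcu_head *rhp)
		{
			kfree(container_of(rhp, struct myelem, rcu));
			atomic_dec(&nr_deferred);	/* Free completed. */
		}

		void defer_free(struct myelem *p)
		{
			/* Stall until deferred frees drain below the
			   limit; callbacks run soon after each grace
			   period ends. */
			while (atomic_read(&nr_deferred) >= NR_DEFERRED_MAX)
				synchronize_rcu();
			atomic_inc(&nr_deferred);
			call_rcu(&p->rcu, counted_free_cb);
		}
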
	The same cautions apply to call_rcu_bh(), call_rcu_sched(),
	call_srcu(), and kfree_rcu().

	Note that although these primitives do take action to avoid
	memory exhaustion when any given CPU has too many callbacks,
	a determined user could still exhaust memory.  This is especially
	the case if a system with a large number of CPUs has been
	configured to offload all of its RCU callbacks onto a single
	CPU, or if the system has relatively little free memory.

9.	All RCU list-traversal primitives, which include
	rcu_dereference(), list_for_each_entry_rcu(), and
	list_for_each_safe_rcu(), must either be within an RCU read-side
	critical section or be protected by appropriate update-side
	locks.  RCU read-side critical sections are delimited by
	rcu_read_lock() and rcu_read_unlock(), or by similar primitives
	such as rcu_read_lock_bh() and rcu_read_unlock_bh(), in which
	case the matching flavor of rcu_dereference() must be used to
	keep lockdep happy, in this case rcu_dereference_bh().

	The reason that it is permissible to use RCU list-traversal
	primitives when the update-side lock is held is that doing so
	can be quite helpful in reducing code bloat when common code is
	shared between readers and updaters.  Additional primitives
	are provided for this case, as discussed in lockdep.txt; one
	of them is sketched below.

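	For example, update-side code holding the hypothetical my_lock
	from the item #1 sketch can use rcu_dereference_protected(),
	which tells lockdep exactly why the access is safe and costs
	nothing at runtime:

		struct foo *get_foo_locked(void)
		{
			/* Legal without rcu_read_lock(): my_lock
			   excludes all updaters. */
			return rcu_dereference_protected(gp,
					lockdep_is_held(&my_lock));
		}
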
10.	Conversely, if you are in an RCU read-side critical section,
	and you don't hold the appropriate update-side lock, you -must-
	use the "_rcu()" variants of the list macros.  Failing to do so
	will break Alpha, cause aggressive compilers to generate bad code,
	and confuse people trying to read your code.

11.	Note that synchronize_rcu() -only- guarantees to wait until
	all currently executing rcu_read_lock()-protected RCU read-side
	critical sections complete.  It does -not- necessarily guarantee
	that all currently running interrupts, NMIs, preempt_disable()
	code, or idle loops will complete.  Therefore, if your
	read-side critical sections are protected by something other
	than rcu_read_lock(), do -not- use synchronize_rcu().

	Similarly, disabling preemption is not an acceptable substitute
	for rcu_read_lock().  Code that attempts to use preemption
	disabling where it should be using rcu_read_lock() will break
	in CONFIG_PREEMPT=y kernel builds.

	If you want to wait for interrupt handlers, NMI handlers, and
	code under the influence of preempt_disable(), you instead
	need to use synchronize_irq() or synchronize_sched().

	This same limitation also applies to synchronize_rcu_bh()
	and synchronize_srcu(), as well as to the asynchronous and
	expedited forms of the three primitives, namely call_rcu(),
	call_rcu_bh(), call_srcu(), synchronize_rcu_expedited(),
	synchronize_rcu_bh_expedited(), and synchronize_srcu_expedited().

12.	Any lock acquired by an RCU callback must be acquired elsewhere
	with softirq disabled, e.g., via spin_lock_irqsave(),
	spin_lock_bh(), etc.  Failing to disable softirq on a given
	acquisition of that lock will result in deadlock as soon as
	the RCU softirq handler happens to run your RCU callback while
	interrupting that acquisition's critical section.

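	Here is a hedged sketch of this rule; cleanup_lock and both
	functions are illustrative names only:

		static DEFINE_SPINLOCK(cleanup_lock);

		/* Runs in softirq context, so plain spin_lock() is safe. */
		static void my_cleanup_cb(struct rcu_head *rhp)
		{
			spin_lock(&cleanup_lock);
			/* ... final bookkeeping ... */
			spin_unlock(&cleanup_lock);
			kfree(container_of(rhp, struct myelem, rcu));
		}

		void touch_bookkeeping(void)
		{
			/* _bh: otherwise my_cleanup_cb() could interrupt
			   this critical section and deadlock on the lock. */
			spin_lock_bh(&cleanup_lock);
			/* ... */
			spin_unlock_bh(&cleanup_lock);
		}
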
13.	RCU callbacks can be and are executed in parallel.  In many
	cases, the callback code is simply a wrapper around kfree(), so
	that this is not an issue (or, more accurately, to the extent
	that it is an issue, the memory-allocator locking handles it).
	However, if the callbacks do manipulate a shared data structure,
	they must use whatever locking or other synchronization is
	required to safely access and/or modify that data structure.

	RCU callbacks are -usually- executed on the same CPU that executed
	the corresponding call_rcu(), call_rcu_bh(), or call_rcu_sched(),
	but are by -no- means guaranteed to be.  For example, if a given
	CPU goes offline while having an RCU callback pending, then that
	RCU callback will execute on some surviving CPU.  (If this was
	not the case, a self-spawning RCU callback would prevent the
	victim CPU from ever going offline.)

14.	Unlike other forms of RCU, it -is- permissible to block in an
	SRCU read-side critical section (demarcated by srcu_read_lock()
	and srcu_read_unlock()), hence the "SRCU": "sleepable RCU".
	Please note that if you don't need to sleep in read-side critical
	sections, you should be using RCU rather than SRCU, because RCU
	is almost always faster and easier to use than is SRCU.

	Also unlike other forms of RCU, explicit initialization and
	cleanup is required either at build time via DEFINE_SRCU()
	or DEFINE_STATIC_SRCU() or at runtime via init_srcu_struct()
	and cleanup_srcu_struct().  These last two are passed a
	"struct srcu_struct" that defines the scope of a given
	SRCU domain.  Once initialized, the srcu_struct is passed
	to srcu_read_lock(), srcu_read_unlock(), synchronize_srcu(),
	synchronize_srcu_expedited(), and call_srcu().  A given
	synchronize_srcu() waits only for SRCU read-side critical
	sections governed by srcu_read_lock() and srcu_read_unlock()
	calls that have been passed the same srcu_struct.  This property
	is what makes sleeping read-side critical sections tolerable --
	a given subsystem delays only its own updates, not those of other
	subsystems using SRCU.  Therefore, SRCU is less prone to OOM the
	system than RCU would be if RCU's read-side critical sections
	were permitted to sleep.

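	For example, here is a hedged sketch of an SRCU domain with a
	reader that sleeps; my_srcu and the ten-millisecond sleep are
	purely illustrative:

		DEFINE_STATIC_SRCU(my_srcu);

		void srcu_reader(void)
		{
			int idx;

			idx = srcu_read_lock(&my_srcu); /* Returns an index... */
			msleep(10); /* Sleeping is legal in SRCU readers. */
			srcu_read_unlock(&my_srcu, idx); /* ...passed back here. */
		}

		void srcu_updater(void)
		{
			/* First make the old data unreachable, then: */
			synchronize_srcu(&my_srcu); /* Waits only on my_srcu readers. */
			/* Now it is safe to reclaim. */
		}
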
	The ability to sleep in read-side critical sections does not
	come for free.  First, corresponding srcu_read_lock() and
	srcu_read_unlock() calls must be passed the same srcu_struct.
	Second, grace-period-detection overhead is amortized only
	over those updates sharing a given srcu_struct, rather than
	being globally amortized as they are for other forms of RCU.
	Therefore, SRCU should be used in preference to rw_semaphore
	only in extremely read-intensive situations, or in situations
	requiring SRCU's read-side deadlock immunity or low read-side
	realtime latency.  You should also consider percpu_rw_semaphore
	when you need lightweight readers.

	SRCU's expedited primitive (synchronize_srcu_expedited())
	never sends IPIs to other CPUs, so it is easier on
	real-time workloads than is synchronize_rcu_expedited(),
	synchronize_rcu_bh_expedited() or synchronize_sched_expedited().

	Note that rcu_dereference() and rcu_assign_pointer() relate to
	SRCU just as they do to other forms of RCU.

15.	The whole point of call_rcu(), synchronize_rcu(), and friends
	is to wait until all pre-existing readers have finished before
	carrying out some otherwise-destructive operation.  It is
	therefore critically important to -first- remove any path
	that readers can follow that could be affected by the
	destructive operation, and -only- -then- invoke call_rcu(),
	synchronize_rcu(), or friends.

	Because these primitives only wait for pre-existing readers, it
	is the caller's responsibility to guarantee that any subsequent
	readers will execute safely.

16.	The various RCU read-side primitives do -not- necessarily contain
	memory barriers.  You should therefore plan for the CPU
	and the compiler to freely reorder code into and out of RCU
	read-side critical sections.  It is the responsibility of the
	RCU update-side primitives to deal with this.

17.	Use CONFIG_PROVE_LOCKING, CONFIG_DEBUG_OBJECTS_RCU_HEAD, and the
	__rcu sparse checks to validate your RCU code.  These can help
	find problems as follows:

	CONFIG_PROVE_LOCKING: check that accesses to RCU-protected data
		structures are carried out under the proper RCU
		read-side critical section, while holding the right
		combination of locks, or whatever other conditions
		are appropriate.

	CONFIG_DEBUG_OBJECTS_RCU_HEAD: check that you don't pass the
		same object to call_rcu() (or friends) before an RCU
		grace period has elapsed since the last time that you
		passed that same object to call_rcu() (or friends).

	__rcu sparse checks: tag the pointer to the RCU-protected data
		structure with __rcu, and sparse will warn you if you
		access that pointer without the services of one of the
		variants of rcu_dereference().  (See the sketch below.)

	These debugging aids can help you find problems that are
	otherwise extremely difficult to spot.

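	For example, the item #1 sketch already tags its hypothetical
	global pointer so that sparse can catch unprotected accesses:

		static struct foo __rcu *gp;	/* __rcu: sparse-checked. */

		/* Within an RCU read-side critical section: */
		struct foo *p;

		p = rcu_dereference(gp); /* OK: the macro performs the cast. */
		/* p = gp; would instead draw a sparse address-space warning. */
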
18.	If you register a callback using call_rcu(), call_rcu_bh(),
	call_rcu_sched(), or call_srcu(), and pass in a function defined
	within a loadable module, then it is necessary to wait for
	all pending callbacks to be invoked after the last invocation
	and before unloading that module.  Note that it is absolutely
	-not- sufficient to wait for a grace period!  The current (say)
	synchronize_rcu() implementation waits only for all previous
	callbacks registered on the CPU that synchronize_rcu() is running
	on, but it is -not- guaranteed to wait for callbacks registered
	on other CPUs.

	You instead need to use one of the barrier functions:

	o	call_rcu() -> rcu_barrier()
	o	call_rcu_bh() -> rcu_barrier_bh()
	o	call_rcu_sched() -> rcu_barrier_sched()
	o	call_srcu() -> srcu_barrier()

	However, these barrier functions are absolutely -not- guaranteed
	to wait for a grace period.  In fact, if there are no call_rcu()
	callbacks waiting anywhere in the system, rcu_barrier() is within
	its rights to return immediately.

	So if you need to wait for both an RCU grace period and for
	all pre-existing call_rcu() callbacks, you will need to execute
	both rcu_barrier() and synchronize_rcu(), if necessary, using
	something like workqueues to execute them concurrently.

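	For example, a module using call_rcu() might use the following
	hedged exit-path sketch, where stop_new_callbacks() is a
	hypothetical stand-in for whatever prevents further call_rcu()
	invocations:

		static void __exit mymodule_exit(void)
		{
			stop_new_callbacks(); /* No new callbacks after this. */
			rcu_barrier();	/* Wait for all pending callbacks. */
			/* Only now may the callback function's text vanish. */
		}
		module_exit(mymodule_exit);
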
	See rcubarrier.txt for more information.