1<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN"
2 "http://www.w3.org/TR/html4/loose.dtd">
3 <html>
4 <head><title>A Tour Through RCU's Requirements [LWN.net]</title>
5<meta HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=utf-8">
6</head><body>
7<h1>A Tour Through RCU's Requirements</h1>
8
9<p>Copyright IBM Corporation, 2015</p>
10<p>Author: Paul E.&nbsp;McKenney</p>
11<p><i>The initial version of this document appeared in the
12<a href="https://lwn.net/">LWN</a> articles
13<a href="https://lwn.net/Articles/652156/">here</a>,
14<a href="https://lwn.net/Articles/652677/">here</a>, and
15<a href="https://lwn.net/Articles/653326/">here</a>.</i></p>
16
17<h2>Introduction</h2>
18
19<p>
20Read-copy update (RCU) is a synchronization mechanism that is often
21used as a replacement for reader-writer locking.
22RCU is unusual in that updaters do not block readers,
23which means that RCU's read-side primitives can be exceedingly fast
24and scalable.
25In addition, updaters can make useful forward progress concurrently
26with readers.
27However, all this concurrency between RCU readers and updaters does raise
28the question of exactly what RCU readers are doing, which in turn
29raises the question of exactly what RCU's requirements are.
30
31<p>
32This document therefore summarizes RCU's requirements, and can be thought
33of as an informal, high-level specification for RCU.
34It is important to understand that RCU's specification is primarily
35empirical in nature;
36in fact, I learned about many of these requirements the hard way.
37This situation might cause some consternation. However, not only
38has this learning process been a lot of fun, but it has also been
39a great privilege to work with so many people willing to apply
40technologies in interesting new ways.
41
42<p>
43All that aside, here are the categories of currently known RCU requirements:
44</p>
45
46<ol>
47<li> <a href="#Fundamental Requirements">
48 Fundamental Requirements</a>
49<li> <a href="#Fundamental Non-Requirements">Fundamental Non-Requirements</a>
50<li> <a href="#Parallelism Facts of Life">
51 Parallelism Facts of Life</a>
52<li> <a href="#Quality-of-Implementation Requirements">
53 Quality-of-Implementation Requirements</a>
54<li> <a href="#Linux Kernel Complications">
55 Linux Kernel Complications</a>
56<li> <a href="#Software-Engineering Requirements">
57 Software-Engineering Requirements</a>
58<li> <a href="#Other RCU Flavors">
59 Other RCU Flavors</a>
60<li> <a href="#Possible Future Changes">
61 Possible Future Changes</a>
62</ol>
63
64<p>
65This is followed by a <a href="#Summary">summary</a>;
66however, the answer to each quick quiz immediately follows the quiz itself.
67Select the big white space with your mouse to see the answer.
68
69<h2><a name="Fundamental Requirements">Fundamental Requirements</a></h2>
70
71<p>
72RCU's fundamental requirements are the closest thing RCU has to hard
73mathematical requirements.
74These are:
75
76<ol>
77<li> <a href="#Grace-Period Guarantee">
78 Grace-Period Guarantee</a>
79<li> <a href="#Publish-Subscribe Guarantee">
80 Publish-Subscribe Guarantee</a>
81<li> <a href="#Memory-Barrier Guarantees">
82 Memory-Barrier Guarantees</a>
83<li> <a href="#RCU Primitives Guaranteed to Execute Unconditionally">
84 RCU Primitives Guaranteed to Execute Unconditionally</a>
85<li> <a href="#Guaranteed Read-to-Write Upgrade">
86 Guaranteed Read-to-Write Upgrade</a>
87</ol>
88
89<h3><a name="Grace-Period Guarantee">Grace-Period Guarantee</a></h3>
90
91<p>
92RCU's grace-period guarantee is unusual in being premeditated:
93Jack Slingwine and I had this guarantee firmly in mind when we started
94work on RCU (then called &ldquo;rclock&rdquo;) in the early 1990s.
95That said, the past two decades of experience with RCU have produced
96a much more detailed understanding of this guarantee.
97
98<p>
99RCU's grace-period guarantee allows updaters to wait for the completion
100of all pre-existing RCU read-side critical sections.
101An RCU read-side critical section
102begins with the marker <tt>rcu_read_lock()</tt> and ends with
103the marker <tt>rcu_read_unlock()</tt>.
104These markers may be nested, and RCU treats a nested set as one
105big RCU read-side critical section.
106Production-quality implementations of <tt>rcu_read_lock()</tt> and
107<tt>rcu_read_unlock()</tt> are extremely lightweight, and in
108fact have exactly zero overhead in Linux kernels built for production
109use with <tt>CONFIG_PREEMPT=n</tt>.
110
111<p>
112This guarantee allows ordering to be enforced with extremely low
113overhead to readers, for example:
114
115<blockquote>
116<pre>
117 1 int x, y;
118 2
119 3 void thread0(void)
120 4 {
121 5 rcu_read_lock();
122 6 r1 = READ_ONCE(x);
123 7 r2 = READ_ONCE(y);
124 8 rcu_read_unlock();
125 9 }
12610
12711 void thread1(void)
12812 {
12913 WRITE_ONCE(x, 1);
13014 synchronize_rcu();
13115 WRITE_ONCE(y, 1);
13216 }
133</pre>
134</blockquote>
135
136<p>
137Because the <tt>synchronize_rcu()</tt> on line&nbsp;14 waits for
138all pre-existing readers, any instance of <tt>thread0()</tt> that
139loads a value of zero from <tt>x</tt> must complete before
140<tt>thread1()</tt> stores to <tt>y</tt>, so that instance must
141also load a value of zero from <tt>y</tt>.
142Similarly, any instance of <tt>thread0()</tt> that loads a value of
143one from <tt>y</tt> must have started after the
144<tt>synchronize_rcu()</tt> started, and must therefore also load
145a value of one from <tt>x</tt>.
146Therefore, the outcome:
147<blockquote>
148<pre>
149(r1 == 0 &amp;&amp; r2 == 1)
150</pre>
151</blockquote>
152cannot happen.
153
154<table>
155<tr><th>&nbsp;</th></tr>
156<tr><th align="left">Quick Quiz:</th></tr>
157<tr><td>
158 Wait a minute!
159 You said that updaters can make useful forward progress concurrently
160 with readers, but pre-existing readers will block
161 <tt>synchronize_rcu()</tt>!!!
162 Just who are you trying to fool???
163</td></tr>
164<tr><th align="left">Answer:</th></tr>
165<tr><td bgcolor="#ffffff"><font color="ffffff">
166 First, if updaters do not wish to be blocked by readers, they can use
167 <tt>call_rcu()</tt> or <tt>kfree_rcu()</tt>, which will
168 be discussed later.
169 Second, even when using <tt>synchronize_rcu()</tt>, the other
170 update-side code does run concurrently with readers, whether
171 pre-existing or not.
172</font></td></tr>
173<tr><td>&nbsp;</td></tr>
174</table>
175
176<p>
177This scenario resembles one of the first uses of RCU in
178<a href="https://en.wikipedia.org/wiki/DYNIX">DYNIX/ptx</a>,
179which managed a distributed lock manager's transition into
180a state suitable for handling recovery from node failure,
181more or less as follows:
182
183<blockquote>
184<pre>
185 1 #define STATE_NORMAL 0
186 2 #define STATE_WANT_RECOVERY 1
187 3 #define STATE_RECOVERING 2
188 4 #define STATE_WANT_NORMAL 3
189 5
190 6 int state = STATE_NORMAL;
191 7
192 8 void do_something_dlm(void)
193 9 {
19410 int state_snap;
19511
19612 rcu_read_lock();
19713 state_snap = READ_ONCE(state);
19814 if (state_snap == STATE_NORMAL)
19915 do_something();
20016 else
20117 do_something_carefully();
20218 rcu_read_unlock();
20319 }
20420
20521 void start_recovery(void)
20622 {
20723 WRITE_ONCE(state, STATE_WANT_RECOVERY);
20824 synchronize_rcu();
20925 WRITE_ONCE(state, STATE_RECOVERING);
21026 recovery();
21127 WRITE_ONCE(state, STATE_WANT_NORMAL);
21228 synchronize_rcu();
21329 WRITE_ONCE(state, STATE_NORMAL);
21430 }
215</pre>
216</blockquote>
217
218<p>
219The RCU read-side critical section in <tt>do_something_dlm()</tt>
220works with the <tt>synchronize_rcu()</tt> in <tt>start_recovery()</tt>
221to guarantee that <tt>do_something()</tt> never runs concurrently
222with <tt>recovery()</tt>, but with little or no synchronization
223overhead in <tt>do_something_dlm()</tt>.
224
225<table>
226<tr><th>&nbsp;</th></tr>
227<tr><th align="left">Quick Quiz:</th></tr>
228<tr><td>
229 Why is the <tt>synchronize_rcu()</tt> on line&nbsp;28 needed?
230</td></tr>
231<tr><th align="left">Answer:</th></tr>
232<tr><td bgcolor="#ffffff"><font color="ffffff">
233 Without that extra grace period, memory reordering could result in
234 <tt>do_something_dlm()</tt> executing <tt>do_something()</tt>
235 concurrently with the last bits of <tt>recovery()</tt>.
236</font></td></tr>
237<tr><td>&nbsp;</td></tr>
238</table>
239
240<p>
241In order to avoid fatal problems such as deadlocks,
242an RCU read-side critical section must not contain calls to
243<tt>synchronize_rcu()</tt>.
244Similarly, an RCU read-side critical section must not
245contain anything that waits, directly or indirectly, on completion of
246an invocation of <tt>synchronize_rcu()</tt>.
247
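<p>
For example, the following sketch (hypothetical code, not taken from the
Linux kernel) deadlocks itself:
the <tt>synchronize_rcu()</tt> cannot return until the enclosing read-side
critical section completes, but that critical section cannot complete
until <tt>synchronize_rcu()</tt> returns:

<blockquote>
<pre>
 1 void buggy_reader(void)
 2 {
 3   rcu_read_lock();
 4   do_something();
 5   synchronize_rcu(); /* BUG: Waits on our own critical section. */
 6   do_something_else();
 7   rcu_read_unlock();
 8 }
</pre>
</blockquote>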
248<p>
249Although RCU's grace-period guarantee is useful in and of itself, with
250<a href="https://lwn.net/Articles/573497/">quite a few use cases</a>,
251it would be good to be able to use RCU to coordinate read-side
252access to linked data structures.
253For this, the grace-period guarantee is not sufficient, as can
254be seen in function <tt>add_gp_buggy()</tt> below.
255We will look at the reader's code later, but in the meantime, just think of
256the reader as locklessly picking up the <tt>gp</tt> pointer,
257and, if the value loaded is non-<tt>NULL</tt>, locklessly accessing the
258<tt>-&gt;a</tt> and <tt>-&gt;b</tt> fields.
259
260<blockquote>
261<pre>
262 1 bool add_gp_buggy(int a, int b)
263 2 {
264 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
265 4 if (!p)
266 5     return false;
267 6 spin_lock(&amp;gp_lock);
268 7 if (rcu_access_pointer(gp)) {
269 8 spin_unlock(&amp;gp_lock);
270 9 return false;
27110 }
27211 p-&gt;a = a;
27312   p-&gt;b = b;
27413 gp = p; /* ORDERING BUG */
27514 spin_unlock(&amp;gp_lock);
27615 return true;
27716 }
278</pre>
279</blockquote>
280
281<p>
282The problem is that both the compiler and weakly ordered CPUs are within
283their rights to reorder this code as follows:
284
285<blockquote>
286<pre>
287 1 bool add_gp_buggy_optimized(int a, int b)
288 2 {
289 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
290 4 if (!p)
291 5     return false;
292 6 spin_lock(&amp;gp_lock);
293 7 if (rcu_access_pointer(gp)) {
294 8 spin_unlock(&amp;gp_lock);
295 9 return false;
29610 }
297<b>11 gp = p; /* ORDERING BUG */
29812 p-&gt;a = a;
29913   p-&gt;b = b;</b>
30014 spin_unlock(&amp;gp_lock);
30115 return true;
30216 }
303</pre>
304</blockquote>
305
306<p>
307If an RCU reader fetches <tt>gp</tt> just after
308<tt>add_gp_buggy_optimized</tt> executes line&nbsp;11,
309it will see garbage in the <tt>-&gt;a</tt> and <tt>-&gt;b</tt>
310fields.
311And this is but one of many ways in which compiler and hardware optimizations
312could cause trouble.
313Therefore, we clearly need some way to prevent the compiler and the CPU from
314reordering in this manner, which brings us to the publish-subscribe
315guarantee discussed in the next section.
316
317<h3><a name="Publish-Subscribe Guarantee">Publish/Subscribe Guarantee</a></h3>
318
319<p>
320RCU's publish-subscribe guarantee allows data to be inserted
321into a linked data structure without disrupting RCU readers.
322The updater uses <tt>rcu_assign_pointer()</tt> to insert the
323new data, and readers use <tt>rcu_dereference()</tt> to
324access data, whether new or old.
325The following shows an example of insertion:
326
327<blockquote>
328<pre>
329 1 bool add_gp(int a, int b)
330 2 {
331 3 p = kmalloc(sizeof(*p), GFP_KERNEL);
332 4 if (!p)
333 5     return false;
334 6 spin_lock(&amp;gp_lock);
335 7 if (rcu_access_pointer(gp)) {
336 8 spin_unlock(&amp;gp_lock);
337 9 return false;
33810 }
33911 p-&gt;a = a;
34012   p-&gt;b = b;
34113 rcu_assign_pointer(gp, p);
34214 spin_unlock(&amp;gp_lock);
34315 return true;
34416 }
345</pre>
346</blockquote>
347
348<p>
349The <tt>rcu_assign_pointer()</tt> on line&nbsp;13 is conceptually
350equivalent to a simple assignment statement, but also guarantees
351that its assignment will
352happen after the two assignments in lines&nbsp;11 and&nbsp;12,
353similar to the C11 <tt>memory_order_release</tt> store operation.
354It also prevents any number of &ldquo;interesting&rdquo; compiler
355optimizations, for example, the use of <tt>gp</tt> as a scratch
356location immediately preceding the assignment.
357
358<table>
359<tr><th>&nbsp;</th></tr>
360<tr><th align="left">Quick Quiz:</th></tr>
361<tr><td>
362 But <tt>rcu_assign_pointer()</tt> does nothing to prevent the
363 two assignments to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt>
364 from being reordered.
365 Can't that also cause problems?
366</td></tr>
367<tr><th align="left">Answer:</th></tr>
368<tr><td bgcolor="#ffffff"><font color="ffffff">
369 No, it cannot.
370 The readers cannot see either of these two fields until
371 the assignment to <tt>gp</tt>, by which time both fields are
372 fully initialized.
373 So reordering the assignments
374 to <tt>p-&gt;a</tt> and <tt>p-&gt;b</tt> cannot possibly
375 cause any problems.
376</font></td></tr>
377<tr><td>&nbsp;</td></tr>
378</table>
379
380<p>
381It is tempting to assume that the reader need not do anything special
382to control its accesses to the RCU-protected data,
383as shown in <tt>do_something_gp_buggy()</tt> below:
384
385<blockquote>
386<pre>
387 1 bool do_something_gp_buggy(void)
388 2 {
389 3 rcu_read_lock();
390 4 p = gp; /* OPTIMIZATIONS GALORE!!! */
391 5 if (p) {
392 6 do_something(p-&gt;a, p-&gt;b);
393 7 rcu_read_unlock();
394 8 return true;
395 9 }
39610 rcu_read_unlock();
39711 return false;
39812 }
399</pre>
400</blockquote>
401
402<p>
403However, this temptation must be resisted because there are a
404surprisingly large number of ways that the compiler
405(to say nothing of
406<a href="https://h71000.www7.hp.com/wizard/wiz_2637.html">DEC Alpha CPUs</a>)
407can trip this code up.
408For but one example, if the compiler were short of registers, it
409might choose to refetch from <tt>gp</tt> rather than keeping
410a separate copy in <tt>p</tt> as follows:
411
412<blockquote>
413<pre>
414 1 bool do_something_gp_buggy_optimized(void)
415 2 {
416 3 rcu_read_lock();
417 4 if (gp) { /* OPTIMIZATIONS GALORE!!! */
418<b> 5 do_something(gp-&gt;a, gp-&gt;b);</b>
419 6 rcu_read_unlock();
420 7 return true;
421 8 }
422 9 rcu_read_unlock();
42310 return false;
42411 }
425</pre>
426</blockquote>
427
428<p>
429If this function ran concurrently with a series of updates that
430replaced the current structure with a new one,
431the fetches of <tt>gp-&gt;a</tt>
432and <tt>gp-&gt;b</tt> might well come from two different structures,
433which could cause serious confusion.
434To prevent this (and much else besides), <tt>do_something_gp()</tt> uses
435<tt>rcu_dereference()</tt> to fetch from <tt>gp</tt>:
436
437<blockquote>
438<pre>
439 1 bool do_something_gp(void)
440 2 {
441 3 rcu_read_lock();
442 4 p = rcu_dereference(gp);
443 5 if (p) {
444 6 do_something(p-&gt;a, p-&gt;b);
445 7 rcu_read_unlock();
446 8 return true;
447 9 }
44810 rcu_read_unlock();
44911 return false;
45012 }
451</pre>
452</blockquote>
453
454<p>
455The <tt>rcu_dereference()</tt> uses volatile casts and (for DEC Alpha)
456memory barriers in the Linux kernel.
457Should a
458<a href="http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf">high-quality implementation of C11 <tt>memory_order_consume</tt> [PDF]</a>
459ever appear, then <tt>rcu_dereference()</tt> could be implemented
460as a <tt>memory_order_consume</tt> load.
461Regardless of the exact implementation, a pointer fetched by
462<tt>rcu_dereference()</tt> may not be used outside of the
463outermost RCU read-side critical section containing that
464<tt>rcu_dereference()</tt>, unless protection of
465the corresponding data element has been passed from RCU to some
466other synchronization mechanism, most commonly locking or
467<a href="https://www.kernel.org/doc/Documentation/RCU/rcuref.txt">reference counting</a>.
468
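<p>
For example, the following sketch (hypothetical code using the Linux
kernel's <tt>atomic_inc_not_zero()</tt>, with <tt>put_ref()</tt> standing
in for whatever release function the data structure provides) hands
protection of an element off from RCU to a reference count:

<blockquote>
<pre>
 1 rcu_read_lock();
 2 p = rcu_dereference(gp);
 3 if (p &amp;&amp; !atomic_inc_not_zero(&amp;p-&gt;refcnt))
 4   p = NULL; /* Element is already on its way to being freed. */
 5 rcu_read_unlock();
 6 if (p) {
 7   do_something_with(p); /* Safe: We hold a reference. */
 8   put_ref(p); /* Release the reference when done. */
 9 }
</pre>
</blockquote>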
469<p>
470In short, updaters use <tt>rcu_assign_pointer()</tt> and readers
471use <tt>rcu_dereference()</tt>, and these two RCU API elements
472work together to ensure that readers have a consistent view of
473newly added data elements.
474
475<p>
476Of course, it is also necessary to remove elements from RCU-protected
477data structures, for example, using the following process:
478
479<ol>
480<li> Remove the data element from the enclosing structure.
481<li> Wait for all pre-existing RCU read-side critical sections
482 to complete (because only pre-existing readers can possibly have
483 a reference to the newly removed data element).
484<li> At this point, only the updater has a reference to the
485 newly removed data element, so it can safely reclaim
486 the data element, for example, by passing it to <tt>kfree()</tt>.
487</ol>
488
489This process is implemented by <tt>remove_gp_synchronous()</tt>:
490
491<blockquote>
492<pre>
493 1 bool remove_gp_synchronous(void)
494 2 {
495 3 struct foo *p;
496 4
497 5 spin_lock(&amp;gp_lock);
498 6 p = rcu_access_pointer(gp);
499 7 if (!p) {
500 8 spin_unlock(&amp;gp_lock);
501 9 return false;
50210 }
50311 rcu_assign_pointer(gp, NULL);
50412 spin_unlock(&amp;gp_lock);
50513 synchronize_rcu();
50614 kfree(p);
50715 return true;
50816 }
509</pre>
510</blockquote>
511
512<p>
513This function is straightforward, with line&nbsp;13 waiting for a grace
514period before line&nbsp;14 frees the old data element.
515This waiting ensures that readers will reach line&nbsp;7 of
516<tt>do_something_gp()</tt> before the data element referenced by
517<tt>p</tt> is freed.
518The <tt>rcu_access_pointer()</tt> on line&nbsp;6 is similar to
519<tt>rcu_dereference()</tt>, except that:
520
521<ol>
522<li> The value returned by <tt>rcu_access_pointer()</tt>
523 cannot be dereferenced.
524 If you want to access the value pointed to as well as
525 the pointer itself, use <tt>rcu_dereference()</tt>
526 instead of <tt>rcu_access_pointer()</tt>.
527<li> The call to <tt>rcu_access_pointer()</tt> need not be
528 protected.
529 In contrast, <tt>rcu_dereference()</tt> must either be
530 within an RCU read-side critical section or in a code
531 segment where the pointer cannot change, for example, in
532 code protected by the corresponding update-side lock.
533</ol>
534
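<p>
For example, an updater that holds <tt>gp_lock</tt> and needs only to
test the pointer for <tt>NULL</tt> can use <tt>rcu_access_pointer()</tt>,
while a reader that dereferences the pointer must use
<tt>rcu_dereference()</tt>, as in the following sketch (which assumes
the <tt>gp</tt> and <tt>gp_lock</tt> of the earlier examples, plus a
hypothetical <tt>needs_init</tt> flag):

<blockquote>
<pre>
 1 spin_lock(&amp;gp_lock);
 2 if (!rcu_access_pointer(gp)) /* NULL check only, no dereferencing. */
 3   needs_init = true;
 4 spin_unlock(&amp;gp_lock);
 5
 6 rcu_read_lock();
 7 p = rcu_dereference(gp); /* Reader dereferences, so must subscribe. */
 8 if (p)
 9   do_something(p-&gt;a, p-&gt;b);
10 rcu_read_unlock();
</pre>
</blockquote>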
535<table>
536<tr><th>&nbsp;</th></tr>
537<tr><th align="left">Quick Quiz:</th></tr>
538<tr><td>
539 Without the <tt>rcu_dereference()</tt> or the
540 <tt>rcu_access_pointer()</tt>, what destructive optimizations
541 might the compiler make use of?
542</td></tr>
543<tr><th align="left">Answer:</th></tr>
544<tr><td bgcolor="#ffffff"><font color="ffffff">
545 Let's start with what happens to <tt>do_something_gp()</tt>
546 if it fails to use <tt>rcu_dereference()</tt>.
547 It could reuse a value formerly fetched from this same pointer.
548 It could also fetch the pointer from <tt>gp</tt> in a byte-at-a-time
549	manner, resulting in <i>load tearing</i>, in turn resulting in a bytewise
550	mash-up of two distinct pointer values.
551 It might even use value-speculation optimizations, where it makes
552 a wrong guess, but by the time it gets around to checking the
553 value, an update has changed the pointer to match the wrong guess.
554 Too bad about any dereferences that returned pre-initialization garbage
555 in the meantime!
556 </font>
557
558 <p><font color="ffffff">
559 For <tt>remove_gp_synchronous()</tt>, as long as all modifications
560 to <tt>gp</tt> are carried out while holding <tt>gp_lock</tt>,
561 the above optimizations are harmless.
562	However, <tt>sparse</tt> will complain if you
563 define <tt>gp</tt> with <tt>__rcu</tt> and then
564 access it without using
565 either <tt>rcu_access_pointer()</tt> or <tt>rcu_dereference()</tt>.
566</font></td></tr>
567<tr><td>&nbsp;</td></tr>
568</table>
569
570<p>
571In short, RCU's publish-subscribe guarantee is provided by the combination
572of <tt>rcu_assign_pointer()</tt> and <tt>rcu_dereference()</tt>.
573This guarantee allows data elements to be safely added to RCU-protected
574linked data structures without disrupting RCU readers.
575This guarantee can be used in combination with the grace-period
576guarantee to also allow data elements to be removed from RCU-protected
577linked data structures, again without disrupting RCU readers.
578
579<p>
580This guarantee was only partially premeditated.
581DYNIX/ptx used an explicit memory barrier for publication, but had nothing
582resembling <tt>rcu_dereference()</tt> for subscription, nor did it
583have anything resembling the <tt>smp_read_barrier_depends()</tt>
584that was later subsumed into <tt>rcu_dereference()</tt> and later
585still into <tt>READ_ONCE()</tt>.
586The need for these operations made itself known quite suddenly at a
587late-1990s meeting with the DEC Alpha architects, back in the days when
588DEC was still a free-standing company.
589It took the Alpha architects a good hour to convince me that any sort
590of barrier would ever be needed, and it then took me a good <i>two</i> hours
591to convince them that their documentation did not make this point clear.
592More recent work with the C and C++ standards committees has provided
593much education on tricks and traps from the compiler.
594In short, compilers were much less tricky in the early 1990s, but in
5952015, don't even think about omitting <tt>rcu_dereference()</tt>!
596
597<h3><a name="Memory-Barrier Guarantees">Memory-Barrier Guarantees</a></h3>
598
599<p>
600The previous section's simple linked-data-structure scenario clearly
601demonstrates the need for RCU's stringent memory-ordering guarantees on
602systems with more than one CPU:
603
604<ol>
605<li> Each CPU that has an RCU read-side critical section that
606 begins before <tt>synchronize_rcu()</tt> starts is
607 guaranteed to execute a full memory barrier between the time
608 that the RCU read-side critical section ends and the time that
609 <tt>synchronize_rcu()</tt> returns.
610 Without this guarantee, a pre-existing RCU read-side critical section
611 might hold a reference to the newly removed <tt>struct foo</tt>
612 after the <tt>kfree()</tt> on line&nbsp;14 of
613 <tt>remove_gp_synchronous()</tt>.
614<li> Each CPU that has an RCU read-side critical section that ends
615 after <tt>synchronize_rcu()</tt> returns is guaranteed
616 to execute a full memory barrier between the time that
617 <tt>synchronize_rcu()</tt> begins and the time that the RCU
618 read-side critical section begins.
619 Without this guarantee, a later RCU read-side critical section
620 running after the <tt>kfree()</tt> on line&nbsp;14 of
621 <tt>remove_gp_synchronous()</tt> might
622 later run <tt>do_something_gp()</tt> and find the
623 newly deleted <tt>struct foo</tt>.
624<li> If the task invoking <tt>synchronize_rcu()</tt> remains
625 on a given CPU, then that CPU is guaranteed to execute a full
626 memory barrier sometime during the execution of
627 <tt>synchronize_rcu()</tt>.
628 This guarantee ensures that the <tt>kfree()</tt> on
629 line&nbsp;14 of <tt>remove_gp_synchronous()</tt> really does
630 execute after the removal on line&nbsp;11.
631<li> If the task invoking <tt>synchronize_rcu()</tt> migrates
632 among a group of CPUs during that invocation, then each of the
633 CPUs in that group is guaranteed to execute a full memory barrier
634 sometime during the execution of <tt>synchronize_rcu()</tt>.
635 This guarantee also ensures that the <tt>kfree()</tt> on
636 line&nbsp;14 of <tt>remove_gp_synchronous()</tt> really does
637 execute after the removal on
638 line&nbsp;11, but also in the case where the thread executing the
639 <tt>synchronize_rcu()</tt> migrates in the meantime.
640</ol>
641
642<table>
643<tr><th>&nbsp;</th></tr>
644<tr><th align="left">Quick Quiz:</th></tr>
645<tr><td>
646 Given that multiple CPUs can start RCU read-side critical sections
647 at any time without any ordering whatsoever, how can RCU possibly
648 tell whether or not a given RCU read-side critical section starts
649 before a given instance of <tt>synchronize_rcu()</tt>?
650</td></tr>
651<tr><th align="left">Answer:</th></tr>
652<tr><td bgcolor="#ffffff"><font color="ffffff">
653 If RCU cannot tell whether or not a given
654 RCU read-side critical section starts before a
655 given instance of <tt>synchronize_rcu()</tt>,
656 then it must assume that the RCU read-side critical section
657 started first.
658 In other words, a given instance of <tt>synchronize_rcu()</tt>
659 can avoid waiting on a given RCU read-side critical section only
660 if it can prove that <tt>synchronize_rcu()</tt> started first.
661	</font>
662
663	<p><font color="ffffff">
664 A related question is &ldquo;When <tt>rcu_read_lock()</tt>
665 doesn't generate any code, why does it matter how it relates
666 to a grace period?&rdquo;
667 The answer is that it is not the relationship of
668 <tt>rcu_read_lock()</tt> itself that is important, but rather
669 the relationship of the code within the enclosed RCU read-side
670 critical section to the code preceding and following the
671 grace period.
672 If we take this viewpoint, then a given RCU read-side critical
673 section begins before a given grace period when some access
674 preceding the grace period observes the effect of some access
675 within the critical section, in which case none of the accesses
676 within the critical section may observe the effects of any
677 access following the grace period.
678	</font>
679
680	<p><font color="ffffff">
681 As of late 2016, mathematical models of RCU take this
682 viewpoint, for example, see slides&nbsp;62 and&nbsp;63
683 of the
684 <a href="http://www2.rdrop.com/users/paulmck/scalability/paper/LinuxMM.2016.10.04c.LCE.pdf">2016 LinuxCon EU</a>
685 presentation.
686</font></td></tr>
687<tr><td>&nbsp;</td></tr>
688</table>
689
690<table>
691<tr><th>&nbsp;</th></tr>
692<tr><th align="left">Quick Quiz:</th></tr>
693<tr><td>
694 The first and second guarantees require unbelievably strict ordering!
695	Are all these memory barriers <i>really</i> required?
696</td></tr>
697<tr><th align="left">Answer:</th></tr>
698<tr><td bgcolor="#ffffff"><font color="ffffff">
699 Yes, they really are required.
700 To see why the first guarantee is required, consider the following
701 sequence of events:
702 </font>
703
704 <ol>
705 <li> <font color="ffffff">
706 CPU 1: <tt>rcu_read_lock()</tt>
707 </font>
708 <li> <font color="ffffff">
709 CPU 1: <tt>q = rcu_dereference(gp);
710 /* Very likely to return p. */</tt>
711 </font>
712 <li> <font color="ffffff">
713 CPU 0: <tt>list_del_rcu(p);</tt>
714 </font>
715 <li> <font color="ffffff">
716 CPU 0: <tt>synchronize_rcu()</tt> starts.
717 </font>
718 <li> <font color="ffffff">
719 CPU 1: <tt>do_something_with(q-&gt;a);
720 /* No smp_mb(), so might happen after kfree(). */</tt>
721 </font>
722 <li> <font color="ffffff">
723 CPU 1: <tt>rcu_read_unlock()</tt>
724 </font>
725 <li> <font color="ffffff">
726 CPU 0: <tt>synchronize_rcu()</tt> returns.
727 </font>
728 <li> <font color="ffffff">
729 CPU 0: <tt>kfree(p);</tt>
730 </font>
731 </ol>
732
733 <p><font color="ffffff">
734 Therefore, there absolutely must be a full memory barrier between the
735 end of the RCU read-side critical section and the end of the
736 grace period.
737 </font>
738
739 <p><font color="ffffff">
740 The sequence of events demonstrating the necessity of the second rule
741 is roughly similar:
742 </font>
743
744 <ol>
745 <li> <font color="ffffff">CPU 0: <tt>list_del_rcu(p);</tt>
746 </font>
747 <li> <font color="ffffff">CPU 0: <tt>synchronize_rcu()</tt> starts.
748 </font>
749 <li> <font color="ffffff">CPU 1: <tt>rcu_read_lock()</tt>
750 </font>
751 <li> <font color="ffffff">CPU 1: <tt>q = rcu_dereference(gp);
752 /* Might return p if no memory barrier. */</tt>
753 </font>
754 <li> <font color="ffffff">CPU 0: <tt>synchronize_rcu()</tt> returns.
755 </font>
756 <li> <font color="ffffff">CPU 0: <tt>kfree(p);</tt>
757 </font>
758 <li> <font color="ffffff">
759 CPU 1: <tt>do_something_with(q-&gt;a); /* Boom!!! */</tt>
760 </font>
761 <li> <font color="ffffff">CPU 1: <tt>rcu_read_unlock()</tt>
762 </font>
763 </ol>
764
765 <p><font color="ffffff">
766 And similarly, without a memory barrier between the beginning of the
767 grace period and the beginning of the RCU read-side critical section,
768 CPU&nbsp;1 might end up accessing the freelist.
769 </font>
770
771 <p><font color="ffffff">
772 The &ldquo;as if&rdquo; rule of course applies, so that any
773 implementation that acts as if the appropriate memory barriers
774 were in place is a correct implementation.
775 That said, it is much easier to fool yourself into believing
776 that you have adhered to the as-if rule than it is to actually
777 adhere to it!
778</font></td></tr>
779<tr><td>&nbsp;</td></tr>
780</table>
781
782<table>
783<tr><th>&nbsp;</th></tr>
784<tr><th align="left">Quick Quiz:</th></tr>
785<tr><td>
786 You claim that <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
787 generate absolutely no code in some kernel builds.
788 This means that the compiler might arbitrarily rearrange consecutive
789 RCU read-side critical sections.
790 Given such rearrangement, if a given RCU read-side critical section
791 is done, how can you be sure that all prior RCU read-side critical
792 sections are done?
793 Won't the compiler rearrangements make that impossible to determine?
794</td></tr>
795<tr><th align="left">Answer:</th></tr>
796<tr><td bgcolor="#ffffff"><font color="ffffff">
797 In cases where <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
798 generate absolutely no code, RCU infers quiescent states only at
799 special locations, for example, within the scheduler.
800 Because calls to <tt>schedule()</tt> had better prevent calling-code
801 accesses to shared variables from being rearranged across the call to
802 <tt>schedule()</tt>, if RCU detects the end of a given RCU read-side
803 critical section, it will necessarily detect the end of all prior
804 RCU read-side critical sections, no matter how aggressively the
805 compiler scrambles the code.
806 </font>
807
808 <p><font color="ffffff">
809 Again, this all assumes that the compiler cannot scramble code across
810 calls to the scheduler, out of interrupt handlers, into the idle loop,
811 into user-mode code, and so on.
812 But if your kernel build allows that sort of scrambling, you have broken
813 far more than just RCU!
814</font></td></tr>
815<tr><td>&nbsp;</td></tr>
816</table>
817
818<p>
819Note that these memory-barrier requirements do not replace the fundamental
820RCU requirement that a grace period wait for all pre-existing readers.
821On the contrary, the memory barriers called out in this section must operate in
822such a way as to <i>enforce</i> this fundamental requirement.
823Of course, different implementations enforce this requirement in different
824ways, but enforce it they must.
825
826<h3><a name="RCU Primitives Guaranteed to Execute Unconditionally">RCU Primitives Guaranteed to Execute Unconditionally</a></h3>
827
828<p>
829The common-case RCU primitives are unconditional.
830They are invoked, they do their job, and they return, with no possibility
831of error, and no need to retry.
832This is a key RCU design philosophy.
833
834<p>
835However, this philosophy is pragmatic rather than pigheaded.
836If someone comes up with a good justification for a particular conditional
837RCU primitive, it might well be implemented and added.
838After all, this guarantee was reverse-engineered, not premeditated.
839The unconditional nature of the RCU primitives was initially an
840accident of implementation, and later experience with conditional
841synchronization primitives caused me to elevate this
842accident to a guarantee.
843Therefore, the justification for adding a conditional primitive to
844RCU would need to be based on detailed and compelling use cases.
845
846<h3><a name="Guaranteed Read-to-Write Upgrade">Guaranteed Read-to-Write Upgrade</a></h3>
847
848<p>
849As far as RCU is concerned, it is always possible to carry out an
850update within an RCU read-side critical section.
851For example, that RCU read-side critical section might search for
852a given data element, and then might acquire the update-side
853spinlock in order to update that element, all while remaining
854in that RCU read-side critical section.
855Of course, it is necessary to exit the RCU read-side critical section
856before invoking <tt>synchronize_rcu()</tt>; however, this
857inconvenience can be avoided through use of the
858<tt>call_rcu()</tt> and <tt>kfree_rcu()</tt> API members
859described later in this document.
860
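<p>
A sketch of such an upgrade (hypothetical code, assuming a per-element
spinlock <tt>-&gt;lock</tt> along with <tt>-&gt;key</tt> and
<tt>-&gt;data</tt> fields) might look as follows:

<blockquote>
<pre>
 1 rcu_read_lock();
 2 p = rcu_dereference(gp);
 3 if (p &amp;&amp; p-&gt;key == key) {
 4   spin_lock(&amp;p-&gt;lock); /* Upgrade to write... */
 5   p-&gt;data = newval;
 6   spin_unlock(&amp;p-&gt;lock); /* ...and back to read-only. */
 7 }
 8 rcu_read_unlock();
</pre>
</blockquote>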
861<table>
862<tr><th>&nbsp;</th></tr>
863<tr><th align="left">Quick Quiz:</th></tr>
864<tr><td>
865 But how does the upgrade-to-write operation exclude other readers?
866</td></tr>
867<tr><th align="left">Answer:</th></tr>
868<tr><td bgcolor="#ffffff"><font color="ffffff">
869 It doesn't, just like normal RCU updates, which also do not exclude
870 RCU readers.
871</font></td></tr>
872<tr><td>&nbsp;</td></tr>
873</table>
874
875<p>
876This guarantee allows lookup code to be shared between read-side
877and update-side code, and was premeditated, appearing in the earliest
878DYNIX/ptx RCU documentation.
879
880<h2><a name="Fundamental Non-Requirements">Fundamental Non-Requirements</a></h2>
881
882<p>
883RCU provides extremely lightweight readers, and its read-side guarantees,
884though quite useful, are correspondingly lightweight.
885It is therefore all too easy to assume that RCU is guaranteeing more
886than it really is.
887Of course, the list of things that RCU does not guarantee is infinitely
888long; however, the following sections list a few non-guarantees that
889have caused confusion.
890Except where otherwise noted, these non-guarantees were premeditated.
891
892<ol>
893<li> <a href="#Readers Impose Minimal Ordering">
894 Readers Impose Minimal Ordering</a>
895<li> <a href="#Readers Do Not Exclude Updaters">
896 Readers Do Not Exclude Updaters</a>
897<li> <a href="#Updaters Only Wait For Old Readers">
898 Updaters Only Wait For Old Readers</a>
899<li> <a href="#Grace Periods Don't Partition Read-Side Critical Sections">
900 Grace Periods Don't Partition Read-Side Critical Sections</a>
901<li> <a href="#Read-Side Critical Sections Don't Partition Grace Periods">
902 Read-Side Critical Sections Don't Partition Grace Periods</a>
903<li> <a href="#Disabling Preemption Does Not Block Grace Periods">
904 Disabling Preemption Does Not Block Grace Periods</a>
905</ol>
906
907<h3><a name="Readers Impose Minimal Ordering">Readers Impose Minimal Ordering</a></h3>
908
909<p>
910Reader-side markers such as <tt>rcu_read_lock()</tt> and
911<tt>rcu_read_unlock()</tt> provide absolutely no ordering guarantees
912except through their interaction with the grace-period APIs such as
913<tt>synchronize_rcu()</tt>.
914To see this, consider the following pair of threads:
915
916<blockquote>
917<pre>
918 1 void thread0(void)
919 2 {
920 3 rcu_read_lock();
921 4 WRITE_ONCE(x, 1);
922 5 rcu_read_unlock();
923 6 rcu_read_lock();
924 7 WRITE_ONCE(y, 1);
925 8 rcu_read_unlock();
926 9 }
92710
92811 void thread1(void)
92912 {
93013 rcu_read_lock();
93114 r1 = READ_ONCE(y);
93215 rcu_read_unlock();
93316 rcu_read_lock();
93417 r2 = READ_ONCE(x);
93518 rcu_read_unlock();
93619 }
937</pre>
938</blockquote>
939
940<p>
941After <tt>thread0()</tt> and <tt>thread1()</tt> execute
942concurrently, it is quite possible to have
943
944<blockquote>
945<pre>
946(r1 == 1 &amp;&amp; r2 == 0)
947</pre>
948</blockquote>
949
950(that is, <tt>y</tt> appears to have been assigned before <tt>x</tt>),
951which would not be possible if <tt>rcu_read_lock()</tt> and
952<tt>rcu_read_unlock()</tt> had much in the way of ordering
953properties.
954But they do not, so the CPU is within its rights
955to do significant reordering.
956This is by design: Any significant ordering constraints would slow down
957these fast-path APIs.
958
959<table>
960<tr><th>&nbsp;</th></tr>
961<tr><th align="left">Quick Quiz:</th></tr>
962<tr><td>
963 Can't the compiler also reorder this code?
964</td></tr>
965<tr><th align="left">Answer:</th></tr>
966<tr><td bgcolor="#ffffff"><font color="ffffff">
967 No, the volatile casts in <tt>READ_ONCE()</tt> and
968 <tt>WRITE_ONCE()</tt> prevent the compiler from reordering in
969 this particular case.
970</font></td></tr>
971<tr><td>&nbsp;</td></tr>
972</table>
973
974<h3><a name="Readers Do Not Exclude Updaters">Readers Do Not Exclude Updaters</a></h3>
975
976<p>
977Neither <tt>rcu_read_lock()</tt> nor <tt>rcu_read_unlock()</tt>
978exclude updates.
979All they do is prevent grace periods from ending.
980The following example illustrates this:
981
982<blockquote>
983<pre>
984 1 void thread0(void)
985 2 {
986 3 rcu_read_lock();
987 4 r1 = READ_ONCE(y);
988 5 if (r1) {
989 6 do_something_with_nonzero_x();
990 7 r2 = READ_ONCE(x);
991 8 WARN_ON(!r2); /* BUG!!! */
992 9 }
99310 rcu_read_unlock();
99411 }
99512
99613 void thread1(void)
99714 {
99815 spin_lock(&amp;my_lock);
99916 WRITE_ONCE(x, 1);
100017 WRITE_ONCE(y, 1);
100118 spin_unlock(&amp;my_lock);
100219 }
1003</pre>
1004</blockquote>
1005
1006<p>
1007If the <tt>thread0()</tt> function's <tt>rcu_read_lock()</tt>
1008excluded the <tt>thread1()</tt> function's update,
1009the <tt>WARN_ON()</tt> could never fire.
1010But the fact is that <tt>rcu_read_lock()</tt> does not exclude
1011much of anything aside from subsequent grace periods, of which
1012<tt>thread1()</tt> has none, so the
1013<tt>WARN_ON()</tt> can and does fire.
1014
1015<h3><a name="Updaters Only Wait For Old Readers">Updaters Only Wait For Old Readers</a></h3>
1016
1017<p>
1018It might be tempting to assume that after <tt>synchronize_rcu()</tt>
1019completes, there are no readers executing.
1020This temptation must be avoided because
1021new readers can start immediately after <tt>synchronize_rcu()</tt>
1022starts, and <tt>synchronize_rcu()</tt> is under no
1023obligation to wait for these new readers.
1024
1025<table>
1026<tr><th>&nbsp;</th></tr>
1027<tr><th align="left">Quick Quiz:</th></tr>
1028<tr><td>
1029	Suppose that <tt>synchronize_rcu()</tt> did wait until <i>all</i>
1030 readers had completed instead of waiting only on
1031 pre-existing readers.
1032 For how long would the updater be able to rely on there
1033 being no readers?
1034</td></tr>
1035<tr><th align="left">Answer:</th></tr>
1036<tr><td bgcolor="#ffffff"><font color="ffffff">
1037	For no time at all.
1038 Even if <tt>synchronize_rcu()</tt> were to wait until
1039 all readers had completed, a new reader might start immediately after
1040 <tt>synchronize_rcu()</tt> completed.
1041 Therefore, the code following
1042 <tt>synchronize_rcu()</tt> can <i>never</i> rely on there being
1043 no readers.
1044</font></td></tr>
1045<tr><td>&nbsp;</td></tr>
1046</table>
1047
1048<h3><a name="Grace Periods Don't Partition Read-Side Critical Sections">
1049Grace Periods Don't Partition Read-Side Critical Sections</a></h3>
1050
1051<p>
1052It is tempting to assume that if any part of one RCU read-side critical
1053section precedes a given grace period, and if any part of another RCU
1054read-side critical section follows that same grace period, then all of
1055the first RCU read-side critical section must precede all of the second.
1056However, this just isn't the case: A single grace period does not
1057partition the set of RCU read-side critical sections.
1058An example of this situation can be illustrated as follows, where
1059<tt>a</tt>, <tt>b</tt>, and <tt>c</tt> are initially all zero:
1060
1061<blockquote>
1062<pre>
1063 1 void thread0(void)
1064 2 {
1065 3 rcu_read_lock();
1066 4 WRITE_ONCE(a, 1);
1067 5 WRITE_ONCE(b, 1);
1068 6 rcu_read_unlock();
1069 7 }
1070 8
1071 9 void thread1(void)
107210 {
107311 r1 = READ_ONCE(a);
107412 synchronize_rcu();
107513 WRITE_ONCE(c, 1);
107614 }
107715
107816 void thread2(void)
107917 {
108018 rcu_read_lock();
108119 r2 = READ_ONCE(b);
108220 r3 = READ_ONCE(c);
108321 rcu_read_unlock();
108422 }
1085</pre>
1086</blockquote>
1087
1088<p>
1089It turns out that the outcome:
1090
1091<blockquote>
1092<pre>
1093(r1 == 1 &amp;&amp; r2 == 0 &amp;&amp; r3 == 1)
1094</pre>
1095</blockquote>
1096
1097is entirely possible.
1098The following figure shows how this can happen, with each circled
1099<tt>QS</tt> indicating the point at which RCU recorded a
1100<i>quiescent state</i> for each thread, that is, a state in which
1101RCU knows that the thread cannot be in the midst of an RCU read-side
1102critical section that started before the current grace period:
1103
1104<p><img src="GPpartitionReaders1.svg" alt="GPpartitionReaders1.svg" width="60%"></p>
1105
1106<p>
1107If it is necessary to partition RCU read-side critical sections in this
1108manner, it is necessary to use two grace periods, where the first
1109grace period is known to end before the second grace period starts:
1110
1111<blockquote>
1112<pre>
1113 1 void thread0(void)
1114 2 {
1115 3 rcu_read_lock();
1116 4 WRITE_ONCE(a, 1);
1117 5 WRITE_ONCE(b, 1);
1118 6 rcu_read_unlock();
1119 7 }
1120 8
1121 9 void thread1(void)
112210 {
112311 r1 = READ_ONCE(a);
112412 synchronize_rcu();
112513 WRITE_ONCE(c, 1);
112614 }
112715
112816 void thread2(void)
112917 {
113018 r2 = READ_ONCE(c);
113119 synchronize_rcu();
113220 WRITE_ONCE(d, 1);
113321 }
113422
113523 void thread3(void)
113624 {
113725 rcu_read_lock();
113826 r3 = READ_ONCE(b);
113927 r4 = READ_ONCE(d);
114028 rcu_read_unlock();
114129 }
1142</pre>
1143</blockquote>
1144
1145<p>
1146Here, if <tt>(r1 == 1)</tt>, then
1147<tt>thread0()</tt>'s write to <tt>b</tt> must happen
1148before the end of <tt>thread1()</tt>'s grace period.
1149If in addition <tt>(r4 == 1)</tt>, then
1150<tt>thread3()</tt>'s read from <tt>b</tt> must happen
1151after the beginning of <tt>thread2()</tt>'s grace period.
1152If it is also the case that <tt>(r2 == 1)</tt>, then the
1153end of <tt>thread1()</tt>'s grace period must precede the
1154beginning of <tt>thread2()</tt>'s grace period.
1155This means that the two RCU read-side critical sections cannot overlap,
1156guaranteeing that <tt>(r3 == 1)</tt>.
1157As a result, the outcome:
1158
1159<blockquote>
1160<pre>
1161(r1 == 1 &amp;&amp; r2 == 1 &amp;&amp; r3 == 0 &amp;&amp; r4 == 1)
1162</pre>
1163</blockquote>
1164
1165cannot happen.
1166
1167<p>
1168This non-requirement was also non-premeditated, but became apparent
1169when studying RCU's interaction with memory ordering.
1170
1171<h3><a name="Read-Side Critical Sections Don't Partition Grace Periods">
1172Read-Side Critical Sections Don't Partition Grace Periods</a></h3>
1173
1174<p>
1175It is also tempting to assume that if an RCU read-side critical section
1176happens between a pair of grace periods, then those grace periods cannot
1177overlap.
1178However, this temptation leads nowhere good, as can be illustrated by
1179the following, with all variables initially zero:
1180
1181<blockquote>
1182<pre>
1183 1 void thread0(void)
1184 2 {
1185 3 rcu_read_lock();
1186 4 WRITE_ONCE(a, 1);
1187 5 WRITE_ONCE(b, 1);
1188 6 rcu_read_unlock();
1189 7 }
1190 8
1191 9 void thread1(void)
119210 {
119311 r1 = READ_ONCE(a);
119412 synchronize_rcu();
119513 WRITE_ONCE(c, 1);
119614 }
119715
119816 void thread2(void)
119917 {
120018 rcu_read_lock();
120119 WRITE_ONCE(d, 1);
120220 r2 = READ_ONCE(c);
120321 rcu_read_unlock();
120422 }
120523
120624 void thread3(void)
120725 {
120826 r3 = READ_ONCE(d);
120927 synchronize_rcu();
121028 WRITE_ONCE(e, 1);
121129 }
121230
121331 void thread4(void)
121432 {
121533 rcu_read_lock();
121634 r4 = READ_ONCE(b);
121735 r5 = READ_ONCE(e);
121836 rcu_read_unlock();
121937 }
1220</pre>
1221</blockquote>
1222
1223<p>
1224In this case, the outcome:
1225
1226<blockquote>
1227<pre>
1228(r1 == 1 &amp;&amp; r2 == 1 &amp;&amp; r3 == 1 &amp;&amp; r4 == 0 &amp;&amp; r5 == 1)
1229</pre>
1230</blockquote>
1231
1232is entirely possible, as illustrated below:
1233
1234<p><img src="ReadersPartitionGP1.svg" alt="ReadersPartitionGP1.svg" width="100%"></p>
1235
1236<p>
1237Again, an RCU read-side critical section can overlap almost all of a
1238given grace period, just so long as it does not overlap the entire
1239grace period.
1240As a result, an RCU read-side critical section cannot partition a pair
1241of RCU grace periods.
1242
1243<table>
1244<tr><th>&nbsp;</th></tr>
1245<tr><th align="left">Quick Quiz:</th></tr>
1246<tr><td>
1247 How long a sequence of grace periods, each separated by an RCU
1248 read-side critical section, would be required to partition the RCU
1249 read-side critical sections at the beginning and end of the chain?
1250</td></tr>
1251<tr><th align="left">Answer:</th></tr>
1252<tr><td bgcolor="#ffffff"><font color="ffffff">
1253 In theory, an infinite number.
1254 In practice, an unknown number that is sensitive to both implementation
1255 details and timing considerations.
1256 Therefore, even in practice, RCU users must abide by the
1257 theoretical rather than the practical answer.
1258</font></td></tr>
1259<tr><td>&nbsp;</td></tr>
1260</table>
1261
1262<h3><a name="Disabling Preemption Does Not Block Grace Periods">
1263Disabling Preemption Does Not Block Grace Periods</a></h3>
1264
1265<p>
1266There was a time when disabling preemption on any given CPU would block
1267subsequent grace periods.
1268However, this was an accident of implementation and is not a requirement.
1269And in the current Linux-kernel implementation, disabling preemption
1270on a given CPU in fact does not block grace periods, as Oleg Nesterov
1271<a href="https://lkml.kernel.org/g/20150614193825.GA19582@redhat.com">demonstrated</a>.
1272
1273<p>
1274If you need a preempt-disable region to block grace periods, you need to add
1275<tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>, for example
1276as follows:
1277
1278<blockquote>
1279<pre>
1280 1 preempt_disable();
1281 2 rcu_read_lock();
1282 3 do_something();
1283 4 rcu_read_unlock();
1284 5 preempt_enable();
1285 6
1286 7 /* Spinlocks implicitly disable preemption. */
1287 8 spin_lock(&amp;mylock);
1288 9 rcu_read_lock();
128910 do_something();
129011 rcu_read_unlock();
129112 spin_unlock(&amp;mylock);
1292</pre>
1293</blockquote>
1294
1295<p>
1296In theory, you could enter the RCU read-side critical section first,
1297but it is more efficient to keep the entire RCU read-side critical
1298section contained in the preempt-disable region as shown above.
1299Of course, RCU read-side critical sections that extend outside of
1300preempt-disable regions will work correctly, but such critical sections
1301can be preempted, which forces <tt>rcu_read_unlock()</tt> to do
1302more work.
1303And no, this is <i>not</i> an invitation to enclose all of your RCU
1304read-side critical sections within preempt-disable regions, because
1305doing so would degrade real-time response.
1306
1307<p>
1308This non-requirement appeared with preemptible RCU.
1309If you need a grace period that waits on non-preemptible code regions, use
1310<a href="#Sched Flavor">RCU-sched</a>.
1311
1312<h2><a name="Parallelism Facts of Life">Parallelism Facts of Life</a></h2>
1313
1314<p>
1315These parallelism facts of life are by no means specific to RCU, but
1316the RCU implementation must abide by them.
1317They therefore bear repeating:
1318
1319<ol>
1320<li> Any CPU or task may be delayed at any time,
1321 and any attempts to avoid these delays by disabling
1322 preemption, interrupts, or whatever are completely futile.
1323 This is most obvious in preemptible user-level
1324 environments and in virtualized environments (where
1325 a given guest OS's VCPUs can be preempted at any time by
1326 the underlying hypervisor), but can also happen in bare-metal
1327 environments due to ECC errors, NMIs, and other hardware
1328 events.
1329 Although a delay of more than about 20 seconds can result
1330 in splats, the RCU implementation is obligated to use
1331 algorithms that can tolerate extremely long delays, but where
1332 &ldquo;extremely long&rdquo; is not long enough to allow
1333 wrap-around when incrementing a 64-bit counter.
1334<li> Both the compiler and the CPU can reorder memory accesses.
1335 Where it matters, RCU must use compiler directives and
1336 memory-barrier instructions to preserve ordering.
1337<li> Conflicting writes to memory locations in any given cache line
1338 will result in expensive cache misses.
1339 Greater numbers of concurrent writes and more-frequent
1340 concurrent writes will result in more dramatic slowdowns.
1341 RCU is therefore obligated to use algorithms that have
1342 sufficient locality to avoid significant performance and
1343 scalability problems.
1344<li> As a rough rule of thumb, only one CPU's worth of processing
1345 may be carried out under the protection of any given exclusive
1346 lock.
1347 RCU must therefore use scalable locking designs.
1348<li> Counters are finite, especially on 32-bit systems.
1349 RCU's use of counters must therefore tolerate counter wrap,
1350 or be designed such that counter wrap would take way more
1351 time than a single system is likely to run.
1352 An uptime of ten years is quite possible, a runtime
1353 of a century much less so.
1354 As an example of the latter, RCU's dyntick-idle nesting counter
1355 allows 54 bits for interrupt nesting level (this counter
1356 is 64 bits even on a 32-bit system).
1357 Overflowing this counter requires 2<sup>54</sup>
1358 half-interrupts on a given CPU without that CPU ever going idle.
1359	If a half-interrupt happened every microsecond, it would take
1360	570 years of runtime to overflow this counter (the arithmetic is
1361	sketched just after this list), which is currently believed to be an acceptably long time.
1362<li> Linux systems can have thousands of CPUs running a single
1363 Linux kernel in a single shared-memory environment.
1364 RCU must therefore pay close attention to high-end scalability.
1365</ol>
1366
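<p>
Checking the 570-year figure is straightforward arithmetic:

<blockquote>
<pre>
2^54 microseconds = 1.8 * 10^16 microseconds
                  = 1.8 * 10^10 seconds
                  = ~571 years (at about 3.16 * 10^7 seconds/year)
</pre>
</blockquote>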
1367<p>
1368This last parallelism fact of life means that RCU must pay special
1369attention to the preceding facts of life.
1370The idea that Linux might scale to systems with thousands of CPUs would
1371have been met with some skepticism in the 1990s, but these requirements
1372would otherwise have been unsurprising, even in the early 1990s.
1373
1374<h2><a name="Quality-of-Implementation Requirements">Quality-of-Implementation Requirements</a></h2>
1375
1376<p>
1377These sections list quality-of-implementation requirements.
1378Although an RCU implementation that ignores these requirements could
1379still be used, it would likely be subject to limitations that would
1380make it inappropriate for industrial-strength production use.
1381Classes of quality-of-implementation requirements are as follows:
1382
1383<ol>
1384<li> <a href="#Specialization">Specialization</a>
1385<li> <a href="#Performance and Scalability">Performance and Scalability</a>
1386<li> <a href="#Composability">Composability</a>
1387<li> <a href="#Corner Cases">Corner Cases</a>
1388</ol>
1389
1390<p>
1391These classes are covered in the following sections.
1392
1393<h3><a name="Specialization">Specialization</a></h3>
1394
1395<p>
1396RCU is and always has been intended primarily for read-mostly situations,
1397which means that RCU's read-side primitives are optimized, often at the
1398expense of its update-side primitives.
1399Experience thus far is captured by the following list of situations:
1400
1401<ol>
1402<li> Read-mostly data, where stale and inconsistent data is not
1403 a problem: RCU works great!
1404<li> Read-mostly data, where data must be consistent:
1405 RCU works well.
1406<li> Read-write data, where data must be consistent:
1407 RCU <i>might</i> work OK.
1408 Or not.
1409<li> Write-mostly data, where data must be consistent:
1410 RCU is very unlikely to be the right tool for the job,
1411 with the following exceptions, where RCU can provide:
1412 <ol type=a>
1413 <li> Existence guarantees for update-friendly mechanisms.
1414 <li> Wait-free read-side primitives for real-time use.
1415 </ol>
1416</ol>
1417
1418<p>
1419This focus on read-mostly situations means that RCU must interoperate
1420with other synchronization primitives.
1421For example, the <tt>add_gp()</tt> and <tt>remove_gp_synchronous()</tt>
1422examples discussed earlier use RCU to protect readers and locking to
1423coordinate updaters.
1424However, the need extends much farther, requiring that a variety of
1425synchronization primitives be legal within RCU read-side critical sections,
1426including spinlocks, sequence locks, atomic operations, reference
1427counters, and memory barriers.
1428
1429<table>
1430<tr><th>&nbsp;</th></tr>
1431<tr><th align="left">Quick Quiz:</th></tr>
1432<tr><td>
1433 What about sleeping locks?
1434</td></tr>
1435<tr><th align="left">Answer:</th></tr>
1436<tr><td bgcolor="#ffffff"><font color="ffffff">
1437 These are forbidden within Linux-kernel RCU read-side critical
1438 sections because it is not legal to place a quiescent state
1439 (in this case, voluntary context switch) within an RCU read-side
1440 critical section.
1441 However, sleeping locks may be used within userspace RCU read-side
1442 critical sections, and also within Linux-kernel sleepable RCU
1443 <a href="#Sleepable RCU"><font color="ffffff">(SRCU)</font></a>
1444 read-side critical sections.
1445	In addition, the -rt patchset turns spinlocks into
1446	sleeping locks so that the corresponding critical sections
1447	can be preempted, which also means that these sleeplockified
1448	spinlocks (but not other sleeping locks!) may be acquired within
1449 -rt-Linux-kernel RCU read-side critical sections.
1450 </font>
1451
1452 <p><font color="ffffff">
1453 Note that it <i>is</i> legal for a normal RCU read-side
1454	critical section to conditionally acquire a sleeping lock
1455	(as in <tt>mutex_trylock()</tt>), but only as long as it does
1456	not loop indefinitely attempting to conditionally acquire that
1457	sleeping lock.
1458 The key point is that things like <tt>mutex_trylock()</tt>
1459 either return with the mutex held, or return an error indication if
1460 the mutex was not immediately available.
1461 Either way, <tt>mutex_trylock()</tt> returns immediately without
1462 sleeping.
1463</font></td></tr>
1464<tr><td>&nbsp;</td></tr>
1465</table>
1466
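<p>
For example, the following sketch (hypothetical code, assuming a
per-element <tt>-&gt;mutex</tt> and a <tt>do_update()</tt> that never
sleeps) conditionally acquires a mutex from within an RCU read-side
critical section:

<blockquote>
<pre>
 1 rcu_read_lock();
 2 p = rcu_dereference(gp);
 3 if (p &amp;&amp; mutex_trylock(&amp;p-&gt;mutex)) {
 4   do_update(p); /* Must not sleep while in the critical section. */
 5   mutex_unlock(&amp;p-&gt;mutex);
 6 }
 7 rcu_read_unlock();
</pre>
</blockquote>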
1467<p>
1468It often comes as a surprise, but many algorithms do not require a
1469consistent view of data and can function quite well without one,
1470with network routing being the poster child.
1471Internet routing algorithms take significant time to propagate
1472updates, so that by the time an update arrives at a given system,
1473that system has been sending network traffic the wrong way for
1474a considerable length of time.
1475Having a few threads continue to send traffic the wrong way for a
1476few more milliseconds is clearly not a problem: In the worst case,
1477TCP retransmissions will eventually get the data where it needs to go.
1478In general, when tracking the state of the universe outside of the
1479computer, some level of inconsistency must be tolerated due to
1480speed-of-light delays if nothing else.
1481
1482<p>
1483Furthermore, uncertainty about external state is inherent in many cases.
1484For example, a pair of veterinarians might use heartbeat to determine
1485whether or not a given cat was alive.
1486But how long should they wait after the last heartbeat to decide that
1487the cat is in fact dead?
1488Waiting less than 400 milliseconds makes no sense because this would
1489mean that a relaxed cat would be considered to cycle between death
1490and life more than 100 times per minute.
1491Moreover, just as with human beings, a cat's heart might stop for
1492some period of time, so the exact wait period is a judgment call.
1493One of our pair of veterinarians might wait 30 seconds before pronouncing
1494the cat dead, while the other might insist on waiting a full minute.
1495The two veterinarians would then disagree on the state of the cat during
1496the final 30 seconds of the minute following the last heartbeat.
649e4368
PM
1497
1498<p>
1499Interestingly enough, this same situation applies to hardware.
1500When push comes to shove, how do we tell whether or not some
1501external server has failed?
1502We send messages to it periodically, and declare it failed if we
1503don't receive a response within a given period of time.
1504Policy decisions can usually tolerate short
1505periods of inconsistency.
1506The policy was decided some time ago, and is only now being put into
1507effect, so a few milliseconds of delay is normally inconsequential.

<p>
However, there are algorithms that absolutely must see consistent data.
For example, the translation from a user-level SystemV semaphore
ID to the corresponding in-kernel data structure is protected by RCU,
but it is absolutely forbidden to update a semaphore that has just been
removed.
In the Linux kernel, this need for consistency is accommodated by acquiring
spinlocks located in the in-kernel data structure from within
the RCU read-side critical section, and this is indicated by the
green box in the figure above.
Many other techniques may be used, and are in fact used within the
Linux kernel.
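
<p>
The following is a minimal sketch of this pattern; the
<tt>sem_lookup()</tt> helper, the <tt>do_semaphore_op()</tt> operation,
and the <tt>-&gt;lock</tt> and <tt>-&gt;deleted</tt> fields are
hypothetical stand-ins for the actual SystemV semaphore code:

<blockquote>
<pre>
 1 rcu_read_lock();
 2 sma = sem_lookup(id);  /* Hypothetical RCU-protected lookup. */
 3 if (sma) {
 4   spin_lock(&amp;sma-&gt;lock);
 5   if (!sma-&gt;deleted)  /* Recheck under the lock. */
 6     do_semaphore_op(sma);
 7   spin_unlock(&amp;sma-&gt;lock);
 8 }
 9 rcu_read_unlock();
</pre>
</blockquote>

<p>
The spinlock excludes the deletion path, so that rechecking the
<tt>-&gt;deleted</tt> flag under that lock ensures that the operation
is never applied to a semaphore that has already been removed.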

<p>
In short, RCU is not required to maintain consistency, and other
mechanisms may be used in concert with RCU when consistency is required.
RCU's specialization allows it to do its job extremely well, and its
ability to interoperate with other synchronization mechanisms allows
the right mix of synchronization tools to be used for a given job.

<h3><a name="Performance and Scalability">Performance and Scalability</a></h3>

<p>
Energy efficiency is a critical component of performance today,
and Linux-kernel RCU implementations must therefore avoid unnecessarily
awakening idle CPUs.
I cannot claim that this requirement was premeditated.
In fact, I learned of it during a telephone conversation in which I
was given &ldquo;frank and open&rdquo; feedback on the importance
of energy efficiency in battery-powered systems and on specific
energy-efficiency shortcomings of the Linux-kernel RCU implementation.
In my experience, the battery-powered embedded community will consider
any unnecessary wakeups to be extremely unfriendly acts.
So much so that mere Linux-kernel-mailing-list posts are
insufficient to vent their ire.

<p>
Memory consumption is not particularly important in most
situations, and has become decreasingly
so as memory sizes have expanded and memory
costs have plummeted.
However, as I learned from Matt Mackall's
<a href="http://elinux.org/Linux_Tiny-FAQ">bloatwatch</a>
efforts, memory footprint is critically important on single-CPU systems with
non-preemptible (<tt>CONFIG_PREEMPT=n</tt>) kernels, and thus
<a href="https://lkml.kernel.org/g/20090113221724.GA15307@linux.vnet.ibm.com">tiny RCU</a>
was born.
Josh Triplett has since taken over the small-memory banner with his
<a href="https://tiny.wiki.kernel.org/">Linux kernel tinification</a>
project, which resulted in
<a href="#Sleepable RCU">SRCU</a>
becoming optional for those kernels not needing it.

<p>
The remaining performance requirements are, for the most part,
unsurprising.
For example, in keeping with RCU's read-side specialization,
<tt>rcu_dereference()</tt> should have negligible overhead (for
example, suppression of a few minor compiler optimizations).
Similarly, in non-preemptible environments, <tt>rcu_read_lock()</tt> and
<tt>rcu_read_unlock()</tt> should have exactly zero overhead.
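
<p>
For non-preemptible kernels, this zero-overhead goal is met quite
directly, as suggested by the following greatly simplified sketch
(the production implementation adds debugging and lockdep-annotation
logic):

<blockquote>
<pre>
 1 static void rcu_read_lock(void)
 2 {
 3   preempt_disable();  /* With CONFIG_PREEMPT=n, just barrier(). */
 4 }
 5
 6 static void rcu_read_unlock(void)
 7 {
 8   preempt_enable();  /* With CONFIG_PREEMPT=n, just barrier(). */
 9 }
</pre>
</blockquote>

<p>
Because <tt>barrier()</tt> constrains only the compiler, these
functions emit no instructions at all, providing the advertised
exactly-zero runtime overhead.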

<p>
In preemptible environments, in the case where the RCU read-side
critical section was not preempted (as will be the case for the
highest-priority real-time process), <tt>rcu_read_lock()</tt> and
<tt>rcu_read_unlock()</tt> should have minimal overhead.
In particular, they should not contain atomic read-modify-write
operations, memory-barrier instructions, preemption disabling,
interrupt disabling, or backwards branches.
However, in the case where the RCU read-side critical section was preempted,
<tt>rcu_read_unlock()</tt> may acquire spinlocks and disable interrupts.
This is why it is better to nest an RCU read-side critical section
within a preempt-disable region than vice versa, at least in cases
where that critical section is short enough to avoid unduly degrading
real-time latencies.
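
<p>
For example, the preferred nesting order looks like the following
minimal sketch, in which <tt>do_quick_lookup()</tt> is a hypothetical
short operation:

<blockquote>
<pre>
 1 preempt_disable();
 2 rcu_read_lock();   /* Cannot be preempted... */
 3 do_quick_lookup();
 4 rcu_read_unlock(); /* ...so this stays on the fast path. */
 5 preempt_enable();
</pre>
</blockquote>

<p>
Because preemption is disabled throughout, the read-side critical
section cannot be preempted, which in turn means that this
<tt>rcu_read_unlock()</tt> need not acquire spinlocks or disable
interrupts.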

<p>
The <tt>synchronize_rcu()</tt> grace-period-wait primitive is
optimized for throughput.
It may therefore incur several milliseconds of latency in addition to
the duration of the longest RCU read-side critical section.
On the other hand, multiple concurrent invocations of
<tt>synchronize_rcu()</tt> are required to use batching optimizations
so that they can be satisfied by a single underlying grace-period-wait
operation.
For example, in the Linux kernel, it is not unusual for a single
grace-period-wait operation to serve more than
<a href="https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response">1,000 separate invocations</a>
of <tt>synchronize_rcu()</tt>, thus amortizing the per-invocation
overhead down to nearly zero.
However, the grace-period optimization is also required to avoid
measurable degradation of real-time scheduling and interrupt latencies.

<p>
In some cases, the multi-millisecond <tt>synchronize_rcu()</tt>
latencies are unacceptable.
In these cases, <tt>synchronize_rcu_expedited()</tt> may be used
instead, reducing the grace-period latency down to a few tens of
microseconds on small systems, at least in cases where the RCU read-side
critical sections are short.
There are currently no special latency requirements for
<tt>synchronize_rcu_expedited()</tt> on large systems, but,
consistent with the empirical nature of the RCU specification,
that is subject to change.
However, there most definitely are scalability requirements:
A storm of <tt>synchronize_rcu_expedited()</tt> invocations on 4096
CPUs should at least make reasonable forward progress.
In return for its shorter latencies, <tt>synchronize_rcu_expedited()</tt>
is permitted to impose modest degradation of real-time latency
on non-idle online CPUs.
Here, &ldquo;modest&rdquo; means roughly the same latency
degradation as a scheduling-clock interrupt.

<p>
There are a number of situations where even
<tt>synchronize_rcu_expedited()</tt>'s reduced grace-period
latency is unacceptable.
In these situations, the asynchronous <tt>call_rcu()</tt> can be
used in place of <tt>synchronize_rcu()</tt> as follows:

<blockquote>
<pre>
 1 struct foo {
 2   int a;
 3   int b;
 4   struct rcu_head rh;
 5 };
 6
 7 static void remove_gp_cb(struct rcu_head *rhp)
 8 {
 9   struct foo *p = container_of(rhp, struct foo, rh);
10
11   kfree(p);
12 }
13
14 bool remove_gp_asynchronous(void)
15 {
16   struct foo *p;
17
18   spin_lock(&amp;gp_lock);
19   p = rcu_access_pointer(gp);
20   if (!p) {
21     spin_unlock(&amp;gp_lock);
22     return false;
23   }
24   rcu_assign_pointer(gp, NULL);
25   call_rcu(&amp;p-&gt;rh, remove_gp_cb);
26   spin_unlock(&amp;gp_lock);
27   return true;
28 }
</pre>
</blockquote>

<p>
A definition of <tt>struct foo</tt> is finally needed, and appears
on lines&nbsp;1-5.
The function <tt>remove_gp_cb()</tt> is passed to <tt>call_rcu()</tt>
on line&nbsp;25, and will be invoked after the end of a subsequent
grace period.
This gets the same effect as <tt>remove_gp_synchronous()</tt>,
but without forcing the updater to wait for a grace period to elapse.
The <tt>call_rcu()</tt> function may be used in a number of
situations where neither <tt>synchronize_rcu()</tt> nor
<tt>synchronize_rcu_expedited()</tt> would be legal,
including within preempt-disable code, <tt>local_bh_disable()</tt> code,
interrupt-disable code, and interrupt handlers.
However, even <tt>call_rcu()</tt> is illegal within NMI handlers
and from idle and offline CPUs.
The callback function (<tt>remove_gp_cb()</tt> in this case) will be
executed in softirq (software interrupt) context within the
Linux kernel,
either within a real softirq handler or under the protection
of <tt>local_bh_disable()</tt>.
In both the Linux kernel and in userspace, it is bad practice to
write an RCU callback function that takes too long.
Long-running operations should be relegated to separate threads or
(in the Linux kernel) workqueues.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	Why does line&nbsp;19 use <tt>rcu_access_pointer()</tt>?
	After all, <tt>call_rcu()</tt> on line&nbsp;25 stores into the
	structure, which would interact badly with concurrent insertions.
	Doesn't this mean that <tt>rcu_dereference()</tt> is required?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Presumably the <tt>-&gt;gp_lock</tt> acquired on line&nbsp;18 excludes
	any changes, including any insertions that <tt>rcu_dereference()</tt>
	would protect against.
	Therefore, any insertions will be delayed until after
	<tt>-&gt;gp_lock</tt>
	is released on line&nbsp;26, which in turn means that
	<tt>rcu_access_pointer()</tt> suffices.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
However, all that <tt>remove_gp_cb()</tt> is doing is
invoking <tt>kfree()</tt> on the data element.
This is a common idiom, and is supported by <tt>kfree_rcu()</tt>,
which allows &ldquo;fire and forget&rdquo; operation as shown below:

<blockquote>
<pre>
 1 struct foo {
 2   int a;
 3   int b;
 4   struct rcu_head rh;
 5 };
 6
 7 bool remove_gp_faf(void)
 8 {
 9   struct foo *p;
10
11   spin_lock(&amp;gp_lock);
12   p = rcu_dereference(gp);
13   if (!p) {
14     spin_unlock(&amp;gp_lock);
15     return false;
16   }
17   rcu_assign_pointer(gp, NULL);
18   kfree_rcu(p, rh);
19   spin_unlock(&amp;gp_lock);
20   return true;
21 }
</pre>
</blockquote>

<p>
Note that <tt>remove_gp_faf()</tt> simply invokes
<tt>kfree_rcu()</tt> and proceeds, without any need to pay any
further attention to the subsequent grace period and <tt>kfree()</tt>.
It is permissible to invoke <tt>kfree_rcu()</tt> from the same
environments as for <tt>call_rcu()</tt>.
Interestingly enough, DYNIX/ptx had the equivalents of
<tt>call_rcu()</tt> and <tt>kfree_rcu()</tt>, but not
<tt>synchronize_rcu()</tt>.
This was due to the fact that RCU was not heavily used within DYNIX/ptx,
so the very few places that needed something like
<tt>synchronize_rcu()</tt> simply open-coded it.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	Earlier it was claimed that <tt>call_rcu()</tt> and
	<tt>kfree_rcu()</tt> allowed updaters to avoid being blocked
	by readers.
	But how can that be correct, given that the invocation of the callback
	and the freeing of the memory (respectively) must still wait for
	a grace period to elapse?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	We could define things this way, but keep in mind that this sort of
	definition would say that updates in garbage-collected languages
	cannot complete until the next time the garbage collector runs,
	which does not seem at all reasonable.
	The key point is that in most cases, an updater using either
	<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> can proceed to the
	next update as soon as it has invoked <tt>call_rcu()</tt> or
	<tt>kfree_rcu()</tt>, without having to wait for a subsequent
	grace period.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
But what if the updater must wait for the completion of code to be
executed after the end of the grace period, but has other tasks
that can be carried out in the meantime?
The polling-style <tt>get_state_synchronize_rcu()</tt> and
<tt>cond_synchronize_rcu()</tt> functions may be used for this
purpose, as shown below:

<blockquote>
<pre>
 1 bool remove_gp_poll(void)
 2 {
 3   struct foo *p;
 4   unsigned long s;
 5
 6   spin_lock(&amp;gp_lock);
 7   p = rcu_access_pointer(gp);
 8   if (!p) {
 9     spin_unlock(&amp;gp_lock);
10     return false;
11   }
12   rcu_assign_pointer(gp, NULL);
13   spin_unlock(&amp;gp_lock);
14   s = get_state_synchronize_rcu();
15   do_something_while_waiting();
16   cond_synchronize_rcu(s);
17   kfree(p);
18   return true;
19 }
</pre>
</blockquote>

<p>
On line&nbsp;14, <tt>get_state_synchronize_rcu()</tt> obtains a
&ldquo;cookie&rdquo; from RCU,
then line&nbsp;15 carries out other tasks,
and finally, line&nbsp;16 returns immediately if a grace period has
elapsed in the meantime, but otherwise waits as required.
The need for <tt>get_state_synchronize_rcu()</tt> and
<tt>cond_synchronize_rcu()</tt> has appeared quite recently,
so it is too early to tell whether they will stand the test of time.

<p>
RCU thus provides a range of tools to allow updaters to strike the
required tradeoff between latency, flexibility, and CPU overhead.

<h3><a name="Composability">Composability</a></h3>

<p>
Composability has received much attention in recent years, perhaps in part
due to the collision of multicore hardware with object-oriented techniques
designed in single-threaded environments for single-threaded use.
And in theory, RCU read-side critical sections may be composed, and in
fact may be nested arbitrarily deeply.
In practice, as with all real-world implementations of composable
constructs, there are limitations.

<p>
Implementations of RCU for which <tt>rcu_read_lock()</tt>
and <tt>rcu_read_unlock()</tt> generate no code, such as
Linux-kernel RCU when <tt>CONFIG_PREEMPT=n</tt>, can be
nested arbitrarily deeply.
After all, there is no overhead.
Except that if all these instances of <tt>rcu_read_lock()</tt>
and <tt>rcu_read_unlock()</tt> are visible to the compiler,
compilation will eventually fail due to exhausting memory,
mass storage, or user patience, whichever comes first.
If the nesting is not visible to the compiler, as is the case with
mutually recursive functions each in its own translation unit,
stack overflow will result.
If the nesting takes the form of loops, perhaps in the guise of tail
recursion, either the control variable
will overflow or (in the Linux kernel) you will get an RCU CPU stall warning.
Nevertheless, this class of RCU implementations is one
of the most composable constructs in existence.

<p>
RCU implementations that explicitly track nesting depth
are limited by the nesting-depth counter.
For example, the Linux kernel's preemptible RCU limits nesting to
<tt>INT_MAX</tt>.
This should suffice for almost all practical purposes.
That said, a consecutive pair of RCU read-side critical sections
between which there is an operation that waits for a grace period
cannot be enclosed in another RCU read-side critical section.
This is because it is not legal to wait for a grace period within
an RCU read-side critical section: To do so would result either
in deadlock or
in RCU implicitly splitting the enclosing RCU read-side critical
section, neither of which is conducive to a long-lived and prosperous
kernel.
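
<p>
For example, the following sketch is grossly illegal for exactly the
reasons just described:

<blockquote>
<pre>
 1 rcu_read_lock();
 2 do_something();
 3 synchronize_rcu();  /* BUG: grace-period wait within a reader!!! */
 4 do_something_else();
 5 rcu_read_unlock();
</pre>
</blockquote>

<p>
The grace period that <tt>synchronize_rcu()</tt> waits for cannot
complete until the enclosing RCU read-side critical section ends,
but that critical section is in turn waiting on
<tt>synchronize_rcu()</tt>, resulting in deadlock in most RCU
implementations.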

<p>
It is worth noting that RCU is not alone in limiting composability.
For example, many transactional-memory implementations prohibit
composing a pair of transactions separated by an irrevocable
operation (for example, a network receive operation).
For another example, lock-based critical sections can be composed
surprisingly freely, but only if deadlock is avoided.

<p>
In short, although RCU read-side critical sections are highly composable,
care is required in some situations, just as is the case for any other
composable synchronization mechanism.

<h3><a name="Corner Cases">Corner Cases</a></h3>

<p>
A given RCU workload might have an endless and intense stream of
RCU read-side critical sections, perhaps even so intense that there
was never a point in time during which there was not at least one
RCU read-side critical section in flight.
RCU cannot allow this situation to block grace periods: As long as
all the RCU read-side critical sections are finite, grace periods
must also be finite.

<p>
That said, preemptible RCU implementations could potentially result
in RCU read-side critical sections being preempted for long durations,
which has the effect of creating a long-duration RCU read-side
critical section.
This situation can arise only in heavily loaded systems, but systems using
real-time priorities are of course more vulnerable.
Therefore, RCU priority boosting is provided to help deal with this
case.
That said, the exact requirements on RCU priority boosting will likely
evolve as more experience accumulates.

<p>
Other workloads might have very high update rates.
Although one can argue that such workloads should instead use
something other than RCU, the fact remains that RCU must
handle such workloads gracefully.
This requirement is another factor driving batching of grace periods,
but it is also the driving force behind the checks for large numbers
of queued RCU callbacks in the <tt>call_rcu()</tt> code path.
Finally, high update rates should not delay RCU read-side critical
sections, although some small read-side delays can occur when using
<tt>synchronize_rcu_expedited()</tt>, courtesy of this function's use
of <tt>smp_call_function_single()</tt>.

<p>
Although all three of these corner cases were understood in the early
1990s, a simple user-level test consisting of <tt>close(open(path))</tt>
in a tight loop
in the early 2000s suddenly provided a much deeper appreciation of the
high-update-rate corner case.
This test also motivated the addition of RCU code to react to high update
rates: If a given CPU finds itself with more than 10,000
RCU callbacks queued, RCU takes evasive action by
more aggressively starting grace periods and more aggressively forcing
completion of grace-period processing.
This evasive action causes the grace period to complete more quickly,
but at the cost of restricting RCU's batching optimizations, thus
increasing the CPU overhead incurred by that grace period.

<h2><a name="Software-Engineering Requirements">
Software-Engineering Requirements</a></h2>

<p>
Between Murphy's Law and &ldquo;To err is human&rdquo;, it is necessary to
guard against mishaps and misuse:

<ol>
<li>	It is all too easy to forget to use <tt>rcu_read_lock()</tt>
	everywhere that it is needed, so kernels built with
	<tt>CONFIG_PROVE_RCU=y</tt> will splat if
	<tt>rcu_dereference()</tt> is used outside of an
	RCU read-side critical section.
	Update-side code can use <tt>rcu_dereference_protected()</tt>,
	which takes a
	<a href="https://lwn.net/Articles/371986/">lockdep expression</a>
	to indicate what is providing the protection.
	If the indicated protection is not provided, a lockdep splat
	is emitted.
	(A sketch pulling several of these diagnostic APIs together
	appears after this list.)

	<p>
	Code shared between readers and updaters can use
	<tt>rcu_dereference_check()</tt>, which also takes a
	lockdep expression, and emits a lockdep splat if neither
	<tt>rcu_read_lock()</tt> nor the indicated protection
	is in place.
	In addition, <tt>rcu_dereference_raw()</tt> is used in those
	(hopefully rare) cases where the required protection cannot
	be easily described.
	Finally, <tt>rcu_read_lock_held()</tt> is provided to
	allow a function to verify that it has been invoked within
	an RCU read-side critical section.
	I was made aware of this set of requirements shortly after Thomas
	Gleixner audited a number of RCU uses.
<li>	A given function might wish to check for RCU-related preconditions
	upon entry, before using any other RCU API.
	The <tt>rcu_lockdep_assert()</tt> macro does this job,
	asserting the expression in kernels having lockdep enabled
	and doing nothing otherwise.
<li>	It is also easy to forget to use <tt>rcu_assign_pointer()</tt>
	and <tt>rcu_dereference()</tt>, perhaps (incorrectly)
	substituting a simple assignment.
	To catch this sort of error, a given RCU-protected pointer may be
	tagged with <tt>__rcu</tt>, after which sparse
	will complain about simple-assignment accesses to that pointer.
	Arnd Bergmann made me aware of this requirement, and also
	supplied the needed
	<a href="https://lwn.net/Articles/376011/">patch series</a>.
<li>	Kernels built with <tt>CONFIG_DEBUG_OBJECTS_RCU_HEAD=y</tt>
	will splat if a data element is passed to <tt>call_rcu()</tt>
	twice in a row, without a grace period in between.
	(This error is similar to a double free.)
	The corresponding <tt>rcu_head</tt> structures that are
	dynamically allocated are automatically tracked, but
	<tt>rcu_head</tt> structures allocated on the stack
	must be initialized with <tt>init_rcu_head_on_stack()</tt>
	and cleaned up with <tt>destroy_rcu_head_on_stack()</tt>.
	Similarly, statically allocated non-stack <tt>rcu_head</tt>
	structures must be initialized with <tt>init_rcu_head()</tt>
	and cleaned up with <tt>destroy_rcu_head()</tt>.
	Mathieu Desnoyers made me aware of this requirement, and also
	supplied the needed
	<a href="https://lkml.kernel.org/g/20100319013024.GA28456@Krystal">patch</a>.
<li>	An infinite loop in an RCU read-side critical section will
	eventually trigger an RCU CPU stall warning splat, with
	the duration of &ldquo;eventually&rdquo; being controlled by the
	<tt>RCU_CPU_STALL_TIMEOUT</tt> <tt>Kconfig</tt> option, or,
	alternatively, by the
	<tt>rcupdate.rcu_cpu_stall_timeout</tt> boot/sysfs
	parameter.
	However, RCU is not obligated to produce this splat
	unless there is a grace period waiting on that particular
	RCU read-side critical section.

	<p>
	Some extreme workloads might intentionally delay
	RCU grace periods, and systems running those workloads can
	be booted with <tt>rcupdate.rcu_cpu_stall_suppress</tt>
	to suppress the splats.
	This kernel parameter may also be set via <tt>sysfs</tt>.
	Furthermore, RCU CPU stall warnings are counter-productive
	during sysrq dumps and during panics.
	RCU therefore supplies the <tt>rcu_sysrq_start()</tt> and
	<tt>rcu_sysrq_end()</tt> API members to be called before
	and after long sysrq dumps.
	RCU also supplies the <tt>rcu_panic()</tt> notifier that is
	automatically invoked at the beginning of a panic to suppress
	further RCU CPU stall warnings.

	<p>
	This requirement made itself known in the early 1990s, pretty
	much the first time that it was necessary to debug a CPU stall.
	That said, the initial implementation in DYNIX/ptx was quite
	generic in comparison with that of Linux.
<li>	Although it would be very good to detect pointers leaking out
	of RCU read-side critical sections, there is currently no
	good way of doing this.
	One complication is the need to distinguish between pointers
	leaking and pointers that have been handed off from RCU to
	some other synchronization mechanism, for example, reference
	counting.
<li>	In kernels built with <tt>CONFIG_RCU_TRACE=y</tt>, RCU-related
	information is provided via event tracing.
<li>	Open-coded use of <tt>rcu_assign_pointer()</tt> and
	<tt>rcu_dereference()</tt> to create typical linked
	data structures can be surprisingly error-prone.
	Therefore, RCU-protected
	<a href="https://lwn.net/Articles/609973/#RCU List APIs">linked lists</a>
	and, more recently, RCU-protected
	<a href="https://lwn.net/Articles/612100/">hash tables</a>
	are available.
	Many other special-purpose RCU-protected data structures are
	available in the Linux kernel and the userspace RCU library.
<li>	Some linked structures are created at compile time, but still
	require <tt>__rcu</tt> checking.
	The <tt>RCU_POINTER_INITIALIZER()</tt> macro serves this
	purpose.
<li>	It is not necessary to use <tt>rcu_assign_pointer()</tt>
	when creating linked structures that are to be published via
	a single external pointer.
	The <tt>RCU_INIT_POINTER()</tt> macro is provided for
	this task and also for assigning <tt>NULL</tt> pointers
	at runtime.
</ol>
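
<p>
The following minimal sketch, promised above, pulls several of these
diagnostic APIs together; the <tt>gp</tt> pointer, the
<tt>gp_lock</tt>, and the <tt>do_something_with()</tt> function are
hypothetical:

<blockquote>
<pre>
 1 struct foo __rcu *gp;  /* "__rcu" enables sparse checking. */
 2
 3 /* Reader. */
 4 rcu_read_lock();
 5 p = rcu_dereference(gp);  /* Splats if lockdep sees no protection. */
 6 do_something_with(p);
 7 rcu_read_unlock();
 8
 9 /* Updater, with gp_lock held. */
10 p = rcu_dereference_protected(gp, lockdep_is_held(&amp;gp_lock));
11 p-&gt;a = 1;
</pre>
</blockquote>

<p>
The reader relies on <tt>rcu_read_lock()</tt> for protection, while
the updater instead names its protection explicitly via the
<tt>lockdep_is_held()</tt> expression, allowing lockdep to verify
both cases at runtime and sparse to check the pointer accesses at
compile time.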

<p>
This is not a hard-and-fast list: RCU's diagnostic capabilities will
continue to be guided by the number and type of usage bugs found
in real-world RCU usage.

<h2><a name="Linux Kernel Complications">Linux Kernel Complications</a></h2>

<p>
The Linux kernel provides an interesting environment for all kinds of
software, including RCU.
Some of the relevant points of interest are as follows:

<ol>
<li>	<a href="#Configuration">Configuration</a>.
<li>	<a href="#Firmware Interface">Firmware Interface</a>.
<li>	<a href="#Early Boot">Early Boot</a>.
<li>	<a href="#Interrupts and NMIs">
	Interrupts and non-maskable interrupts (NMIs)</a>.
<li>	<a href="#Loadable Modules">Loadable Modules</a>.
<li>	<a href="#Hotplug CPU">Hotplug CPU</a>.
<li>	<a href="#Scheduler and RCU">Scheduler and RCU</a>.
<li>	<a href="#Tracing and RCU">Tracing and RCU</a>.
<li>	<a href="#Energy Efficiency">Energy Efficiency</a>.
<li>	<a href="#Scheduling-Clock Interrupts and RCU">
	Scheduling-Clock Interrupts and RCU</a>.
<li>	<a href="#Memory Efficiency">Memory Efficiency</a>.
<li>	<a href="#Performance, Scalability, Response Time, and Reliability">
	Performance, Scalability, Response Time, and Reliability</a>.
</ol>

<p>
This list is probably incomplete, but it does give a feel for the
most notable Linux-kernel complications.
Each of the following sections covers one of the above topics.

<h3><a name="Configuration">Configuration</a></h3>

<p>
RCU's goal is automatic configuration, so that almost nobody
needs to worry about RCU's <tt>Kconfig</tt> options.
And for almost all users, RCU does in fact work well
&ldquo;out of the box.&rdquo;

<p>
However, there are specialized use cases that are handled by
kernel boot parameters and <tt>Kconfig</tt> options.
Unfortunately, the <tt>Kconfig</tt> system will explicitly ask users
about new <tt>Kconfig</tt> options, which requires that almost all of
them be hidden behind a <tt>CONFIG_RCU_EXPERT</tt> <tt>Kconfig</tt> option.

<p>
This all should be quite obvious, but the fact remains that
Linus Torvalds recently had to
<a href="https://lkml.kernel.org/g/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com">remind</a>
me of this requirement.

<h3><a name="Firmware Interface">Firmware Interface</a></h3>

<p>
In many cases, the kernel obtains information about the system from the
firmware, and sometimes things are lost in translation.
Or the translation is accurate, but the original message is bogus.

<p>
For example, some systems' firmware overreports the number of CPUs,
sometimes by a large factor.
If RCU naively believed the firmware, as it used to do,
it would create too many per-CPU kthreads.
Although the resulting system will still run correctly, the extra
kthreads needlessly consume memory and can cause confusion
when they show up in <tt>ps</tt> listings.

<p>
RCU must therefore wait for a given CPU to actually come online before
it can allow itself to believe that the CPU actually exists.
The resulting &ldquo;ghost CPUs&rdquo; (which are never going to
come online) cause a number of
<a href="https://paulmck.livejournal.com/37494.html">interesting complications</a>.

<h3><a name="Early Boot">Early Boot</a></h3>

<p>
The Linux kernel's boot sequence is an interesting process,
and RCU is used early, even before <tt>rcu_init()</tt>
is invoked.
In fact, a number of RCU's primitives can be used as soon as the
initial task's <tt>task_struct</tt> is available and the
boot CPU's per-CPU variables are set up.
The read-side primitives (<tt>rcu_read_lock()</tt>,
<tt>rcu_read_unlock()</tt>, <tt>rcu_dereference()</tt>,
and <tt>rcu_access_pointer()</tt>) will operate normally very early on,
as will <tt>rcu_assign_pointer()</tt>.

<p>
Although <tt>call_rcu()</tt> may be invoked at any
time during boot, callbacks are not guaranteed to be invoked until after
all of RCU's kthreads have been spawned, which occurs at
<tt>early_initcall()</tt> time.
This delay in callback invocation is due to the fact that RCU does not
invoke callbacks until it is fully initialized, and this full initialization
cannot occur until after the scheduler has initialized itself to the
point where RCU can spawn and run its kthreads.
In theory, it would be possible to invoke callbacks earlier;
however, this is not a panacea because there would be severe restrictions
on what operations those callbacks could invoke.

<p>
Perhaps surprisingly, <tt>synchronize_rcu()</tt>,
<a href="#Bottom-Half Flavor"><tt>synchronize_rcu_bh()</tt></a>
(<a href="#Bottom-Half Flavor">discussed below</a>),
<a href="#Sched Flavor"><tt>synchronize_sched()</tt></a>,
<tt>synchronize_rcu_expedited()</tt>,
<tt>synchronize_rcu_bh_expedited()</tt>, and
<tt>synchronize_sched_expedited()</tt>
will all operate normally
during very early boot, the reason being that there is only one CPU
and preemption is disabled.
This means that the call to <tt>synchronize_rcu()</tt> (or friends)
is itself a quiescent
state and thus a grace period, so the early-boot implementation can
be a no-op.

<p>
However, once the scheduler has spawned its first kthread, this early
boot trick fails for <tt>synchronize_rcu()</tt> (as well as for
<tt>synchronize_rcu_expedited()</tt>) in <tt>CONFIG_PREEMPT=y</tt>
kernels.
The reason is that an RCU read-side critical section might be preempted,
which means that a subsequent <tt>synchronize_rcu()</tt> really does have
to wait for something, as opposed to simply returning immediately.
Unfortunately, <tt>synchronize_rcu()</tt> can't do this until all of
its kthreads are spawned, which doesn't happen until
<tt>early_initcall()</tt> time.
But this is no excuse: RCU is nevertheless required to correctly handle
synchronous grace periods during this time period.
Once all of its kthreads are up and running, RCU starts running
normally.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	How can RCU possibly handle grace periods before all of its
	kthreads have been spawned???
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Very carefully!
	</font>

	<p><font color="ffffff">
	During the &ldquo;dead zone&rdquo; between the time that the
	scheduler spawns the first task and the time that all of RCU's
	kthreads have been spawned, all synchronous grace periods are
	handled by the expedited grace-period mechanism.
	At runtime, this expedited mechanism relies on workqueues, but
	during the dead zone the requesting task itself drives the
	desired expedited grace period.
	Because dead-zone execution takes place within task context,
	everything works.
	Once the dead zone ends, expedited grace periods go back to
	using workqueues, as is required to avoid problems that would
	otherwise occur when a user task receives a POSIX signal while
	driving an expedited grace period.
	</font>

	<p><font color="ffffff">
	And yes, this does mean that it is unhelpful to send POSIX
	signals to random tasks between the time that the scheduler
	spawns its first kthread and the time that RCU's kthreads
	have all been spawned.
	If there ever turns out to be a good reason for sending POSIX
	signals during that time, appropriate adjustments will be made.
	(If it turns out that POSIX signals are sent during this time for
	no good reason, other adjustments will be made, appropriate
	or otherwise.)
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
I learned of these boot-time requirements as a result of a series of
system hangs.

<h3><a name="Interrupts and NMIs">Interrupts and NMIs</a></h3>

<p>
The Linux kernel has interrupts, and RCU read-side critical sections are
legal within interrupt handlers and within interrupt-disabled regions
of code, as are invocations of <tt>call_rcu()</tt>.

<p>
Some Linux-kernel architectures can enter an interrupt handler from
non-idle process context, and then just never leave it, instead stealthily
transitioning back to process context.
This trick is sometimes used to invoke system calls from inside the kernel.
These &ldquo;half-interrupts&rdquo; mean that RCU has to be very careful
about how it counts interrupt nesting levels.
I learned of this requirement the hard way during a rewrite
of RCU's dyntick-idle code.

<p>
The Linux kernel has non-maskable interrupts (NMIs), and
RCU read-side critical sections are legal within NMI handlers.
Thankfully, RCU update-side primitives, including
<tt>call_rcu()</tt>, are prohibited within NMI handlers.

<p>
The name notwithstanding, some Linux-kernel architectures
can have nested NMIs, which RCU must handle correctly.
Andy Lutomirski
<a href="https://lkml.kernel.org/g/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com">surprised me</a>
with this requirement;
he also kindly surprised me with
<a href="https://lkml.kernel.org/g/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com">an algorithm</a>
that meets this requirement.

<h3><a name="Loadable Modules">Loadable Modules</a></h3>

<p>
The Linux kernel has loadable modules, and these modules can
also be unloaded.
After a given module has been unloaded, any attempt to call
one of its functions results in a segmentation fault.
The module-unload functions must therefore cancel any
delayed calls to loadable-module functions, for example,
any outstanding <tt>mod_timer()</tt> must be dealt with
via <tt>del_timer_sync()</tt> or similar.

<p>
Unfortunately, there is no way to cancel an RCU callback;
once you invoke <tt>call_rcu()</tt>, the callback function is
going to eventually be invoked, unless the system goes down first.
Because it is normally considered socially irresponsible to crash the system
in response to a module unload request, we need some other way
to deal with in-flight RCU callbacks.

<p>
RCU therefore provides
<tt><a href="https://lwn.net/Articles/217484/">rcu_barrier()</a></tt>,
which waits until all in-flight RCU callbacks have been invoked.
If a module uses <tt>call_rcu()</tt>, its exit function should therefore
prevent any future invocation of <tt>call_rcu()</tt>, then invoke
<tt>rcu_barrier()</tt>.
In theory, the underlying module-unload code could invoke
<tt>rcu_barrier()</tt> unconditionally, but in practice this would
incur unacceptable latencies.
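
<p>
A module's exit function might therefore look something like the
following sketch, in which <tt>foo_exit()</tt> and
<tt>foo_unregister()</tt> are hypothetical functions, the latter
preventing any subsequent invocations of <tt>call_rcu()</tt>:

<blockquote>
<pre>
 1 static void __exit foo_exit(void)
 2 {
 3   foo_unregister();  /* Stop posting new RCU callbacks. */
 4   rcu_barrier();     /* Wait for all in-flight callbacks. */
 5 }
</pre>
</blockquote>

<p>
Only after <tt>rcu_barrier()</tt> returns is it safe for the module's
text and data, including its callback functions, to be unloaded.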

<p>
Nikita Danilov noted this requirement for an analogous filesystem-unmount
situation, and Dipankar Sarma incorporated <tt>rcu_barrier()</tt> into RCU.
The need for <tt>rcu_barrier()</tt> for module unloading became
apparent later.

<p>
<b>Important note</b>: The <tt>rcu_barrier()</tt> function is not,
repeat, <i>not</i>, obligated to wait for a grace period.
It is instead only required to wait for RCU callbacks that have
already been posted.
Therefore, if there are no RCU callbacks posted anywhere in the system,
<tt>rcu_barrier()</tt> is within its rights to return immediately.
Even if there are callbacks posted, <tt>rcu_barrier()</tt> does not
necessarily need to wait for a grace period.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	Wait a minute!
	Each RCU callback must wait for a grace period to complete,
	and <tt>rcu_barrier()</tt> must wait for each pre-existing
	callback to be invoked.
	Doesn't <tt>rcu_barrier()</tt> therefore need to wait for
	a full grace period if there is even one callback posted anywhere
	in the system?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Absolutely not!!!
	</font>

	<p><font color="ffffff">
	Yes, each RCU callback must wait for a grace period to complete,
	but it might well be partly (or even completely) finished waiting
	by the time <tt>rcu_barrier()</tt> is invoked.
	In that case, <tt>rcu_barrier()</tt> need only wait for the
	remaining portion of the grace period to elapse.
	So even if there are quite a few callbacks posted,
	<tt>rcu_barrier()</tt> might well return quite quickly.
	</font>

	<p><font color="ffffff">
	So if you need to wait for a grace period as well as for all
	pre-existing callbacks, you will need to invoke both
	<tt>synchronize_rcu()</tt> and <tt>rcu_barrier()</tt>.
	If latency is a concern, you can always use workqueues
	to invoke them concurrently.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<h3><a name="Hotplug CPU">Hotplug CPU</a></h3>

<p>
The Linux kernel supports CPU hotplug, which means that CPUs
can come and go.
It is of course illegal to use any RCU API member from an offline CPU,
with the exception of <a href="#Sleepable RCU">SRCU</a> read-side
critical sections.
This requirement was present from day one in DYNIX/ptx, but
on the other hand, the Linux kernel's CPU-hotplug implementation
is &ldquo;interesting.&rdquo;

<p>
The Linux-kernel CPU-hotplug implementation has notifiers that
are used to allow the various kernel subsystems (including RCU)
to respond appropriately to a given CPU-hotplug operation.
Most RCU operations may be invoked from CPU-hotplug notifiers,
including even synchronous grace-period operations such as
<tt>synchronize_rcu()</tt> and <tt>synchronize_rcu_expedited()</tt>.

<p>
However, all-callback-wait operations such as
<tt>rcu_barrier()</tt> are not supported, due to the
fact that there are phases of CPU-hotplug operations where
the outgoing CPU's callbacks will not be invoked until after
the CPU-hotplug operation ends, which could result in deadlock.
Furthermore, <tt>rcu_barrier()</tt> blocks CPU-hotplug operations
during its execution, which results in another type of deadlock
when invoked from a CPU-hotplug notifier.

<h3><a name="Scheduler and RCU">Scheduler and RCU</a></h3>

<p>
RCU depends on the scheduler, and the scheduler uses RCU to
protect some of its data structures.
This means the scheduler is forbidden from acquiring
the runqueue locks and the priority-inheritance locks
in the middle of an outermost RCU read-side critical section unless either
(1)&nbsp;it releases them before exiting that same
RCU read-side critical section, or
(2)&nbsp;interrupts are disabled across
that entire RCU read-side critical section.
This same prohibition also applies (recursively!) to any lock that is acquired
while holding any lock to which this prohibition applies.
Adhering to this rule prevents preemptible RCU from invoking
<tt>rcu_read_unlock_special()</tt> while either runqueue or
priority-inheritance locks are held, thus avoiding deadlock.

<p>
Prior to v4.4, it was only necessary to disable preemption across
RCU read-side critical sections that acquired scheduler locks.
In v4.4, expedited grace periods started using IPIs, and these
IPIs could force a <tt>rcu_read_unlock()</tt> to take the slowpath.
Therefore, this expedited-grace-period change required disabling of
interrupts, not just preemption.
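
<p>
Of the two legal patterns, the interrupt-disabling one therefore looks
like this minimal sketch, in which <tt>use_scheduler_locks()</tt> is a
hypothetical operation that acquires and releases runqueue or
priority-inheritance locks:

<blockquote>
<pre>
 1 local_irq_save(flags);
 2 rcu_read_lock();
 3 use_scheduler_locks();
 4 rcu_read_unlock();  /* Any special processing is safely deferred. */
 5 local_irq_restore(flags);
</pre>
</blockquote>

<p>
With interrupts disabled across the entire critical section, neither
preemption nor expedited-grace-period IPIs can force this
<tt>rcu_read_unlock()</tt> onto its lock-acquiring slowpath.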

<p>
For RCU's part, the preemptible-RCU <tt>rcu_read_unlock()</tt>
implementation must be written carefully to avoid similar deadlocks.
In particular, <tt>rcu_read_unlock()</tt> must tolerate an
interrupt where the interrupt handler invokes both
<tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>.
This possibility requires <tt>rcu_read_unlock()</tt> to use
negative nesting levels to avoid destructive recursion via
the interrupt handler's use of RCU.

<p>
This pair of mutual scheduler-RCU requirements came as a
<a href="https://lwn.net/Articles/453002/">complete surprise</a>.

<p>
As noted above, RCU makes use of kthreads, and it is necessary to
avoid excessive CPU-time accumulation by these kthreads.
This requirement was no surprise, but RCU's violation of it
when running context-switch-heavy workloads when built with
<tt>CONFIG_NO_HZ_FULL=y</tt>
<a href="http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf">did come as a surprise [PDF]</a>.
RCU has made good progress towards meeting this requirement, even
for context-switch-heavy <tt>CONFIG_NO_HZ_FULL=y</tt> workloads,
but there is room for further improvement.

<h3><a name="Tracing and RCU">Tracing and RCU</a></h3>

<p>
It is possible to use tracing on RCU code, but tracing itself
uses RCU.
For this reason, <tt>rcu_dereference_raw_notrace()</tt>
is provided for use by tracing, which avoids the destructive
recursion that could otherwise ensue.
This API is also used by virtualization in some architectures,
where RCU readers execute in environments in which tracing
cannot be used.
The tracing folks both located the requirement and provided the
needed fix, so this surprise requirement was relatively painless.

<h3><a name="Energy Efficiency">Energy Efficiency</a></h3>

<p>
Interrupting idle CPUs is considered socially unacceptable,
especially by people with battery-powered embedded systems.
RCU therefore conserves energy by detecting which CPUs are
idle, including tracking CPUs that have been interrupted from idle.
This is a large part of the energy-efficiency requirement,
so I learned of this via an irate phone call.

<p>
Because RCU avoids interrupting idle CPUs, it is illegal to
execute an RCU read-side critical section on an idle CPU.
(Kernels built with <tt>CONFIG_PROVE_RCU=y</tt> will splat
if you try it.)
The <tt>RCU_NONIDLE()</tt> macro and <tt>_rcuidle</tt>
event tracing are provided to work around this restriction.
In addition, <tt>rcu_is_watching()</tt> may be used to
test whether or not it is currently legal to run RCU read-side
critical sections on this CPU.
I learned of the need for diagnostics on the one hand
and <tt>RCU_NONIDLE()</tt> on the other while inspecting
idle-loop code.
Steven Rostedt supplied <tt>_rcuidle</tt> event tracing,
which is used quite heavily in the idle loop.
However, there are some restrictions on the code placed within
<tt>RCU_NONIDLE()</tt>:

<ol>
<li>	Blocking is prohibited.
	In practice, this is not a serious restriction given that idle
	tasks are prohibited from blocking to begin with.
<li>	Although <tt>RCU_NONIDLE()</tt> invocations may be nested, they
	cannot nest indefinitely deeply.
	However, given that they can be nested on the order of a million
	deep, even on 32-bit systems, this should not be a serious
	restriction.
	This nesting limit would probably be reached long after the
	compiler OOMed or the stack overflowed.
<li>	Any code path that enters <tt>RCU_NONIDLE()</tt> must sequence
	out of that same <tt>RCU_NONIDLE()</tt>.
	For example, the following is grossly illegal:

	<blockquote>
	<pre>
 1 RCU_NONIDLE({
 2   do_something();
 3   goto bad_idea;  /* BUG!!! */
 4   do_something_else();});
 5 bad_idea:
	</pre>
	</blockquote>

	<p>
	It is just as illegal to transfer control into the middle of
	<tt>RCU_NONIDLE()</tt>'s argument.
	Yes, in theory, you could transfer in as long as you also
	transferred out, but in practice you could also expect to get sharply
	worded review comments.
</ol>
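
<p>
Within these restrictions, correct usage is straightforward, as in
this minimal sketch, where <tt>do_idle_tracing()</tt> is a
hypothetical operation containing an RCU read-side critical section:

<blockquote>
<pre>
 1 /* In the idle loop, where RCU is normally not watching. */
 2 RCU_NONIDLE(do_idle_tracing());
</pre>
</blockquote>

<p>
The <tt>RCU_NONIDLE()</tt> macro tells RCU to consider this CPU to be
non-idle for the duration of its argument, after which the CPU
reverts to being idle from RCU's viewpoint.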

<p>
It is similarly socially unacceptable to interrupt an
<tt>nohz_full</tt> CPU running in userspace.
RCU must therefore track <tt>nohz_full</tt> userspace
execution.
In particular, RCU must be able to sample state at two points in
time, and be able to determine whether or not some other CPU spent
any time idle and/or executing in userspace.

<p>
These energy-efficiency requirements have proven quite difficult to
understand and to meet; for example, there have been more than five
clean-sheet rewrites of RCU's energy-efficiency code, the last of
which was finally able to demonstrate
<a href="http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf">real energy savings running on real hardware [PDF]</a>.
As noted earlier,
I learned of many of these requirements via angry phone calls:
Flaming me on the Linux-kernel mailing list was apparently not
sufficient to fully vent their ire at RCU's energy-efficiency bugs!

<h3><a name="Scheduling-Clock Interrupts and RCU">
Scheduling-Clock Interrupts and RCU</a></h3>

<p>
The kernel transitions between in-kernel non-idle execution, userspace
execution, and the idle loop.
Depending on kernel configuration, RCU handles these states differently:

<table border=3>
<tr><th><tt>HZ</tt> Kconfig</th>
	<th>In-Kernel</th>
	<th>Usermode</th>
	<th>Idle</th></tr>
<tr><th align="left"><tt>HZ_PERIODIC</tt></th>
	<td>Can rely on scheduling-clock interrupt.</td>
	<td>Can rely on scheduling-clock interrupt and its
	detection of interrupt from usermode.</td>
	<td>Can rely on RCU's dyntick-idle detection.</td></tr>
<tr><th align="left"><tt>NO_HZ_IDLE</tt></th>
	<td>Can rely on scheduling-clock interrupt.</td>
	<td>Can rely on scheduling-clock interrupt and its
	detection of interrupt from usermode.</td>
	<td>Can rely on RCU's dyntick-idle detection.</td></tr>
<tr><th align="left"><tt>NO_HZ_FULL</tt></th>
	<td>Can only sometimes rely on scheduling-clock interrupt.
	In other cases, it is necessary to bound kernel execution
	times and/or use IPIs.</td>
	<td>Can rely on RCU's dyntick-idle detection.</td>
	<td>Can rely on RCU's dyntick-idle detection.</td></tr>
</table>

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	Why can't <tt>NO_HZ_FULL</tt> in-kernel execution rely on the
	scheduling-clock interrupt, just like <tt>HZ_PERIODIC</tt>
	and <tt>NO_HZ_IDLE</tt> do?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	Because, as a performance optimization, <tt>NO_HZ_FULL</tt>
	does not necessarily re-enable the scheduling-clock interrupt
	on entry to each and every system call.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
However, RCU must be reliably informed as to whether any given
CPU is currently in the idle loop, and, for <tt>NO_HZ_FULL</tt>,
also whether that CPU is executing in usermode, as discussed
<a href="#Energy Efficiency">earlier</a>.
It also requires that the scheduling-clock interrupt be enabled when
RCU needs it to be:

<ol>
<li>	If a CPU is either idle or executing in usermode, and RCU believes
	it is non-idle, the scheduling-clock tick had better be running.
	Otherwise, you will get RCU CPU stall warnings.  Or at best,
	very long (11-second) grace periods, with a pointless IPI waking
	the CPU from time to time.
<li>	If a CPU is in a portion of the kernel that executes RCU read-side
	critical sections, and RCU believes this CPU to be idle, you will get
	random memory corruption.  <b>DON'T DO THIS!!!</b>

	<br>This is one reason to test with lockdep, which will complain
	about this sort of thing.
<li>	If a CPU is in a portion of the kernel that is absolutely
	positively no-joking guaranteed to never execute any RCU read-side
	critical sections, and RCU believes this CPU to be idle,
	no problem.  This sort of thing is used by some architectures
	for light-weight exception handlers, which can then avoid the
	overhead of <tt>rcu_irq_enter()</tt> and <tt>rcu_irq_exit()</tt>
	at exception entry and exit, respectively.
	Some go further and avoid the entireties of <tt>irq_enter()</tt>
	and <tt>irq_exit()</tt>.

	<br>Just make very sure you are running some of your tests with
	<tt>CONFIG_PROVE_RCU=y</tt>, just in case one of your code paths
	was in fact joking about not doing RCU read-side critical sections.
<li>	If a CPU is executing in the kernel with the scheduling-clock
	interrupt disabled and RCU believes this CPU to be non-idle,
	and if the CPU goes idle (from an RCU perspective) every few
	jiffies, no problem.  It is usually OK for there to be the
	occasional gap between idle periods of up to a second or so.

	<br>If the gap grows too long, you get RCU CPU stall warnings.
<li>	If a CPU is either idle or executing in usermode, and RCU believes
	it to be idle, of course no problem.
<li>	If a CPU is executing in the kernel, the kernel code
	path is passing through quiescent states at a reasonable
	frequency (preferably about once per few jiffies, but the
	occasional excursion to a second or so is usually OK) and the
	scheduling-clock interrupt is enabled, of course no problem.

	<br>If the gap between a successive pair of quiescent states grows
	too long, you get RCU CPU stall warnings.
</ol>

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But what if my driver has a hardware interrupt handler
	that can run for many seconds?
	I cannot invoke <tt>schedule()</tt> from a hardware
	interrupt handler, after all!
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	One approach is to do <tt>rcu_irq_exit();rcu_irq_enter();</tt>
	every so often.
	But given that long-running interrupt handlers can cause
	other problems, not least for response time, shouldn't you
	work to keep your interrupt handler's runtime within reasonable
	bounds?
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
But as long as RCU is properly informed of kernel state transitions between
in-kernel execution, usermode execution, and idle, and as long as the
scheduling-clock interrupt is enabled when RCU needs it to be, you
can rest assured that the bugs you encounter will be in some other
part of RCU or some other part of the kernel!

<h3><a name="Memory Efficiency">Memory Efficiency</a></h3>

<p>
Although small-memory non-realtime systems can simply use Tiny RCU,
code size is only one aspect of memory efficiency.
Another aspect is the size of the <tt>rcu_head</tt> structure
used by <tt>call_rcu()</tt> and <tt>kfree_rcu()</tt>.
Although this structure contains nothing more than a pair of pointers,
it does appear in many RCU-protected data structures, including
some that are size critical.
The <tt>page</tt> structure is a case in point, as evidenced by
the many occurrences of the <tt>union</tt> keyword within that structure.

<p>
This need for memory efficiency is one reason that RCU uses hand-crafted
singly linked lists to track the <tt>rcu_head</tt> structures that
are waiting for a grace period to elapse.
It is also the reason why <tt>rcu_head</tt> structures do not contain
debug information, such as fields tracking the file and line of the
<tt>call_rcu()</tt> or <tt>kfree_rcu()</tt> that posted them.
Although this information might appear in debug-only kernel builds at some
point, in the meantime, the <tt>-&gt;func</tt> field will often provide
the needed debug information.

<p>
However, in some cases, the need for memory efficiency leads to even
more extreme measures.
Returning to the <tt>page</tt> structure, the <tt>rcu_head</tt> field
shares storage with a great many other structures that are used at
various points in the corresponding page's lifetime.
In order to correctly resolve certain
<a href="https://lkml.kernel.org/g/1439976106-137226-1-git-send-email-kirill.shutemov@linux.intel.com">race conditions</a>,
the Linux kernel's memory-management subsystem needs a particular bit
to remain zero during all phases of grace-period processing,
and that bit happens to map to the bottom bit of the
<tt>rcu_head</tt> structure's <tt>-&gt;next</tt> field.
RCU makes this guarantee as long as <tt>call_rcu()</tt>
is used to post the callback, as opposed to <tt>kfree_rcu()</tt>
or some future &ldquo;lazy&rdquo;
variant of <tt>call_rcu()</tt> that might one day be created for
energy-efficiency purposes.
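
<p>
This sort of storage sharing is conventionally expressed with a
union, along the lines of the following sketch (the
<tt>struct foo</tt> and its fields are hypothetical):

<blockquote>
<pre>
 1 struct foo {
 2   union {
 3     struct list_head list;  /* Used while the element is live. */
 4     struct rcu_head rh;     /* Reused after removal, for call_rcu(). */
 5   };
 6 };
</pre>
</blockquote>

<p>
Note that <tt>call_rcu()</tt> overwrites the overlaid fields, so this
trick is safe only if no RCU reader can still be traversing the
<tt>-&gt;list</tt> field at <tt>call_rcu()</tt> time, which is one
reason that the <tt>page</tt> structure's use of this approach
requires such careful handling.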
2707
ed2bec07
PM
<p>
That said, there are limits.
RCU requires that the <tt>rcu_head</tt> structure be aligned to a
two-byte boundary, and passing a misaligned <tt>rcu_head</tt>
structure to one of the <tt>call_rcu()</tt> family of functions
will result in a splat.
It is therefore necessary to exercise caution when packing
structures containing fields of type <tt>rcu_head</tt>.
Why not a four-byte or even eight-byte alignment requirement?
Because the m68k architecture provides only two-byte alignment,
and thus acts as alignment's least common denominator.

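<p>
For example, here is a minimal sketch of the usual pattern for embedding
an <tt>rcu_head</tt> structure and posting a callback with
<tt>call_rcu()</tt>; the <tt>struct foo</tt> type and its helper
functions are hypothetical:

<blockquote>
<pre>
struct foo {
	int a;
	struct rcu_head rh;  /* must stay at least two-byte aligned */
};

static void foo_reclaim(struct rcu_head *rhp)
{
	struct foo *fp = container_of(rhp, struct foo, rh);

	kfree(fp);
}

static void foo_remove(struct foo *fp)
{
	/* ... unlink fp from all RCU-protected structures ... */
	call_rcu(&amp;fp-&gt;rh, foo_reclaim);  /* free after a grace period */
}
</pre>
</blockquote>
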
<p>
The reason for reserving the bottom bit of pointers to
<tt>rcu_head</tt> structures is to leave the door open to
&ldquo;lazy&rdquo; callbacks whose invocations can safely be deferred.
Deferring invocation could potentially have energy-efficiency
benefits, but only if the rate of non-lazy callbacks decreases
significantly for some important workload.
In the meantime, reserving the bottom bit keeps this option open
in case it one day becomes useful.

<h3><a name="Performance, Scalability, Response Time, and Reliability">
Performance, Scalability, Response Time, and Reliability</a></h3>

<p>
Expanding on the
<a href="#Performance and Scalability">earlier discussion</a>,
RCU is used heavily on hot code paths in performance-critical
portions of the Linux kernel's networking, security, virtualization,
and scheduling subsystems.
RCU must therefore use efficient implementations, especially in its
read-side primitives.
To that end, it would be good if preemptible RCU's implementation
of <tt>rcu_read_lock()</tt> could be inlined; however, doing
so requires resolving <tt>#include</tt> issues with the
<tt>task_struct</tt> structure.

<p>
The Linux kernel supports hardware configurations with up to
4096 CPUs, which means that RCU must be extremely scalable.
Algorithms that involve frequent acquisitions of global locks or
frequent atomic operations on global variables simply cannot be
tolerated within the RCU implementation.
RCU therefore makes heavy use of a combining tree based on the
<tt>rcu_node</tt> structure.
RCU is required to tolerate all CPUs continuously invoking any
combination of RCU's runtime primitives with minimal per-operation
overhead.
In fact, in many cases, increasing load must <i>decrease</i> the
per-operation overhead; witness the batching optimizations for
<tt>synchronize_rcu()</tt>, <tt>call_rcu()</tt>,
<tt>synchronize_rcu_expedited()</tt>, and <tt>rcu_barrier()</tt>.
As a general rule, RCU must cheerfully accept whatever the
rest of the Linux kernel decides to throw at it.

<p>
The Linux kernel is used for real-time workloads, especially
in conjunction with the
<a href="https://rt.wiki.kernel.org/index.php/Main_Page">-rt patchset</a>.
The real-time-latency response requirements are such that the
traditional approach of disabling preemption across RCU
read-side critical sections is inappropriate.
Kernels built with <tt>CONFIG_PREEMPT=y</tt> therefore
use an RCU implementation that allows RCU read-side critical
sections to be preempted.
This requirement made its presence known after users made it
clear that an earlier
<a href="https://lwn.net/Articles/107930/">real-time patch</a>
did not meet their needs, in conjunction with some
<a href="https://lkml.kernel.org/g/20050318002026.GA2693@us.ibm.com">RCU issues</a>
encountered by a very early version of the -rt patchset.

<p>
In addition, RCU must make do with a sub-100-microsecond real-time latency
budget.
In fact, on smaller systems with the -rt patchset, the Linux kernel
provides sub-20-microsecond real-time latencies for the whole kernel,
including RCU.
RCU's scalability and latency must therefore be sufficient for
these sorts of configurations.
To my surprise, the sub-100-microsecond real-time latency budget
<a href="http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf">
applies to even the largest systems [PDF]</a>,
up to and including systems with 4096 CPUs.
This real-time requirement motivated the grace-period kthread, which
also simplified handling of a number of race conditions.

<p>
RCU must avoid degrading real-time response for CPU-bound threads, whether
executing in usermode (which is one use case for
<tt>CONFIG_NO_HZ_FULL=y</tt>) or in the kernel.
That said, CPU-bound loops in the kernel must execute
<tt>cond_resched()</tt> at least once every few tens of milliseconds
in order to avoid receiving an IPI from RCU.

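<p>
For example, a long-running in-kernel loop might be structured as in the
following sketch, in which <tt>struct work_item</tt> and
<tt>do_unit_of_work()</tt> are hypothetical stand-ins for the actual work:

<blockquote>
<pre>
struct work_item { int payload; };  /* hypothetical work descriptor */

static void do_unit_of_work(struct work_item *w)
{
	/* ... CPU-bound processing of a single item ... */
}

static void process_many_items(struct work_item *items, int n)
{
	int i;

	for (i = 0; i &lt; n; i++) {
		do_unit_of_work(&amp;items[i]);
		cond_resched();  /* provide a quiescent state, avoiding RCU IPIs */
	}
}
</pre>
</blockquote>
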
<p>
Finally, RCU's status as a synchronization primitive means that
any RCU failure can result in arbitrary memory corruption that can be
extremely difficult to debug.
This means that RCU must be extremely reliable, which in
practice also means that RCU must have an aggressive stress-test
suite.
This stress-test suite is called <tt>rcutorture</tt>.

<p>
Although the need for <tt>rcutorture</tt> was no surprise,
the current immense popularity of the Linux kernel is posing
interesting&mdash;and perhaps unprecedented&mdash;validation
challenges.
To see this, keep in mind that there are well over one billion
instances of the Linux kernel running today, given Android
smartphones, Linux-powered televisions, and servers.
This number can be expected to increase sharply with the advent of
the celebrated Internet of Things.

<p>
Suppose that RCU contains a race condition that manifests on average
once per million years of runtime.
This bug will be occurring about three times per <i>day</i> across
the installed base.

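<p>
The arithmetic behind this estimate, assuming roughly one billion
running instances, is as follows:

<blockquote>
<pre>
(10^9 instances) x (1 failure per 10^6 instance-years)
	 = 10^3 failures/year
	 = 1000/365 failures/day
	~= 2.7 failures/day
</pre>
</blockquote>
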
<p>
RCU could simply hide behind hardware error rates, given that no one
should really expect their smartphone to last for a million years.
However, anyone taking too much comfort from this thought should
consider the fact that in most jurisdictions, a successful multi-year
test of a given mechanism, which might include a Linux kernel,
suffices for a number of types of safety-critical certifications.
In fact, rumor has it that the Linux kernel is already being used
in production for safety-critical applications.
I don't know about you, but I would feel quite bad if a bug in RCU
killed someone.
Which might explain my recent focus on validation and verification.

<h2><a name="Other RCU Flavors">Other RCU Flavors</a></h2>

<p>
One of the more surprising things about RCU is that there are now
no fewer than five <i>flavors</i>, or API families.
In addition, the primary flavor that has been the sole focus up to
this point has two different implementations, non-preemptible and
preemptible.
The other four flavors are listed below, with requirements for each
described in a separate section.

<ol>
<li>	<a href="#Bottom-Half Flavor">Bottom-Half Flavor</a>
<li>	<a href="#Sched Flavor">Sched Flavor</a>
<li>	<a href="#Sleepable RCU">Sleepable RCU</a>
<li>	<a href="#Tasks RCU">Tasks RCU</a>
<li>	<a href="#Waiting for Multiple Grace Periods">
	Waiting for Multiple Grace Periods</a>
</ol>

<h3><a name="Bottom-Half Flavor">Bottom-Half Flavor</a></h3>

<p>
The softirq-disable (AKA &ldquo;bottom-half&rdquo;,
hence the &ldquo;_bh&rdquo; abbreviations)
flavor of RCU, or <i>RCU-bh</i>, was developed by
Dipankar Sarma to provide a flavor of RCU that could withstand the
network-based denial-of-service attacks researched by Robert
Olsson.
These attacks placed so much networking load on the system
that some of the CPUs never exited softirq execution,
which in turn prevented those CPUs from ever executing a context switch,
which, in the RCU implementation of that time, prevented grace periods
from ever ending.
The result was an out-of-memory condition and a system hang.

<p>
The solution was the creation of RCU-bh, which does
<tt>local_bh_disable()</tt>
across its read-side critical sections, and which uses the transition
from one type of softirq processing to another as a quiescent state
in addition to context switch, idle, user mode, and offline.
This means that RCU-bh grace periods can complete even when some of
the CPUs execute in softirq indefinitely, thus allowing algorithms
based on RCU-bh to withstand network-based denial-of-service attacks.

<p>
Because
<tt>rcu_read_lock_bh()</tt> and <tt>rcu_read_unlock_bh()</tt>
disable and re-enable softirq handlers, any attempt to start a softirq
handler during the
RCU-bh read-side critical section will be deferred.
In this case, <tt>rcu_read_unlock_bh()</tt>
will invoke softirq processing, which can take considerable time.
One can of course argue that this softirq overhead should be associated
with the code following the RCU-bh read-side critical section rather
than <tt>rcu_read_unlock_bh()</tt>, but the fact
is that most profiling tools cannot be expected to make this sort
of fine distinction.
For example, suppose that a three-millisecond-long RCU-bh read-side
critical section executes during a time of heavy networking load.
There will very likely be an attempt to invoke at least one softirq
handler during that three milliseconds, but any such invocation will
be delayed until the time of the <tt>rcu_read_unlock_bh()</tt>.
This can of course make it appear at first glance as if
<tt>rcu_read_unlock_bh()</tt> were executing very slowly.

<p>
The
<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">RCU-bh API</a>
includes
<tt>rcu_read_lock_bh()</tt>,
<tt>rcu_read_unlock_bh()</tt>,
<tt>rcu_dereference_bh()</tt>,
<tt>rcu_dereference_bh_check()</tt>,
<tt>synchronize_rcu_bh()</tt>,
<tt>synchronize_rcu_bh_expedited()</tt>,
<tt>call_rcu_bh()</tt>,
<tt>rcu_barrier_bh()</tt>, and
<tt>rcu_read_lock_bh_held()</tt>.

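<p>
A minimal sketch of an RCU-bh reader follows; the <tt>struct foo</tt>
type and the <tt>gp</tt> pointer are hypothetical:

<blockquote>
<pre>
struct foo { int a; };
static struct foo __rcu *gp;  /* hypothetical RCU-bh-protected pointer */

static int reader_bh(void)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock_bh();  /* also disables softirq handlers */
	p = rcu_dereference_bh(gp);
	if (p)
		ret = p-&gt;a;
	rcu_read_unlock_bh();  /* may run deferred softirq work */
	return ret;
}
</pre>
</blockquote>
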
<h3><a name="Sched Flavor">Sched Flavor</a></h3>

<p>
Before preemptible RCU, waiting for an RCU grace period had the
side effect of also waiting for all pre-existing interrupt
and NMI handlers.
However, there are legitimate preemptible-RCU implementations that
do not have this property, given that any point in the code outside
of an RCU read-side critical section can be a quiescent state.
Therefore, <i>RCU-sched</i> was created, which follows &ldquo;classic&rdquo;
RCU in that an RCU-sched grace period waits for pre-existing
interrupt and NMI handlers.
In kernels built with <tt>CONFIG_PREEMPT=n</tt>, the RCU and RCU-sched
APIs have identical implementations, while kernels built with
<tt>CONFIG_PREEMPT=y</tt> provide a separate implementation for each.

<p>
Note well that in <tt>CONFIG_PREEMPT=y</tt> kernels,
<tt>rcu_read_lock_sched()</tt> and <tt>rcu_read_unlock_sched()</tt>
disable and re-enable preemption, respectively.
This means that if there was a preemption attempt during the
RCU-sched read-side critical section, <tt>rcu_read_unlock_sched()</tt>
will enter the scheduler, with all the latency and overhead entailed.
Just as with <tt>rcu_read_unlock_bh()</tt>, this can make it look
as if <tt>rcu_read_unlock_sched()</tt> were executing very slowly.
However, the highest-priority task won't be preempted, so that task
will enjoy low-overhead <tt>rcu_read_unlock_sched()</tt> invocations.

<p>
The
<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">RCU-sched API</a>
includes
<tt>rcu_read_lock_sched()</tt>,
<tt>rcu_read_unlock_sched()</tt>,
<tt>rcu_read_lock_sched_notrace()</tt>,
<tt>rcu_read_unlock_sched_notrace()</tt>,
<tt>rcu_dereference_sched()</tt>,
<tt>rcu_dereference_sched_check()</tt>,
<tt>synchronize_sched()</tt>,
<tt>synchronize_rcu_sched_expedited()</tt>,
<tt>call_rcu_sched()</tt>,
<tt>rcu_barrier_sched()</tt>, and
<tt>rcu_read_lock_sched_held()</tt>.
However, anything that disables preemption also marks an RCU-sched
read-side critical section, including
<tt>preempt_disable()</tt> and <tt>preempt_enable()</tt>,
<tt>local_irq_save()</tt> and <tt>local_irq_restore()</tt>,
and so on.

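<p>
The following sketch, again using the hypothetical <tt>struct foo</tt>
and <tt>gp</tt> from the RCU-bh example, illustrates an RCU-sched
reader:

<blockquote>
<pre>
static int reader_sched(void)
{
	struct foo *p;
	int ret = -1;

	rcu_read_lock_sched();  /* disables preemption */
	p = rcu_dereference_sched(gp);
	if (p)
		ret = p-&gt;a;
	rcu_read_unlock_sched();  /* may enter the scheduler */
	return ret;
}
</pre>
</blockquote>
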
<h3><a name="Sleepable RCU">Sleepable RCU</a></h3>

<p>
For well over a decade, someone saying &ldquo;I need to block within
an RCU read-side critical section&rdquo; was a reliable indication
that this someone did not understand RCU.
After all, if you are always blocking in an RCU read-side critical
section, you can probably afford to use a higher-overhead synchronization
mechanism.
However, that changed with the advent of the Linux kernel's notifiers,
whose RCU read-side critical
sections almost never sleep, but sometimes need to.
This resulted in the introduction of
<a href="https://lwn.net/Articles/202847/">sleepable RCU</a>,
or <i>SRCU</i>.

<p>
SRCU allows different domains to be defined, with each such domain
defined by an instance of an <tt>srcu_struct</tt> structure.
A pointer to this structure must be passed in to each SRCU function,
for example, <tt>synchronize_srcu(&amp;ss)</tt>, where
<tt>ss</tt> is the <tt>srcu_struct</tt> structure.
The key benefit of these domains is that a slow SRCU reader in one
domain does not delay an SRCU grace period in some other domain.
That said, one consequence of these domains is that read-side code
must pass a &ldquo;cookie&rdquo; from <tt>srcu_read_lock()</tt>
to <tt>srcu_read_unlock()</tt>, for example, as follows:

<blockquote>
<pre>
 1 int idx;
 2
 3 idx = srcu_read_lock(&amp;ss);
 4 do_something();
 5 srcu_read_unlock(&amp;ss, idx);
</pre>
</blockquote>

<p>
As noted above, it is legal to block within SRCU read-side critical
sections; however, with great power comes great responsibility.
If you block forever in one of a given domain's SRCU read-side critical
sections, then that domain's grace periods will also be blocked forever.
Of course, one good way to block forever is to deadlock, which can
happen if any operation in a given domain's SRCU read-side critical
section can block waiting, either directly or indirectly, for that domain's
grace period to elapse.
For example, this results in a self-deadlock:

<blockquote>
<pre>
 1 int idx;
 2
 3 idx = srcu_read_lock(&amp;ss);
 4 do_something();
 5 synchronize_srcu(&amp;ss);
 6 srcu_read_unlock(&amp;ss, idx);
</pre>
</blockquote>

<p>
However, if line&nbsp;5 instead acquired a mutex that was held across
a <tt>synchronize_srcu()</tt> for domain <tt>ss</tt>,
deadlock would still be possible.
Furthermore, if line&nbsp;5 acquired a mutex that was held across
a <tt>synchronize_srcu()</tt> for some other domain <tt>ss1</tt>,
and if an <tt>ss1</tt>-domain SRCU read-side critical section
acquired another mutex that was held across an <tt>ss</tt>-domain
<tt>synchronize_srcu()</tt>,
deadlock would again be possible.
Such a deadlock cycle could extend across an arbitrarily large number
of different SRCU domains.
Again, with great power comes great responsibility.

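<p>
The simplest such case can be sketched as follows, with <tt>m</tt>
being a hypothetical mutex:

<blockquote>
<pre>
/* Thread A: an ss-domain reader that blocks on mutex m. */
idx = srcu_read_lock(&amp;ss);
mutex_lock(&amp;m);  /* blocks: Thread B holds m */
do_something();
mutex_unlock(&amp;m);
srcu_read_unlock(&amp;ss, idx);

/* Thread B: holds m across an ss-domain grace period. */
mutex_lock(&amp;m);
synchronize_srcu(&amp;ss);  /* waits on Thread A, which waits on m */
mutex_unlock(&amp;m);
</pre>
</blockquote>
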
<p>
Unlike the other RCU flavors, SRCU read-side critical sections can
run on idle and even offline CPUs.
This ability requires that <tt>srcu_read_lock()</tt> and
<tt>srcu_read_unlock()</tt> contain memory barriers, which means
that SRCU readers will run a bit slower than would RCU readers.
It also motivates the <tt>smp_mb__after_srcu_read_unlock()</tt>
API, which, in combination with <tt>srcu_read_unlock()</tt>,
guarantees a full memory barrier.

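<p>
For example, the following sketch guarantees that the store to the
hypothetical flag <tt>done</tt> is ordered after everything preceding
the <tt>srcu_read_unlock()</tt>:

<blockquote>
<pre>
idx = srcu_read_lock(&amp;ss);
do_something();
srcu_read_unlock(&amp;ss, idx);
smp_mb__after_srcu_read_unlock();  /* with the unlock, a full barrier */
WRITE_ONCE(done, 1);
</pre>
</blockquote>
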
<p>
Also unlike other RCU flavors, SRCU's callbacks-wait function
<tt>srcu_barrier()</tt> may be invoked from CPU-hotplug notifiers,
though this is not necessarily a good idea.
The reason that this is possible is that SRCU is insensitive
to whether or not a CPU is online, which means that <tt>srcu_barrier()</tt>
need not exclude CPU-hotplug operations.

<p>
SRCU also differs from other RCU flavors in that SRCU's expedited and
non-expedited grace periods are implemented by the same mechanism.
This means that in the current SRCU implementation, expediting a
future grace period has the side effect of expediting all prior
grace periods that have not yet completed.
(But please note that this is a property of the current implementation,
not necessarily of future implementations.)
In addition, if SRCU has been idle for longer than the interval
specified by the <tt>srcutree.exp_holdoff</tt> kernel boot parameter
(25&nbsp;microseconds by default),
and if a <tt>synchronize_srcu()</tt> invocation ends this idle period,
that invocation will be automatically expedited.

<p>
As of v4.12, SRCU's callbacks are maintained per-CPU, eliminating
a locking bottleneck present in prior kernel versions.
Although this will allow users to put much heavier stress on
<tt>call_srcu()</tt>, it is important to note that SRCU does not
yet take any special steps to deal with callback flooding.
So if you are posting (say) 10,000 SRCU callbacks per second per CPU,
you are probably totally OK, but if you intend to post (say) 1,000,000
SRCU callbacks per second per CPU, please run some tests first.
SRCU just might need a few adjustments to deal with that sort of load.
Of course, your mileage may vary based on the speed of your CPUs and
the size of your memory.

<p>
The
<a href="https://lwn.net/Articles/609973/#RCU Per-Flavor API Table">SRCU API</a>
includes
<tt>srcu_read_lock()</tt>,
<tt>srcu_read_unlock()</tt>,
<tt>srcu_dereference()</tt>,
<tt>srcu_dereference_check()</tt>,
<tt>synchronize_srcu()</tt>,
<tt>synchronize_srcu_expedited()</tt>,
<tt>call_srcu()</tt>,
<tt>srcu_barrier()</tt>, and
<tt>srcu_read_lock_held()</tt>.
It also includes
<tt>DEFINE_SRCU()</tt>,
<tt>DEFINE_STATIC_SRCU()</tt>, and
<tt>init_srcu_struct()</tt>
APIs for defining and initializing <tt>srcu_struct</tt> structures.

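<p>
Putting the pieces together, here is a minimal sketch of an SRCU
domain with one reader and one updater; the
<tt>remove_element()</tt> and <tt>free_element()</tt> helpers are
hypothetical:

<blockquote>
<pre>
DEFINE_STATIC_SRCU(my_srcu);  /* file-scope SRCU domain */

static void reader(void)
{
	int idx;

	idx = srcu_read_lock(&amp;my_srcu);
	do_something();  /* may block, unlike plain RCU */
	srcu_read_unlock(&amp;my_srcu, idx);
}

static void updater(void)
{
	remove_element();            /* unlink from readers' view */
	synchronize_srcu(&amp;my_srcu);  /* wait only on my_srcu readers */
	free_element();              /* now safe to reclaim */
}
</pre>
</blockquote>
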
<h3><a name="Tasks RCU">Tasks RCU</a></h3>

<p>
Some forms of tracing use &ldquo;trampolines&rdquo; to handle the
binary rewriting required to install different types of probes.
It would be good to be able to free old trampolines, which sounds
like a job for some form of RCU.
However, because it is necessary to be able to install a trace
anywhere in the code, it is not possible to use read-side markers
such as <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>.
In addition, it does not work to have these markers in the trampoline
itself, because there would need to be instructions following
<tt>rcu_read_unlock()</tt>.
Although <tt>synchronize_rcu()</tt> would guarantee that execution
reached the <tt>rcu_read_unlock()</tt>, it would not be able to
guarantee that execution had completely left the trampoline.

<p>
The solution, in the form of
<a href="https://lwn.net/Articles/607117/"><i>Tasks RCU</i></a>,
is to have implicit
read-side critical sections that are delimited by voluntary context
switches, that is, calls to <tt>schedule()</tt>,
<tt>cond_resched()</tt>, and
<tt>synchronize_rcu_tasks()</tt>.
In addition, transitions to and from userspace execution also delimit
tasks-RCU read-side critical sections.

<p>
The tasks-RCU API is quite compact, consisting only of
<tt>call_rcu_tasks()</tt>,
<tt>synchronize_rcu_tasks()</tt>, and
<tt>rcu_barrier_tasks()</tt>.

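<p>
For example, a tracing subsystem might retire an old trampoline as in
the following sketch, in which <tt>struct trampoline</tt>,
<tt>unregister_probe()</tt>, and <tt>free_executable_memory()</tt>
are hypothetical:

<blockquote>
<pre>
struct trampoline {
	void *text;          /* executable trampoline code */
	struct rcu_head rh;
};

static void free_trampoline_cb(struct rcu_head *rhp)
{
	struct trampoline *tp = container_of(rhp, struct trampoline, rh);

	free_executable_memory(tp);
}

static void retire_trampoline(struct trampoline *tp)
{
	unregister_probe(tp);  /* no new entries into tp */
	call_rcu_tasks(&amp;tp-&gt;rh, free_trampoline_cb);  /* wait for all tasks to leave */
}
</pre>
</blockquote>
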
<h3><a name="Waiting for Multiple Grace Periods">
Waiting for Multiple Grace Periods</a></h3>

<p>
Perhaps you have an RCU-protected data structure that is accessed from
RCU read-side critical sections, from softirq handlers, and from
hardware interrupt handlers.
That is three flavors of RCU: the normal flavor, the bottom-half flavor,
and the sched flavor.
How to wait for a compound grace period?

<p>
The best approach is usually to &ldquo;just say no!&rdquo; and
insert <tt>rcu_read_lock()</tt> and <tt>rcu_read_unlock()</tt>
around each RCU read-side critical section, regardless of what
environment it happens to be in.
But suppose that some of the RCU read-side critical sections are
on extremely hot code paths, and that use of <tt>CONFIG_PREEMPT=n</tt>
is not a viable option, so that <tt>rcu_read_lock()</tt> and
<tt>rcu_read_unlock()</tt> are not free.
What then?

<p>
You <i>could</i> wait on all three grace periods in succession, as follows:

<blockquote>
<pre>
 1 synchronize_rcu();
 2 synchronize_rcu_bh();
 3 synchronize_sched();
</pre>
</blockquote>

<p>
This works, but triples the update-side latency penalty.
In cases where this is not acceptable, <tt>synchronize_rcu_mult()</tt>
may be used to wait on all three flavors of grace period concurrently:

<blockquote>
<pre>
 1 synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched);
</pre>
</blockquote>

<p>
But what if it is necessary to also wait on SRCU?
This can be done as follows:

<blockquote>
<pre>
 1 static void call_my_srcu(struct rcu_head *head,
 2        void (*func)(struct rcu_head *head))
 3 {
 4   call_srcu(&amp;my_srcu, head, func);
 5 }
 6
 7 synchronize_rcu_mult(call_rcu, call_rcu_bh, call_rcu_sched, call_my_srcu);
</pre>
</blockquote>

<p>
If you needed to wait on multiple different flavors of SRCU
(but why???), you would need to create a wrapper function resembling
<tt>call_my_srcu()</tt> for each SRCU flavor.

<table>
<tr><th>&nbsp;</th></tr>
<tr><th align="left">Quick Quiz:</th></tr>
<tr><td>
	But what if I need to wait for multiple RCU flavors, but I also need
	the grace periods to be expedited?
</td></tr>
<tr><th align="left">Answer:</th></tr>
<tr><td bgcolor="#ffffff"><font color="ffffff">
	If you are using expedited grace periods, there should be less penalty
	for waiting on them in succession.
	But if that is nevertheless a problem, you can use workqueues
	or multiple kthreads to wait on the various expedited grace
	periods concurrently.
</font></td></tr>
<tr><td>&nbsp;</td></tr>
</table>

<p>
Again, it is usually better to adjust the RCU read-side critical sections
to use a single flavor of RCU, but when this is not feasible, you can use
<tt>synchronize_rcu_mult()</tt>.

<h2><a name="Possible Future Changes">Possible Future Changes</a></h2>

<p>
One of the tricks that RCU uses to attain update-side scalability is
to increase grace-period latency with increasing numbers of CPUs.
If this becomes a serious problem, it will be necessary to rework the
grace-period state machine so as to avoid the need for the additional
latency.

<p>
Expedited grace periods scan the CPUs, so their latency and overhead
increase with increasing numbers of CPUs.
If this becomes a serious problem on large systems, it will be necessary
to do some redesign to avoid this scalability problem.

<p>
RCU disables CPU hotplug in a few places, perhaps most notably in the
<tt>rcu_barrier()</tt> operations.
If there is a strong reason to use <tt>rcu_barrier()</tt> in CPU-hotplug
notifiers, it will be necessary to avoid disabling CPU hotplug.
This would introduce some complexity, so there had better be a <i>very</i>
good reason.

<p>
The tradeoff between grace-period latency on the one hand and interruptions
of other CPUs on the other hand may need to be re-examined.
The desire is of course for zero grace-period latency as well as zero
interprocessor interrupts incurred during an expedited grace-period
operation.
While this ideal is unlikely to be achievable, it is quite possible that
further improvements can be made.

<p>
The multiprocessor implementations of RCU use a combining tree that
groups CPUs so as to reduce lock contention and increase cache locality.
However, this combining tree does not spread its memory across NUMA
nodes nor does it align the CPU groups with hardware features such
as sockets or cores.
Such spreading and alignment is currently believed to be unnecessary
because the hotpath read-side primitives do not access the combining
tree, nor does <tt>call_rcu()</tt> in the common case.
If you believe that your architecture needs such spreading and alignment,
then your architecture should also benefit from the
<tt>rcutree.rcu_fanout_leaf</tt> boot parameter, which can be set
to the number of CPUs in a socket, NUMA node, or whatever.
If the number of CPUs is too large, use a fraction of the number of
CPUs.
If the number of CPUs is a large prime number, well, that certainly
is an &ldquo;interesting&rdquo; architectural choice!
More flexible arrangements might be considered, but only if
<tt>rcutree.rcu_fanout_leaf</tt> has proven inadequate, and only
if the inadequacy has been demonstrated by a carefully run and
realistic system-level workload.

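<p>
For example, on a hypothetical system having 16&nbsp;CPUs per socket,
the kernel boot command line might include:

<blockquote>
<pre>
rcutree.rcu_fanout_leaf=16
</pre>
</blockquote>
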
<p>
Please note that arrangements that require RCU to remap CPU numbers will
require extremely good demonstration of need and full exploration of
alternatives.

<p>
There is an embarrassingly large number of flavors of RCU, and this
number has been increasing over time.
Perhaps it will be possible to combine some at some future date.

<p>
RCU's various kthreads are reasonably recent additions.
It is quite likely that adjustments will be required to more gracefully
handle extreme loads.
It might also be necessary to be able to relate CPU utilization by
RCU's kthreads and softirq handlers to the code that instigated this
CPU utilization.
For example, RCU callback overhead might be charged back to the
originating <tt>call_rcu()</tt> instance, though probably not
in production kernels.

<h2><a name="Summary">Summary</a></h2>

<p>
This document has presented more than two decades' worth of RCU
requirements.
Given that the requirements keep changing, this will not be the last
word on this subject, but at least it serves to get an important
subset of the requirements set forth.

<h2><a name="Acknowledgments">Acknowledgments</a></h2>

<p>
I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar,
Oleg Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and
Andy Lutomirski for their help in rendering
this article human readable, and to Michelle Rankin for her support
of this effort.
Other contributions are acknowledged in the Linux kernel's git archive.


</body></html>