1=================================
2A Tour Through RCU's Requirements
3=================================
4
5Copyright IBM Corporation, 2015
6
7Author: Paul E. McKenney
8
The initial version of this document appeared in
`LWN <https://lwn.net/>`_ as the following articles:
11`part 1 <https://lwn.net/Articles/652156/>`_,
12`part 2 <https://lwn.net/Articles/652677/>`_, and
13`part 3 <https://lwn.net/Articles/653326/>`_.
14
15Introduction
16------------
17
18Read-copy update (RCU) is a synchronization mechanism that is often used
19as a replacement for reader-writer locking. RCU is unusual in that
20updaters do not block readers, which means that RCU's read-side
21primitives can be exceedingly fast and scalable. In addition, updaters
22can make useful forward progress concurrently with readers. However, all
23this concurrency between RCU readers and updaters does raise the
24question of exactly what RCU readers are doing, which in turn raises the
25question of exactly what RCU's requirements are.
26
27This document therefore summarizes RCU's requirements, and can be
28thought of as an informal, high-level specification for RCU. It is
29important to understand that RCU's specification is primarily empirical
30in nature; in fact, I learned about many of these requirements the hard
way. This situation might cause some consternation. However, not only
32has this learning process been a lot of fun, but it has also been a
33great privilege to work with so many people willing to apply
34technologies in interesting new ways.
35
36All that aside, here are the categories of currently known RCU
37requirements:
38
39#. `Fundamental Requirements`_
40#. `Fundamental Non-Requirements`_
41#. `Parallelism Facts of Life`_
42#. `Quality-of-Implementation Requirements`_
43#. `Linux Kernel Complications`_
44#. `Software-Engineering Requirements`_
45#. `Other RCU Flavors`_
46#. `Possible Future Changes`_
47
This is followed by a summary_. Note that the answer to each
quick quiz immediately follows the quiz itself.
51
52Fundamental Requirements
53------------------------
54
55RCU's fundamental requirements are the closest thing RCU has to hard
56mathematical requirements. These are:
57
58#. `Grace-Period Guarantee`_
59#. `Publish/Subscribe Guarantee`_
60#. `Memory-Barrier Guarantees`_
61#. `RCU Primitives Guaranteed to Execute Unconditionally`_
62#. `Guaranteed Read-to-Write Upgrade`_
63
64Grace-Period Guarantee
65~~~~~~~~~~~~~~~~~~~~~~
66
67RCU's grace-period guarantee is unusual in being premeditated: Jack
68Slingwine and I had this guarantee firmly in mind when we started work
69on RCU (then called “rclock”) in the early 1990s. That said, the past
70two decades of experience with RCU have produced a much more detailed
71understanding of this guarantee.
72
73RCU's grace-period guarantee allows updaters to wait for the completion
74of all pre-existing RCU read-side critical sections. An RCU read-side
75critical section begins with the marker rcu_read_lock() and ends
76with the marker rcu_read_unlock(). These markers may be nested, and
77RCU treats a nested set as one big RCU read-side critical section.
78Production-quality implementations of rcu_read_lock() and
79rcu_read_unlock() are extremely lightweight, and in fact have
80exactly zero overhead in Linux kernels built for production use with
81``CONFIG_PREEMPTION=n``.
82
83This guarantee allows ordering to be enforced with extremely low
84overhead to readers, for example:
85
86   ::
87
88       1 int x, y;
89       2
90       3 void thread0(void)
91       4 {
92       5   rcu_read_lock();
93       6   r1 = READ_ONCE(x);
94       7   r2 = READ_ONCE(y);
95       8   rcu_read_unlock();
96       9 }
97      10
98      11 void thread1(void)
99      12 {
100      13   WRITE_ONCE(x, 1);
101      14   synchronize_rcu();
102      15   WRITE_ONCE(y, 1);
103      16 }
104
105Because the synchronize_rcu() on line 14 waits for all pre-existing
106readers, any instance of thread0() that loads a value of zero from
107``x`` must complete before thread1() stores to ``y``, so that
108instance must also load a value of zero from ``y``. Similarly, any
109instance of thread0() that loads a value of one from ``y`` must have
110started after the synchronize_rcu() started, and must therefore also
111load a value of one from ``x``. Therefore, the outcome:
112
113   ::
114
115      (r1 == 0 && r2 == 1)
116
117cannot happen.
118
119+-----------------------------------------------------------------------+
120| **Quick Quiz**:                                                       |
121+-----------------------------------------------------------------------+
122| Wait a minute! You said that updaters can make useful forward         |
123| progress concurrently with readers, but pre-existing readers will     |
124| block synchronize_rcu()!!!                                            |
125| Just who are you trying to fool???                                    |
126+-----------------------------------------------------------------------+
127| **Answer**:                                                           |
128+-----------------------------------------------------------------------+
129| First, if updaters do not wish to be blocked by readers, they can use |
130| call_rcu() or kfree_rcu(), which will be discussed later.             |
131| Second, even when using synchronize_rcu(), the other update-side      |
132| code does run concurrently with readers, whether pre-existing or not. |
133+-----------------------------------------------------------------------+
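
The call_rcu() API mentioned in this answer invokes a specified
function after a grace period has elapsed, allowing the updater to
proceed immediately. The following is a minimal sketch of this
pattern, assuming a hypothetical ``struct foo`` with an embedded
rcu_head; call_rcu() and kfree() are the real Linux-kernel APIs:

   ::

       struct foo {
         int a;
         struct rcu_head rh;
       };

       static void free_foo_cb(struct rcu_head *rhp)
       {
         struct foo *p = container_of(rhp, struct foo, rh);

         kfree(p); /* Runs only after a grace period elapses. */
       }

       void retire_foo(struct foo *p)
       {
         call_rcu(&p->rh, free_foo_cb); /* Returns immediately. */
       }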
134
135This scenario resembles one of the first uses of RCU in
136`DYNIX/ptx <https://en.wikipedia.org/wiki/DYNIX>`__, which managed a
137distributed lock manager's transition into a state suitable for handling
138recovery from node failure, more or less as follows:
139
140   ::
141
142       1 #define STATE_NORMAL        0
143       2 #define STATE_WANT_RECOVERY 1
144       3 #define STATE_RECOVERING    2
145       4 #define STATE_WANT_NORMAL   3
146       5
147       6 int state = STATE_NORMAL;
148       7
149       8 void do_something_dlm(void)
150       9 {
151      10   int state_snap;
152      11
153      12   rcu_read_lock();
154      13   state_snap = READ_ONCE(state);
155      14   if (state_snap == STATE_NORMAL)
156      15     do_something();
157      16   else
158      17     do_something_carefully();
159      18   rcu_read_unlock();
160      19 }
161      20
162      21 void start_recovery(void)
163      22 {
164      23   WRITE_ONCE(state, STATE_WANT_RECOVERY);
165      24   synchronize_rcu();
166      25   WRITE_ONCE(state, STATE_RECOVERING);
167      26   recovery();
168      27   WRITE_ONCE(state, STATE_WANT_NORMAL);
169      28   synchronize_rcu();
170      29   WRITE_ONCE(state, STATE_NORMAL);
171      30 }
172
173The RCU read-side critical section in do_something_dlm() works with
174the synchronize_rcu() in start_recovery() to guarantee that
175do_something() never runs concurrently with recovery(), but with
176little or no synchronization overhead in do_something_dlm().
177
178+-----------------------------------------------------------------------+
179| **Quick Quiz**:                                                       |
180+-----------------------------------------------------------------------+
181| Why is the synchronize_rcu() on line 28 needed?                       |
182+-----------------------------------------------------------------------+
183| **Answer**:                                                           |
184+-----------------------------------------------------------------------+
185| Without that extra grace period, memory reordering could result in    |
186| do_something_dlm() executing do_something() concurrently with         |
187| the last bits of recovery().                                          |
188+-----------------------------------------------------------------------+
189
190In order to avoid fatal problems such as deadlocks, an RCU read-side
191critical section must not contain calls to synchronize_rcu().
192Similarly, an RCU read-side critical section must not contain anything
193that waits, directly or indirectly, on completion of an invocation of
194synchronize_rcu().
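
For example, the following reader sketches the forbidden self-deadlock
pattern (the do_something() calls are placeholders):

   ::

       void buggy_reader(void)
       {
         rcu_read_lock();
         do_something();
         synchronize_rcu(); /* BUG: cannot finish until this reader does. */
         do_something_else();
         rcu_read_unlock();
       }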
195
196Although RCU's grace-period guarantee is useful in and of itself, with
197`quite a few use cases <https://lwn.net/Articles/573497/>`__, it would
198be good to be able to use RCU to coordinate read-side access to linked
199data structures. For this, the grace-period guarantee is not sufficient,
200as can be seen in function add_gp_buggy() below. We will look at the
201reader's code later, but in the meantime, just think of the reader as
202locklessly picking up the ``gp`` pointer, and, if the value loaded is
203non-\ ``NULL``, locklessly accessing the ``->a`` and ``->b`` fields.
204
205   ::
206
207       1 bool add_gp_buggy(int a, int b)
208       2 {
209       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
210       4   if (!p)
211       5     return -ENOMEM;
212       6   spin_lock(&gp_lock);
213       7   if (rcu_access_pointer(gp)) {
214       8     spin_unlock(&gp_lock);
215       9     return false;
216      10   }
217      11   p->a = a;
      12   p->b = b;
219      13   gp = p; /* ORDERING BUG */
220      14   spin_unlock(&gp_lock);
221      15   return true;
222      16 }
223
224The problem is that both the compiler and weakly ordered CPUs are within
225their rights to reorder this code as follows:
226
227   ::
228
229       1 bool add_gp_buggy_optimized(int a, int b)
230       2 {
231       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
232       4   if (!p)
233       5     return -ENOMEM;
234       6   spin_lock(&gp_lock);
235       7   if (rcu_access_pointer(gp)) {
236       8     spin_unlock(&gp_lock);
237       9     return false;
238      10   }
239      11   gp = p; /* ORDERING BUG */
240      12   p->a = a;
      13   p->b = b;
242      14   spin_unlock(&gp_lock);
243      15   return true;
244      16 }
245
246If an RCU reader fetches ``gp`` just after ``add_gp_buggy_optimized``
247executes line 11, it will see garbage in the ``->a`` and ``->b`` fields.
248And this is but one of many ways in which compiler and hardware
249optimizations could cause trouble. Therefore, we clearly need some way
250to prevent the compiler and the CPU from reordering in this manner,
251which brings us to the publish-subscribe guarantee discussed in the next
252section.
253
254Publish/Subscribe Guarantee
255~~~~~~~~~~~~~~~~~~~~~~~~~~~
256
257RCU's publish-subscribe guarantee allows data to be inserted into a
258linked data structure without disrupting RCU readers. The updater uses
259rcu_assign_pointer() to insert the new data, and readers use
260rcu_dereference() to access data, whether new or old. The following
261shows an example of insertion:
262
263   ::
264
265       1 bool add_gp(int a, int b)
266       2 {
267       3   p = kmalloc(sizeof(*p), GFP_KERNEL);
268       4   if (!p)
269       5     return -ENOMEM;
270       6   spin_lock(&gp_lock);
271       7   if (rcu_access_pointer(gp)) {
272       8     spin_unlock(&gp_lock);
273       9     return false;
274      10   }
275      11   p->a = a;
      12   p->b = b;
277      13   rcu_assign_pointer(gp, p);
278      14   spin_unlock(&gp_lock);
279      15   return true;
280      16 }
281
282The rcu_assign_pointer() on line 13 is conceptually equivalent to a
283simple assignment statement, but also guarantees that its assignment
284will happen after the two assignments in lines 11 and 12, similar to the
285C11 ``memory_order_release`` store operation. It also prevents any
286number of “interesting” compiler optimizations, for example, the use of
287``gp`` as a scratch location immediately preceding the assignment.
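
In fact, leaving aside the ``__rcu`` sparse checking and the special
handling of constant ``NULL`` pointers, rcu_assign_pointer() may be
thought of as a release store, roughly as in the following conceptual
sketch (not the actual Linux-kernel implementation):

   ::

       #define my_rcu_assign_pointer(p, v) smp_store_release(&(p), (v))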
288
289+-----------------------------------------------------------------------+
290| **Quick Quiz**:                                                       |
291+-----------------------------------------------------------------------+
292| But rcu_assign_pointer() does nothing to prevent the two              |
293| assignments to ``p->a`` and ``p->b`` from being reordered. Can't that |
294| also cause problems?                                                  |
295+-----------------------------------------------------------------------+
296| **Answer**:                                                           |
297+-----------------------------------------------------------------------+
298| No, it cannot. The readers cannot see either of these two fields      |
299| until the assignment to ``gp``, by which time both fields are fully   |
300| initialized. So reordering the assignments to ``p->a`` and ``p->b``   |
301| cannot possibly cause any problems.                                   |
302+-----------------------------------------------------------------------+
303
304It is tempting to assume that the reader need not do anything special to
305control its accesses to the RCU-protected data, as shown in
306do_something_gp_buggy() below:
307
308   ::
309
310       1 bool do_something_gp_buggy(void)
311       2 {
312       3   rcu_read_lock();
313       4   p = gp;  /* OPTIMIZATIONS GALORE!!! */
314       5   if (p) {
315       6     do_something(p->a, p->b);
316       7     rcu_read_unlock();
317       8     return true;
318       9   }
319      10   rcu_read_unlock();
320      11   return false;
321      12 }
322
323However, this temptation must be resisted because there are a
surprisingly large number of ways that the compiler (or weakly ordered
CPUs such as DEC Alpha) can trip this code up. For but one example, if
326the compiler were short of registers, it might choose to refetch from
327``gp`` rather than keeping a separate copy in ``p`` as follows:
328
329   ::
330
331       1 bool do_something_gp_buggy_optimized(void)
332       2 {
333       3   rcu_read_lock();
334       4   if (gp) { /* OPTIMIZATIONS GALORE!!! */
335       5     do_something(gp->a, gp->b);
336       6     rcu_read_unlock();
337       7     return true;
338       8   }
339       9   rcu_read_unlock();
340      10   return false;
341      11 }
342
343If this function ran concurrently with a series of updates that replaced
344the current structure with a new one, the fetches of ``gp->a`` and
345``gp->b`` might well come from two different structures, which could
346cause serious confusion. To prevent this (and much else besides),
347do_something_gp() uses rcu_dereference() to fetch from ``gp``:
348
349   ::
350
351       1 bool do_something_gp(void)
352       2 {
353       3   rcu_read_lock();
354       4   p = rcu_dereference(gp);
355       5   if (p) {
356       6     do_something(p->a, p->b);
357       7     rcu_read_unlock();
358       8     return true;
359       9   }
360      10   rcu_read_unlock();
361      11   return false;
362      12 }
363
In the Linux kernel, rcu_dereference() uses volatile casts and, on
DEC Alpha, memory barriers. Should a `high-quality implementation of
366C11 ``memory_order_consume``
367[PDF] <http://www.rdrop.com/users/paulmck/RCU/consume.2015.07.13a.pdf>`__
368ever appear, then rcu_dereference() could be implemented as a
369``memory_order_consume`` load. Regardless of the exact implementation, a
370pointer fetched by rcu_dereference() may not be used outside of the
371outermost RCU read-side critical section containing that
372rcu_dereference(), unless protection of the corresponding data
373element has been passed from RCU to some other synchronization
374mechanism, most commonly locking or `reference
375counting <https://www.kernel.org/doc/Documentation/RCU/rcuref.txt>`__.
376
377In short, updaters use rcu_assign_pointer() and readers use
378rcu_dereference(), and these two RCU API elements work together to
379ensure that readers have a consistent view of newly added data elements.
380
381Of course, it is also necessary to remove elements from RCU-protected
382data structures, for example, using the following process:
383
384#. Remove the data element from the enclosing structure.
385#. Wait for all pre-existing RCU read-side critical sections to complete
386   (because only pre-existing readers can possibly have a reference to
387   the newly removed data element).
388#. At this point, only the updater has a reference to the newly removed
389   data element, so it can safely reclaim the data element, for example,
390   by passing it to kfree().
391
392This process is implemented by remove_gp_synchronous():
393
394   ::
395
396       1 bool remove_gp_synchronous(void)
397       2 {
398       3   struct foo *p;
399       4
400       5   spin_lock(&gp_lock);
401       6   p = rcu_access_pointer(gp);
402       7   if (!p) {
403       8     spin_unlock(&gp_lock);
404       9     return false;
405      10   }
406      11   rcu_assign_pointer(gp, NULL);
407      12   spin_unlock(&gp_lock);
408      13   synchronize_rcu();
409      14   kfree(p);
410      15   return true;
411      16 }
412
413This function is straightforward, with line 13 waiting for a grace
414period before line 14 frees the old data element. This waiting ensures
415that readers will reach line 7 of do_something_gp() before the data
416element referenced by ``p`` is freed. The rcu_access_pointer() on
417line 6 is similar to rcu_dereference(), except that:
418
419#. The value returned by rcu_access_pointer() cannot be
420   dereferenced. If you want to access the value pointed to as well as
421   the pointer itself, use rcu_dereference() instead of
422   rcu_access_pointer().
423#. The call to rcu_access_pointer() need not be protected. In
424   contrast, rcu_dereference() must either be within an RCU
425   read-side critical section or in a code segment where the pointer
426   cannot change, for example, in code protected by the corresponding
427   update-side lock.
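
For example, rcu_access_pointer() suffices when only the pointer's
value is of interest, as in this sketch of a hypothetical helper that
checks whether ``gp`` is currently set:

   ::

       bool gp_is_set(void)
       {
         return rcu_access_pointer(gp) != NULL; /* No reader required. */
       }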
428
429+-----------------------------------------------------------------------+
430| **Quick Quiz**:                                                       |
431+-----------------------------------------------------------------------+
432| Without the rcu_dereference() or the rcu_access_pointer(),            |
433| what destructive optimizations might the compiler make use of?        |
434+-----------------------------------------------------------------------+
435| **Answer**:                                                           |
436+-----------------------------------------------------------------------+
437| Let's start with what happens to do_something_gp() if it fails to     |
438| use rcu_dereference(). It could reuse a value formerly fetched        |
439| from this same pointer. It could also fetch the pointer from ``gp``   |
440| in a byte-at-a-time manner, resulting in *load tearing*, in turn      |
| resulting in a bytewise mash-up of two distinct pointer values. It    |
| might even use value-speculation optimizations, where it makes a      |
| wrong guess, but by the time it gets around to checking the value, an |
444| update has changed the pointer to match the wrong guess. Too bad      |
445| about any dereferences that returned pre-initialization garbage in    |
446| the meantime!                                                         |
447| For remove_gp_synchronous(), as long as all modifications to          |
448| ``gp`` are carried out while holding ``gp_lock``, the above           |
449| optimizations are harmless. However, ``sparse`` will complain if you  |
450| define ``gp`` with ``__rcu`` and then access it without using either  |
451| rcu_access_pointer() or rcu_dereference().                            |
452+-----------------------------------------------------------------------+
453
454In short, RCU's publish-subscribe guarantee is provided by the
455combination of rcu_assign_pointer() and rcu_dereference(). This
456guarantee allows data elements to be safely added to RCU-protected
457linked data structures without disrupting RCU readers. This guarantee
458can be used in combination with the grace-period guarantee to also allow
459data elements to be removed from RCU-protected linked data structures,
460again without disrupting RCU readers.
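
The Linux kernel packages this combination of guarantees into its
RCU-protected list primitives. The following sketch applies them to a
hypothetical list of ``struct mynode``; the list and lock primitives
shown are the real Linux-kernel APIs:

   ::

       struct mynode {
         int key;
         struct list_head list;
       };

       LIST_HEAD(mylist);
       DEFINE_SPINLOCK(mylist_lock);

       void add_node(struct mynode *p)
       {
         spin_lock(&mylist_lock);
         list_add_rcu(&p->list, &mylist); /* Publish. */
         spin_unlock(&mylist_lock);
       }

       void del_node(struct mynode *p)
       {
         spin_lock(&mylist_lock);
         list_del_rcu(&p->list); /* Unpublish. */
         spin_unlock(&mylist_lock);
         synchronize_rcu(); /* Wait for pre-existing readers. */
         kfree(p);
       }

       bool key_present(int key)
       {
         struct mynode *p;
         bool ret = false;

         rcu_read_lock();
         list_for_each_entry_rcu(p, &mylist, list) { /* Subscribe. */
           if (p->key == key) {
             ret = true;
             break;
           }
         }
         rcu_read_unlock();
         return ret;
       }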
461
462This guarantee was only partially premeditated. DYNIX/ptx used an
463explicit memory barrier for publication, but had nothing resembling
464rcu_dereference() for subscription, nor did it have anything
465resembling the dependency-ordering barrier that was later subsumed
466into rcu_dereference() and later still into READ_ONCE(). The
467need for these operations made itself known quite suddenly at a
468late-1990s meeting with the DEC Alpha architects, back in the days when
469DEC was still a free-standing company. It took the Alpha architects a
470good hour to convince me that any sort of barrier would ever be needed,
471and it then took me a good *two* hours to convince them that their
472documentation did not make this point clear. More recent work with the C
and C++ standards committees has provided much education on tricks and
474traps from the compiler. In short, compilers were much less tricky in
475the early 1990s, but in 2015, don't even think about omitting
476rcu_dereference()!
477
478Memory-Barrier Guarantees
479~~~~~~~~~~~~~~~~~~~~~~~~~
480
481The previous section's simple linked-data-structure scenario clearly
482demonstrates the need for RCU's stringent memory-ordering guarantees on
483systems with more than one CPU:
484
485#. Each CPU that has an RCU read-side critical section that begins
486   before synchronize_rcu() starts is guaranteed to execute a full
487   memory barrier between the time that the RCU read-side critical
488   section ends and the time that synchronize_rcu() returns. Without
489   this guarantee, a pre-existing RCU read-side critical section might
490   hold a reference to the newly removed ``struct foo`` after the
491   kfree() on line 14 of remove_gp_synchronous().
492#. Each CPU that has an RCU read-side critical section that ends after
493   synchronize_rcu() returns is guaranteed to execute a full memory
494   barrier between the time that synchronize_rcu() begins and the
495   time that the RCU read-side critical section begins. Without this
496   guarantee, a later RCU read-side critical section running after the
497   kfree() on line 14 of remove_gp_synchronous() might later run
498   do_something_gp() and find the newly deleted ``struct foo``.
499#. If the task invoking synchronize_rcu() remains on a given CPU,
500   then that CPU is guaranteed to execute a full memory barrier sometime
501   during the execution of synchronize_rcu(). This guarantee ensures
502   that the kfree() on line 14 of remove_gp_synchronous() really
503   does execute after the removal on line 11.
504#. If the task invoking synchronize_rcu() migrates among a group of
505   CPUs during that invocation, then each of the CPUs in that group is
506   guaranteed to execute a full memory barrier sometime during the
   execution of synchronize_rcu(). This guarantee ensures that the
   kfree() on line 14 of remove_gp_synchronous() really does
   execute after the removal on line 11, even in the case where the
   thread executing the synchronize_rcu() migrates in the meantime.
511
512+-----------------------------------------------------------------------+
513| **Quick Quiz**:                                                       |
514+-----------------------------------------------------------------------+
515| Given that multiple CPUs can start RCU read-side critical sections at |
516| any time without any ordering whatsoever, how can RCU possibly tell   |
517| whether or not a given RCU read-side critical section starts before a |
518| given instance of synchronize_rcu()?                                  |
519+-----------------------------------------------------------------------+
520| **Answer**:                                                           |
521+-----------------------------------------------------------------------+
522| If RCU cannot tell whether or not a given RCU read-side critical      |
523| section starts before a given instance of synchronize_rcu(), then     |
524| it must assume that the RCU read-side critical section started first. |
525| In other words, a given instance of synchronize_rcu() can avoid       |
526| waiting on a given RCU read-side critical section only if it can      |
527| prove that synchronize_rcu() started first.                           |
528| A related question is “When rcu_read_lock() doesn't generate any      |
529| code, why does it matter how it relates to a grace period?” The       |
530| answer is that it is not the relationship of rcu_read_lock()          |
531| itself that is important, but rather the relationship of the code     |
532| within the enclosed RCU read-side critical section to the code        |
533| preceding and following the grace period. If we take this viewpoint,  |
534| then a given RCU read-side critical section begins before a given     |
535| grace period when some access preceding the grace period observes the |
536| effect of some access within the critical section, in which case none |
537| of the accesses within the critical section may observe the effects   |
538| of any access following the grace period.                             |
539|                                                                       |
540| As of late 2016, mathematical models of RCU take this viewpoint, for  |
541| example, see slides 62 and 63 of the `2016 LinuxCon                   |
542| EU <http://www2.rdrop.com/users/paulmck/scalability/paper/LinuxMM.201 |
543| 6.10.04c.LCE.pdf>`__                                                  |
544| presentation.                                                         |
545+-----------------------------------------------------------------------+
546
547+-----------------------------------------------------------------------+
548| **Quick Quiz**:                                                       |
549+-----------------------------------------------------------------------+
550| The first and second guarantees require unbelievably strict ordering! |
551| Are all these memory barriers *really* required?                      |
552+-----------------------------------------------------------------------+
553| **Answer**:                                                           |
554+-----------------------------------------------------------------------+
555| Yes, they really are required. To see why the first guarantee is      |
556| required, consider the following sequence of events:                  |
557|                                                                       |
558| #. CPU 1: rcu_read_lock()                                             |
559| #. CPU 1: ``q = rcu_dereference(gp); /* Very likely to return p. */`` |
560| #. CPU 0: ``list_del_rcu(p);``                                        |
561| #. CPU 0: synchronize_rcu() starts.                                   |
562| #. CPU 1: ``do_something_with(q->a);``                                |
563|    ``/* No smp_mb(), so might happen after kfree(). */``              |
564| #. CPU 1: rcu_read_unlock()                                           |
565| #. CPU 0: synchronize_rcu() returns.                                  |
566| #. CPU 0: ``kfree(p);``                                               |
567|                                                                       |
568| Therefore, there absolutely must be a full memory barrier between the |
569| end of the RCU read-side critical section and the end of the grace    |
570| period.                                                               |
571|                                                                       |
572| The sequence of events demonstrating the necessity of the second rule |
573| is roughly similar:                                                   |
574|                                                                       |
575| #. CPU 0: ``list_del_rcu(p);``                                        |
576| #. CPU 0: synchronize_rcu() starts.                                   |
577| #. CPU 1: rcu_read_lock()                                             |
578| #. CPU 1: ``q = rcu_dereference(gp);``                                |
579|    ``/* Might return p if no memory barrier. */``                     |
580| #. CPU 0: synchronize_rcu() returns.                                  |
581| #. CPU 0: ``kfree(p);``                                               |
582| #. CPU 1: ``do_something_with(q->a); /* Boom!!! */``                  |
583| #. CPU 1: rcu_read_unlock()                                           |
584|                                                                       |
585| And similarly, without a memory barrier between the beginning of the  |
586| grace period and the beginning of the RCU read-side critical section, |
587| CPU 1 might end up accessing the freelist.                            |
588|                                                                       |
589| The “as if” rule of course applies, so that any implementation that   |
590| acts as if the appropriate memory barriers were in place is a correct |
591| implementation. That said, it is much easier to fool yourself into    |
592| believing that you have adhered to the as-if rule than it is to       |
593| actually adhere to it!                                                |
594+-----------------------------------------------------------------------+
595
596+-----------------------------------------------------------------------+
597| **Quick Quiz**:                                                       |
598+-----------------------------------------------------------------------+
599| You claim that rcu_read_lock() and rcu_read_unlock() generate         |
600| absolutely no code in some kernel builds. This means that the         |
601| compiler might arbitrarily rearrange consecutive RCU read-side        |
602| critical sections. Given such rearrangement, if a given RCU read-side |
603| critical section is done, how can you be sure that all prior RCU      |
604| read-side critical sections are done? Won't the compiler              |
605| rearrangements make that impossible to determine?                     |
606+-----------------------------------------------------------------------+
607| **Answer**:                                                           |
608+-----------------------------------------------------------------------+
609| In cases where rcu_read_lock() and rcu_read_unlock() generate         |
610| absolutely no code, RCU infers quiescent states only at special       |
611| locations, for example, within the scheduler. Because calls to        |
612| schedule() had better prevent calling-code accesses to shared         |
613| variables from being rearranged across the call to schedule(), if     |
614| RCU detects the end of a given RCU read-side critical section, it     |
615| will necessarily detect the end of all prior RCU read-side critical   |
616| sections, no matter how aggressively the compiler scrambles the code. |
617| Again, this all assumes that the compiler cannot scramble code across |
618| calls to the scheduler, out of interrupt handlers, into the idle      |
619| loop, into user-mode code, and so on. But if your kernel build allows |
620| that sort of scrambling, you have broken far more than just RCU!      |
621+-----------------------------------------------------------------------+
622
623Note that these memory-barrier requirements do not replace the
624fundamental RCU requirement that a grace period wait for all
625pre-existing readers. On the contrary, the memory barriers called out in
626this section must operate in such a way as to *enforce* this fundamental
627requirement. Of course, different implementations enforce this
628requirement in different ways, but enforce it they must.
629
630RCU Primitives Guaranteed to Execute Unconditionally
631~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
632
633The common-case RCU primitives are unconditional. They are invoked, they
634do their job, and they return, with no possibility of error, and no need
635to retry. This is a key RCU design philosophy.
636
637However, this philosophy is pragmatic rather than pigheaded. If someone
638comes up with a good justification for a particular conditional RCU
639primitive, it might well be implemented and added. After all, this
640guarantee was reverse-engineered, not premeditated. The unconditional
641nature of the RCU primitives was initially an accident of
implementation, and later experience with conditional synchronization
primitives caused me to elevate this accident to a
644guarantee. Therefore, the justification for adding a conditional
645primitive to RCU would need to be based on detailed and compelling use
646cases.
647
648Guaranteed Read-to-Write Upgrade
649~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
650
651As far as RCU is concerned, it is always possible to carry out an update
652within an RCU read-side critical section. For example, that RCU
653read-side critical section might search for a given data element, and
654then might acquire the update-side spinlock in order to update that
655element, all while remaining in that RCU read-side critical section. Of
656course, it is necessary to exit the RCU read-side critical section
before invoking synchronize_rcu(); however, this inconvenience can
658be avoided through use of the call_rcu() and kfree_rcu() API
659members described later in this document.
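
The following sketch illustrates such an upgrade using the earlier
``gp`` example; the update_gp_b() function and its recheck of ``gp``
under the lock are illustrative assumptions:

   ::

       bool update_gp_b(int new_b)
       {
         struct foo *p;

         rcu_read_lock();
         p = rcu_dereference(gp);
         if (p) {
           spin_lock(&gp_lock); /* Upgrade to writer... */
           if (p == rcu_access_pointer(gp)) {
             p->b = new_b; /* ...and update in place. */
             spin_unlock(&gp_lock);
             rcu_read_unlock();
             return true;
           }
           spin_unlock(&gp_lock);
         }
         rcu_read_unlock();
         return false;
       }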
660
661+-----------------------------------------------------------------------+
662| **Quick Quiz**:                                                       |
663+-----------------------------------------------------------------------+
664| But how does the upgrade-to-write operation exclude other readers?    |
665+-----------------------------------------------------------------------+
666| **Answer**:                                                           |
667+-----------------------------------------------------------------------+
668| It doesn't, just like normal RCU updates, which also do not exclude   |
669| RCU readers.                                                          |
670+-----------------------------------------------------------------------+
671
672This guarantee allows lookup code to be shared between read-side and
673update-side code, and was premeditated, appearing in the earliest
674DYNIX/ptx RCU documentation.
675
676Fundamental Non-Requirements
677----------------------------
678
679RCU provides extremely lightweight readers, and its read-side
680guarantees, though quite useful, are correspondingly lightweight. It is
681therefore all too easy to assume that RCU is guaranteeing more than it
682really is. Of course, the list of things that RCU does not guarantee is
infinitely long; however, the following sections list a few
684non-guarantees that have caused confusion. Except where otherwise noted,
685these non-guarantees were premeditated.
686
687#. `Readers Impose Minimal Ordering`_
688#. `Readers Do Not Exclude Updaters`_
689#. `Updaters Only Wait For Old Readers`_
690#. `Grace Periods Don't Partition Read-Side Critical Sections`_
691#. `Read-Side Critical Sections Don't Partition Grace Periods`_
692
693Readers Impose Minimal Ordering
694~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
695
696Reader-side markers such as rcu_read_lock() and
697rcu_read_unlock() provide absolutely no ordering guarantees except
698through their interaction with the grace-period APIs such as
699synchronize_rcu(). To see this, consider the following pair of
700threads:
701
702   ::
703
704       1 void thread0(void)
705       2 {
706       3   rcu_read_lock();
707       4   WRITE_ONCE(x, 1);
708       5   rcu_read_unlock();
709       6   rcu_read_lock();
710       7   WRITE_ONCE(y, 1);
711       8   rcu_read_unlock();
712       9 }
713      10
714      11 void thread1(void)
715      12 {
716      13   rcu_read_lock();
717      14   r1 = READ_ONCE(y);
718      15   rcu_read_unlock();
719      16   rcu_read_lock();
720      17   r2 = READ_ONCE(x);
721      18   rcu_read_unlock();
722      19 }
723
724After thread0() and thread1() execute concurrently, it is quite
725possible to have
726
727   ::
728
729      (r1 == 1 && r2 == 0)
730
731(that is, ``y`` appears to have been assigned before ``x``), which would
732not be possible if rcu_read_lock() and rcu_read_unlock() had
733much in the way of ordering properties. But they do not, so the CPU is
734within its rights to do significant reordering. This is by design: Any
735significant ordering constraints would slow down these fast-path APIs.
736
737+-----------------------------------------------------------------------+
738| **Quick Quiz**:                                                       |
739+-----------------------------------------------------------------------+
740| Can't the compiler also reorder this code?                            |
741+-----------------------------------------------------------------------+
742| **Answer**:                                                           |
743+-----------------------------------------------------------------------+
744| No, the volatile casts in READ_ONCE() and WRITE_ONCE()                |
745| prevent the compiler from reordering in this particular case.         |
746+-----------------------------------------------------------------------+
747
748Readers Do Not Exclude Updaters
749~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
750
751Neither rcu_read_lock() nor rcu_read_unlock() exclude updates.
752All they do is to prevent grace periods from ending. The following
753example illustrates this:
754
755   ::
756
757       1 void thread0(void)
758       2 {
759       3   rcu_read_lock();
760       4   r1 = READ_ONCE(y);
761       5   if (r1) {
762       6     do_something_with_nonzero_x();
763       7     r2 = READ_ONCE(x);
764       8     WARN_ON(!r2); /* BUG!!! */
765       9   }
766      10   rcu_read_unlock();
767      11 }
768      12
769      13 void thread1(void)
770      14 {
771      15   spin_lock(&my_lock);
772      16   WRITE_ONCE(x, 1);
773      17   WRITE_ONCE(y, 1);
774      18   spin_unlock(&my_lock);
775      19 }
776
777If the thread0() function's rcu_read_lock() excluded the
778thread1() function's update, the WARN_ON() could never fire. But
779the fact is that rcu_read_lock() does not exclude much of anything
780aside from subsequent grace periods, of which thread1() has none, so
781the WARN_ON() can and does fire.
782
783Updaters Only Wait For Old Readers
784~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
785
786It might be tempting to assume that after synchronize_rcu()
787completes, there are no readers executing. This temptation must be
788avoided because new readers can start immediately after
789synchronize_rcu() starts, and synchronize_rcu() is under no
790obligation to wait for these new readers.
791
792+-----------------------------------------------------------------------+
793| **Quick Quiz**:                                                       |
794+-----------------------------------------------------------------------+
795| Suppose that synchronize_rcu() did wait until *all* readers had       |
796| completed instead of waiting only on pre-existing readers. For how    |
797| long would the updater be able to rely on there being no readers?     |
798+-----------------------------------------------------------------------+
799| **Answer**:                                                           |
800+-----------------------------------------------------------------------+
801| For no time at all. Even if synchronize_rcu() were to wait until      |
802| all readers had completed, a new reader might start immediately after |
803| synchronize_rcu() completed. Therefore, the code following            |
804| synchronize_rcu() can *never* rely on there being no readers.         |
805+-----------------------------------------------------------------------+
806
807Grace Periods Don't Partition Read-Side Critical Sections
808~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
809
810It is tempting to assume that if any part of one RCU read-side critical
811section precedes a given grace period, and if any part of another RCU
812read-side critical section follows that same grace period, then all of
813the first RCU read-side critical section must precede all of the second.
814However, this just isn't the case: A single grace period does not
815partition the set of RCU read-side critical sections. An example of this
situation can be illustrated as follows, where ``a``, ``b``, and ``c``
817are initially all zero:
818
819   ::
820
821       1 void thread0(void)
822       2 {
823       3   rcu_read_lock();
824       4   WRITE_ONCE(a, 1);
825       5   WRITE_ONCE(b, 1);
826       6   rcu_read_unlock();
827       7 }
828       8
829       9 void thread1(void)
830      10 {
831      11   r1 = READ_ONCE(a);
832      12   synchronize_rcu();
833      13   WRITE_ONCE(c, 1);
834      14 }
835      15
836      16 void thread2(void)
837      17 {
838      18   rcu_read_lock();
839      19   r2 = READ_ONCE(b);
840      20   r3 = READ_ONCE(c);
841      21   rcu_read_unlock();
842      22 }
843
844It turns out that the outcome:
845
846   ::
847
848      (r1 == 1 && r2 == 0 && r3 == 1)
849
is entirely possible. The following figure shows how this can happen,
851with each circled ``QS`` indicating the point at which RCU recorded a
852*quiescent state* for each thread, that is, a state in which RCU knows
853that the thread cannot be in the midst of an RCU read-side critical
854section that started before the current grace period:
855
856.. kernel-figure:: GPpartitionReaders1.svg
857
858If it is necessary to partition RCU read-side critical sections in this
859manner, it is necessary to use two grace periods, where the first grace
860period is known to end before the second grace period starts:
861
862   ::
863
864       1 void thread0(void)
865       2 {
866       3   rcu_read_lock();
867       4   WRITE_ONCE(a, 1);
868       5   WRITE_ONCE(b, 1);
869       6   rcu_read_unlock();
870       7 }
871       8
872       9 void thread1(void)
873      10 {
874      11   r1 = READ_ONCE(a);
875      12   synchronize_rcu();
876      13   WRITE_ONCE(c, 1);
877      14 }
878      15
879      16 void thread2(void)
880      17 {
881      18   r2 = READ_ONCE(c);
882      19   synchronize_rcu();
883      20   WRITE_ONCE(d, 1);
884      21 }
885      22
886      23 void thread3(void)
887      24 {
888      25   rcu_read_lock();
889      26   r3 = READ_ONCE(b);
890      27   r4 = READ_ONCE(d);
891      28   rcu_read_unlock();
892      29 }
893
894Here, if ``(r1 == 1)``, then thread0()'s write to ``b`` must happen
895before the end of thread1()'s grace period. If in addition
896``(r4 == 1)``, then thread3()'s read from ``b`` must happen after
897the beginning of thread2()'s grace period. If it is also the case
898that ``(r2 == 1)``, then the end of thread1()'s grace period must
precede the beginning of thread2()'s grace period. This means that
900the two RCU read-side critical sections cannot overlap, guaranteeing
901that ``(r3 == 1)``. As a result, the outcome:
902
903   ::
904
905      (r1 == 1 && r2 == 1 && r3 == 0 && r4 == 1)
906
907cannot happen.
908
909This non-requirement was also non-premeditated, but became apparent when
910studying RCU's interaction with memory ordering.
911
912Read-Side Critical Sections Don't Partition Grace Periods
913~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
914
915It is also tempting to assume that if an RCU read-side critical section
916happens between a pair of grace periods, then those grace periods cannot
917overlap. However, this temptation leads nowhere good, as can be
918illustrated by the following, with all variables initially zero:
919
920   ::
921
922       1 void thread0(void)
923       2 {
924       3   rcu_read_lock();
925       4   WRITE_ONCE(a, 1);
926       5   WRITE_ONCE(b, 1);
927       6   rcu_read_unlock();
928       7 }
929       8
930       9 void thread1(void)
931      10 {
932      11   r1 = READ_ONCE(a);
933      12   synchronize_rcu();
934      13   WRITE_ONCE(c, 1);
935      14 }
936      15
937      16 void thread2(void)
938      17 {
939      18   rcu_read_lock();
940      19   WRITE_ONCE(d, 1);
941      20   r2 = READ_ONCE(c);
942      21   rcu_read_unlock();
943      22 }
944      23
945      24 void thread3(void)
946      25 {
947      26   r3 = READ_ONCE(d);
948      27   synchronize_rcu();
949      28   WRITE_ONCE(e, 1);
950      29 }
951      30
952      31 void thread4(void)
953      32 {
954      33   rcu_read_lock();
955      34   r4 = READ_ONCE(b);
956      35   r5 = READ_ONCE(e);
957      36   rcu_read_unlock();
958      37 }
959
960In this case, the outcome:
961
962   ::
963
964      (r1 == 1 && r2 == 1 && r3 == 1 && r4 == 0 && r5 == 1)
965
966is entirely possible, as illustrated below:
967
968.. kernel-figure:: ReadersPartitionGP1.svg
969
970Again, an RCU read-side critical section can overlap almost all of a
971given grace period, just so long as it does not overlap the entire grace
972period. As a result, an RCU read-side critical section cannot partition
973a pair of RCU grace periods.
974
975+-----------------------------------------------------------------------+
976| **Quick Quiz**:                                                       |
977+-----------------------------------------------------------------------+
978| How long a sequence of grace periods, each separated by an RCU        |
979| read-side critical section, would be required to partition the RCU    |
980| read-side critical sections at the beginning and end of the chain?    |
981+-----------------------------------------------------------------------+
982| **Answer**:                                                           |
983+-----------------------------------------------------------------------+
984| In theory, an infinite number. In practice, an unknown number that is |
985| sensitive to both implementation details and timing considerations.   |
986| Therefore, even in practice, RCU users must abide by the theoretical  |
987| rather than the practical answer.                                     |
988+-----------------------------------------------------------------------+
989
990Parallelism Facts of Life
991-------------------------
992
993These parallelism facts of life are by no means specific to RCU, but the
994RCU implementation must abide by them. They therefore bear repeating:
995
996#. Any CPU or task may be delayed at any time, and any attempts to avoid
997   these delays by disabling preemption, interrupts, or whatever are
998   completely futile. This is most obvious in preemptible user-level
999   environments and in virtualized environments (where a given guest
1000   OS's VCPUs can be preempted at any time by the underlying
1001   hypervisor), but can also happen in bare-metal environments due to
1002   ECC errors, NMIs, and other hardware events. Although a delay of more
1003   than about 20 seconds can result in splats, the RCU implementation is
1004   obligated to use algorithms that can tolerate extremely long delays,
1005   but where “extremely long” is not long enough to allow wrap-around
1006   when incrementing a 64-bit counter.
1007#. Both the compiler and the CPU can reorder memory accesses. Where it
1008   matters, RCU must use compiler directives and memory-barrier
1009   instructions to preserve ordering.
1010#. Conflicting writes to memory locations in any given cache line will
1011   result in expensive cache misses. Greater numbers of concurrent
1012   writes and more-frequent concurrent writes will result in more
1013   dramatic slowdowns. RCU is therefore obligated to use algorithms that
1014   have sufficient locality to avoid significant performance and
1015   scalability problems.
1016#. As a rough rule of thumb, only one CPU's worth of processing may be
1017   carried out under the protection of any given exclusive lock. RCU
1018   must therefore use scalable locking designs.
1019#. Counters are finite, especially on 32-bit systems. RCU's use of
1020   counters must therefore tolerate counter wrap, or be designed such
1021   that counter wrap would take way more time than a single system is
1022   likely to run. An uptime of ten years is quite possible, a runtime of
1023   a century much less so. As an example of the latter, RCU's
1024   dyntick-idle nesting counter allows 54 bits for interrupt nesting
1025   level (this counter is 64 bits even on a 32-bit system). Overflowing
1026   this counter requires 2\ :sup:`54` half-interrupts on a given CPU
1027   without that CPU ever going idle. If a half-interrupt happened every
1028   microsecond, it would take 570 years of runtime to overflow this
1029   counter, which is currently believed to be an acceptably long time.
1030#. Linux systems can have thousands of CPUs running a single Linux
1031   kernel in a single shared-memory environment. RCU must therefore pay
1032   close attention to high-end scalability.
1033
1034This last parallelism fact of life means that RCU must pay special
1035attention to the preceding facts of life. The idea that Linux might
1036scale to systems with thousands of CPUs would have been met with some
skepticism in the 1990s, but these requirements would otherwise have
been unsurprising, even in the early 1990s.
1039
1040Quality-of-Implementation Requirements
1041--------------------------------------
1042
1043These sections list quality-of-implementation requirements. Although an
1044RCU implementation that ignores these requirements could still be used,
1045it would likely be subject to limitations that would make it
1046inappropriate for industrial-strength production use. Classes of
1047quality-of-implementation requirements are as follows:
1048
1049#. `Specialization`_
1050#. `Performance and Scalability`_
1051#. `Forward Progress`_
1052#. `Composability`_
1053#. `Corner Cases`_
1054
These classes are covered in the following sections.
1056
1057Specialization
1058~~~~~~~~~~~~~~
1059
1060RCU is and always has been intended primarily for read-mostly
1061situations, which means that RCU's read-side primitives are optimized,
1062often at the expense of its update-side primitives. Experience thus far
1063is captured by the following list of situations:
1064
1065#. Read-mostly data, where stale and inconsistent data is not a problem:
1066   RCU works great!
1067#. Read-mostly data, where data must be consistent: RCU works well.
1068#. Read-write data, where data must be consistent: RCU *might* work OK.
1069   Or not.
1070#. Write-mostly data, where data must be consistent: RCU is very
1071   unlikely to be the right tool for the job, with the following
1072   exceptions, where RCU can provide:
1073
1074   a. Existence guarantees for update-friendly mechanisms.
1075   b. Wait-free read-side primitives for real-time use.
1076
1077This focus on read-mostly situations means that RCU must interoperate
1078with other synchronization primitives. For example, the add_gp() and
1079remove_gp_synchronous() examples discussed earlier use RCU to
1080protect readers and locking to coordinate updaters. However, the need
1081extends much farther, requiring that a variety of synchronization
1082primitives be legal within RCU read-side critical sections, including
1083spinlocks, sequence locks, atomic operations, reference counters, and
1084memory barriers.
1085
1086+-----------------------------------------------------------------------+
1087| **Quick Quiz**:                                                       |
1088+-----------------------------------------------------------------------+
1089| What about sleeping locks?                                            |
1090+-----------------------------------------------------------------------+
1091| **Answer**:                                                           |
1092+-----------------------------------------------------------------------+
1093| These are forbidden within Linux-kernel RCU read-side critical        |
1094| sections because it is not legal to place a quiescent state (in this  |
1095| case, voluntary context switch) within an RCU read-side critical      |
1096| section. However, sleeping locks may be used within userspace RCU     |
1097| read-side critical sections, and also within Linux-kernel sleepable   |
1098| RCU `(SRCU) <Sleepable RCU_>`__ read-side critical sections. In       |
| addition, the -rt patchset turns spinlocks into sleeping locks so     |
| that the corresponding critical sections can be preempted, which also |
| means that these sleeplockified spinlocks (but not other sleeping     |
| locks!) may be acquired within -rt-Linux-kernel RCU read-side         |
| critical sections.                                                    |
| Note that it *is* legal for a normal RCU read-side critical section   |
| to conditionally acquire a sleeping lock (as in                       |
| mutex_trylock()), but only as long as it does not loop                |
| indefinitely attempting to conditionally acquire that sleeping lock.  |
1108| The key point is that things like mutex_trylock() either return       |
1109| with the mutex held, or return an error indication if the mutex was   |
1110| not immediately available. Either way, mutex_trylock() returns        |
1111| immediately without sleeping.                                         |
1112+-----------------------------------------------------------------------+
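
The mutex_trylock() pattern described in this answer can be sketched
as follows, reusing the earlier ``gp`` example and assuming a
hypothetical per-element ``mtx`` field:

   ::

       bool try_update_b(int new_b)
       {
         struct foo *p;
         bool ret = false;

         rcu_read_lock();
         p = rcu_dereference(gp);
         if (p && mutex_trylock(&p->mtx)) { /* Never sleeps. */
           p->b = new_b;
           mutex_unlock(&p->mtx);
           ret = true;
         }
         rcu_read_unlock();
         return ret;
       }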
1113
1114It often comes as a surprise that many algorithms do not require a
consistent view of data, with network routing being the poster child.
Internet routing algorithms take significant time to propagate
updates, so that by the time an update
1118arrives at a given system, that system has been sending network traffic
1119the wrong way for a considerable length of time. Having a few threads
1120continue to send traffic the wrong way for a few more milliseconds is
1121clearly not a problem: In the worst case, TCP retransmissions will
1122eventually get the data where it needs to go. In general, when tracking
1123the state of the universe outside of the computer, some level of
1124inconsistency must be tolerated due to speed-of-light delays if nothing
1125else.
1126
1127Furthermore, uncertainty about external state is inherent in many cases.
1128For example, a pair of veterinarians might use heartbeat to determine
1129whether or not a given cat was alive. But how long should they wait
1130after the last heartbeat to decide that the cat is in fact dead? Waiting
1131less than 400 milliseconds makes no sense because this would mean that a
1132relaxed cat would be considered to cycle between death and life more
1133than 100 times per minute. Moreover, just as with human beings, a cat's
1134heart might stop for some period of time, so the exact wait period is a
1135judgment call. One of our pair of veterinarians might wait 30 seconds
1136before pronouncing the cat dead, while the other might insist on waiting
1137a full minute. The two veterinarians would then disagree on the state of
1138the cat during the final 30 seconds of the minute following the last
1139heartbeat.
1140
1141Interestingly enough, this same situation applies to hardware. When push
1142comes to shove, how do we tell whether or not some external server has
1143failed? We send messages to it periodically, and declare it failed if we
1144don't receive a response within a given period of time. Policy decisions
1145can usually tolerate short periods of inconsistency. The policy was
1146decided some time ago, and is only now being put into effect, so a few
1147milliseconds of delay is normally inconsequential.
1148
1149However, there are algorithms that absolutely must see consistent data.
1150For example, the translation between a user-level SystemV semaphore ID
1151to the corresponding in-kernel data structure is protected by RCU, but
1152it is absolutely forbidden to update a semaphore that has just been
1153removed. In the Linux kernel, this need for consistency is accommodated
1154by acquiring spinlocks located in the in-kernel data structure from
within the RCU read-side critical section. Many other techniques may be
used, and
1157are in fact used within the Linux kernel.
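
This spinlock-based approach can be sketched as follows, where
sem_lookup(), the ``->lock`` field, and the ``->deleted`` flag are all
hypothetical stand-ins for the actual Linux-kernel SystemV semaphore
code:

   ::

       struct sem_array *sem_obtain_lock(int semid)
       {
         struct sem_array *sma;

         rcu_read_lock();
         sma = sem_lookup(semid); /* RCU-protected lookup. */
         if (sma) {
           spin_lock(&sma->lock);
           if (sma->deleted) { /* Recheck while holding the lock. */
             spin_unlock(&sma->lock);
             sma = NULL;
           }
         }
         rcu_read_unlock();
         return sma; /* If non-NULL, returned with ->lock held. */
       }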
1158
1159In short, RCU is not required to maintain consistency, and other
1160mechanisms may be used in concert with RCU when consistency is required.
1161RCU's specialization allows it to do its job extremely well, and its
1162ability to interoperate with other synchronization mechanisms allows the
1163right mix of synchronization tools to be used for a given job.
1164
1165Performance and Scalability
1166~~~~~~~~~~~~~~~~~~~~~~~~~~~
1167
1168Energy efficiency is a critical component of performance today, and
1169Linux-kernel RCU implementations must therefore avoid unnecessarily
1170awakening idle CPUs. I cannot claim that this requirement was
1171premeditated. In fact, I learned of it during a telephone conversation
1172in which I was given “frank and open” feedback on the importance of
1173energy efficiency in battery-powered systems and on specific
1174energy-efficiency shortcomings of the Linux-kernel RCU implementation.
1175In my experience, the battery-powered embedded community will consider
1176any unnecessary wakeups to be extremely unfriendly acts. So much so that
1177mere Linux-kernel-mailing-list posts are insufficient to vent their ire.
1178
Memory consumption is not particularly important in most situations,
1180and has become decreasingly so as memory sizes have expanded and memory
1181costs have plummeted. However, as I learned from Matt Mackall's
1182`bloatwatch <http://elinux.org/Linux_Tiny-FAQ>`__ efforts, memory
1183footprint is critically important on single-CPU systems with
1184non-preemptible (``CONFIG_PREEMPTION=n``) kernels, and thus `tiny
1185RCU <https://lore.kernel.org/r/20090113221724.GA15307@linux.vnet.ibm.com>`__
1186was born. Josh Triplett has since taken over the small-memory banner
1187with his `Linux kernel tinification <https://tiny.wiki.kernel.org/>`__
1188project, which resulted in `SRCU <Sleepable RCU_>`__ becoming optional
1189for those kernels not needing it.
1190
1191The remaining performance requirements are, for the most part,
1192unsurprising. For example, in keeping with RCU's read-side
1193specialization, rcu_dereference() should have negligible overhead
1194(for example, suppression of a few minor compiler optimizations).
1195Similarly, in non-preemptible environments, rcu_read_lock() and
1196rcu_read_unlock() should have exactly zero overhead.
1197
1198In preemptible environments, in the case where the RCU read-side
1199critical section was not preempted (as will be the case for the
1200highest-priority real-time process), rcu_read_lock() and
1201rcu_read_unlock() should have minimal overhead. In particular, they
1202should not contain atomic read-modify-write operations, memory-barrier
1203instructions, preemption disabling, interrupt disabling, or backwards
1204branches. However, in the case where the RCU read-side critical section
1205was preempted, rcu_read_unlock() may acquire spinlocks and disable
1206interrupts. This is why it is better to nest an RCU read-side critical
1207section within a preempt-disable region than vice versa, at least in
1208cases where that critical section is short enough to avoid unduly
1209degrading real-time latencies.
1210
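As a minimal sketch, with do_something_brief() standing in for any
sufficiently short operation, the recommended nesting looks like this:

   ::

       preempt_disable();
       rcu_read_lock();      /* Reader nested in preempt-disable region. */
       do_something_brief(); /* Keep this short for real-time latency. */
       rcu_read_unlock();
       preempt_enable();
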
1211The synchronize_rcu() grace-period-wait primitive is optimized for
1212throughput. It may therefore incur several milliseconds of latency in
1213addition to the duration of the longest RCU read-side critical section.
1214On the other hand, multiple concurrent invocations of
1215synchronize_rcu() are required to use batching optimizations so that
1216they can be satisfied by a single underlying grace-period-wait
1217operation. For example, in the Linux kernel, it is not unusual for a
1218single grace-period-wait operation to serve more than `1,000 separate
1219invocations <https://www.usenix.org/conference/2004-usenix-annual-technical-conference/making-rcu-safe-deep-sub-millisecond-response>`__
1220of synchronize_rcu(), thus amortizing the per-invocation overhead
1221down to nearly zero. However, the grace-period optimization is also
1222required to avoid measurable degradation of real-time scheduling and
1223interrupt latencies.
1224
1225In some cases, the multi-millisecond synchronize_rcu() latencies are
1226unacceptable. In these cases, synchronize_rcu_expedited() may be
1227used instead, reducing the grace-period latency down to a few tens of
1228microseconds on small systems, at least in cases where the RCU read-side
1229critical sections are short. There are currently no special latency
1230requirements for synchronize_rcu_expedited() on large systems, but,
1231consistent with the empirical nature of the RCU specification, that is
1232subject to change. However, there most definitely are scalability
1233requirements: A storm of synchronize_rcu_expedited() invocations on
12344096 CPUs should at least make reasonable forward progress. In return
1235for its shorter latencies, synchronize_rcu_expedited() is permitted
1236to impose modest degradation of real-time latency on non-idle online
1237CPUs. Here, “modest” means roughly the same latency degradation as a
1238scheduling-clock interrupt.
1239
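For example, a low-latency variant of this document's earlier
remove_gp_synchronous() example might simply substitute the expedited
primitive, as in the following sketch:

   ::

       bool remove_gp_expedited(void)
       {
         struct foo *p;

         spin_lock(&gp_lock);
         p = rcu_access_pointer(gp);
         if (!p) {
           spin_unlock(&gp_lock);
           return false;
         }
         rcu_assign_pointer(gp, NULL);
         spin_unlock(&gp_lock);
         synchronize_rcu_expedited(); /* Tens of microseconds, not ms. */
         kfree(p);
         return true;
       }
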
1240There are a number of situations where even
1241synchronize_rcu_expedited()'s reduced grace-period latency is
1242unacceptable. In these situations, the asynchronous call_rcu() can
1243be used in place of synchronize_rcu() as follows:
1244
1245   ::
1246
1247       1 struct foo {
1248       2   int a;
1249       3   int b;
1250       4   struct rcu_head rh;
1251       5 };
1252       6
1253       7 static void remove_gp_cb(struct rcu_head *rhp)
1254       8 {
1255       9   struct foo *p = container_of(rhp, struct foo, rh);
1256      10
1257      11   kfree(p);
1258      12 }
1259      13
1260      14 bool remove_gp_asynchronous(void)
1261      15 {
1262      16   struct foo *p;
1263      17
1264      18   spin_lock(&gp_lock);
1265      19   p = rcu_access_pointer(gp);
1266      20   if (!p) {
1267      21     spin_unlock(&gp_lock);
1268      22     return false;
1269      23   }
1270      24   rcu_assign_pointer(gp, NULL);
1271      25   call_rcu(&p->rh, remove_gp_cb);
1272      26   spin_unlock(&gp_lock);
1273      27   return true;
1274      28 }
1275
1276A definition of ``struct foo`` is finally needed, and appears on
1277lines 1-5. The function remove_gp_cb() is passed to call_rcu()
1278on line 25, and will be invoked after the end of a subsequent grace
1279period. This gets the same effect as remove_gp_synchronous(), but
1280without forcing the updater to wait for a grace period to elapse. The
1281call_rcu() function may be used in a number of situations where
1282neither synchronize_rcu() nor synchronize_rcu_expedited() would
1283be legal, including within preempt-disable code, local_bh_disable()
1284code, interrupt-disable code, and interrupt handlers. However, even
1285call_rcu() is illegal within NMI handlers and from idle and offline
1286CPUs. The callback function (remove_gp_cb() in this case) will be
executed within a softirq (software interrupt) environment within the
Linux kernel, either within a real softirq handler or under the
protection of local_bh_disable(). In both the Linux kernel and
1290userspace, it is bad practice to write an RCU callback function that
1291takes too long. Long-running operations should be relegated to separate
1292threads or (in the Linux kernel) workqueues.
1293
1294+-----------------------------------------------------------------------+
1295| **Quick Quiz**:                                                       |
1296+-----------------------------------------------------------------------+
1297| Why does line 19 use rcu_access_pointer()? After all,                 |
1298| call_rcu() on line 25 stores into the structure, which would          |
1299| interact badly with concurrent insertions. Doesn't this mean that     |
1300| rcu_dereference() is required?                                        |
1301+-----------------------------------------------------------------------+
1302| **Answer**:                                                           |
1303+-----------------------------------------------------------------------+
| Presumably the ``gp_lock`` acquired on line 18 excludes any           |
1305| changes, including any insertions that rcu_dereference() would        |
1306| protect against. Therefore, any insertions will be delayed until      |
| after ``gp_lock`` is released on line 26, which in turn means that    |
1308| rcu_access_pointer() suffices.                                        |
1309+-----------------------------------------------------------------------+
1310
1311However, all that remove_gp_cb() is doing is invoking kfree() on
1312the data element. This is a common idiom, and is supported by
1313kfree_rcu(), which allows “fire and forget” operation as shown
1314below:
1315
1316   ::
1317
1318       1 struct foo {
1319       2   int a;
1320       3   int b;
1321       4   struct rcu_head rh;
1322       5 };
1323       6
1324       7 bool remove_gp_faf(void)
1325       8 {
1326       9   struct foo *p;
1327      10
1328      11   spin_lock(&gp_lock);
1329      12   p = rcu_dereference(gp);
1330      13   if (!p) {
1331      14     spin_unlock(&gp_lock);
1332      15     return false;
1333      16   }
1334      17   rcu_assign_pointer(gp, NULL);
1335      18   kfree_rcu(p, rh);
1336      19   spin_unlock(&gp_lock);
1337      20   return true;
1338      21 }
1339
1340Note that remove_gp_faf() simply invokes kfree_rcu() and
1341proceeds, without any need to pay any further attention to the
1342subsequent grace period and kfree(). It is permissible to invoke
1343kfree_rcu() from the same environments as for call_rcu().
1344Interestingly enough, DYNIX/ptx had the equivalents of call_rcu()
1345and kfree_rcu(), but not synchronize_rcu(). This was due to the
1346fact that RCU was not heavily used within DYNIX/ptx, so the very few
1347places that needed something like synchronize_rcu() simply
1348open-coded it.
1349
1350+-----------------------------------------------------------------------+
1351| **Quick Quiz**:                                                       |
1352+-----------------------------------------------------------------------+
1353| Earlier it was claimed that call_rcu() and kfree_rcu()                |
1354| allowed updaters to avoid being blocked by readers. But how can that  |
1355| be correct, given that the invocation of the callback and the freeing |
1356| of the memory (respectively) must still wait for a grace period to    |
1357| elapse?                                                               |
1358+-----------------------------------------------------------------------+
1359| **Answer**:                                                           |
1360+-----------------------------------------------------------------------+
1361| We could define things this way, but keep in mind that this sort of   |
1362| definition would say that updates in garbage-collected languages      |
1363| cannot complete until the next time the garbage collector runs, which |
1364| does not seem at all reasonable. The key point is that in most cases, |
1365| an updater using either call_rcu() or kfree_rcu() can proceed         |
1366| to the next update as soon as it has invoked call_rcu() or            |
1367| kfree_rcu(), without having to wait for a subsequent grace            |
1368| period.                                                               |
1369+-----------------------------------------------------------------------+
1370
1371But what if the updater must wait for the completion of code to be
1372executed after the end of the grace period, but has other tasks that can
1373be carried out in the meantime? The polling-style
1374get_state_synchronize_rcu() and cond_synchronize_rcu() functions
1375may be used for this purpose, as shown below:
1376
1377   ::
1378
1379       1 bool remove_gp_poll(void)
1380       2 {
1381       3   struct foo *p;
1382       4   unsigned long s;
1383       5
1384       6   spin_lock(&gp_lock);
1385       7   p = rcu_access_pointer(gp);
1386       8   if (!p) {
1387       9     spin_unlock(&gp_lock);
1388      10     return false;
1389      11   }
1390      12   rcu_assign_pointer(gp, NULL);
1391      13   spin_unlock(&gp_lock);
1392      14   s = get_state_synchronize_rcu();
1393      15   do_something_while_waiting();
1394      16   cond_synchronize_rcu(s);
1395      17   kfree(p);
1396      18   return true;
1397      19 }
1398
1399On line 14, get_state_synchronize_rcu() obtains a “cookie” from RCU,
1400then line 15 carries out other tasks, and finally, line 16 returns
1401immediately if a grace period has elapsed in the meantime, but otherwise
waits as required. The need for get_state_synchronize_rcu() and
1403cond_synchronize_rcu() has appeared quite recently, so it is too
1404early to tell whether they will stand the test of time.
1405
1406RCU thus provides a range of tools to allow updaters to strike the
required tradeoff between latency, flexibility, and CPU overhead.
1408
1409Forward Progress
1410~~~~~~~~~~~~~~~~
1411
1412In theory, delaying grace-period completion and callback invocation is
1413harmless. In practice, not only are memory sizes finite but also
1414callbacks sometimes do wakeups, and sufficiently deferred wakeups can be
1415difficult to distinguish from system hangs. Therefore, RCU must provide
1416a number of mechanisms to promote forward progress.
1417
1418These mechanisms are not foolproof, nor can they be. For one simple
1419example, an infinite loop in an RCU read-side critical section must by
1420definition prevent later grace periods from ever completing. For a more
1421involved example, consider a 64-CPU system built with
1422``CONFIG_RCU_NOCB_CPU=y`` and booted with ``rcu_nocbs=1-63``, where
1423CPUs 1 through 63 spin in tight loops that invoke call_rcu(). Even
1424if these tight loops also contain calls to cond_resched() (thus
1425allowing grace periods to complete), CPU 0 simply will not be able to
1426invoke callbacks as fast as the other 63 CPUs can register them, at
1427least not until the system runs out of memory. In both of these
1428examples, the Spiderman principle applies: With great power comes great
1429responsibility. However, short of this level of abuse, RCU is required
1430to ensure timely completion of grace periods and timely invocation of
1431callbacks.
1432
1433RCU takes the following steps to encourage timely completion of grace
1434periods:
1435
1436#. If a grace period fails to complete within 100 milliseconds, RCU
1437   causes future invocations of cond_resched() on the holdout CPUs
1438   to provide an RCU quiescent state. RCU also causes those CPUs'
1439   need_resched() invocations to return ``true``, but only after the
   corresponding CPU's next scheduling-clock interrupt.
1441#. CPUs mentioned in the ``nohz_full`` kernel boot parameter can run
1442   indefinitely in the kernel without scheduling-clock interrupts, which
   defeats the above need_resched() stratagem. RCU will therefore
1444   invoke resched_cpu() on any ``nohz_full`` CPUs still holding out
1445   after 109 milliseconds.
1446#. In kernels built with ``CONFIG_RCU_BOOST=y``, if a given task that
1447   has been preempted within an RCU read-side critical section is
1448   holding out for more than 500 milliseconds, RCU will resort to
1449   priority boosting.
1450#. If a CPU is still holding out 10 seconds into the grace period, RCU
1451   will invoke resched_cpu() on it regardless of its ``nohz_full``
1452   state.
1453
1454The above values are defaults for systems running with ``HZ=1000``. They
1455will vary as the value of ``HZ`` varies, and can also be changed using
1456the relevant Kconfig options and kernel boot parameters. RCU currently
1457does not do much sanity checking of these parameters, so please use
1458caution when changing them. Note that these forward-progress measures
1459are provided only for RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
1460RCU`_.
1461
1462RCU takes the following steps in call_rcu() to encourage timely
1463invocation of callbacks when any given non-\ ``rcu_nocbs`` CPU has
146410,000 callbacks, or has 10,000 more callbacks than it had the last time
1465encouragement was provided:
1466
1467#. Starts a grace period, if one is not already in progress.
1468#. Forces immediate checking for quiescent states, rather than waiting
1469   for three milliseconds to have elapsed since the beginning of the
1470   grace period.
1471#. Immediately tags the CPU's callbacks with their grace period
1472   completion numbers, rather than waiting for the ``RCU_SOFTIRQ``
1473   handler to get around to it.
1474#. Lifts callback-execution batch limits, which speeds up callback
1475   invocation at the expense of degrading realtime response.
1476
1477Again, these are default values when running at ``HZ=1000``, and can be
1478overridden. Again, these forward-progress measures are provided only for
1479RCU, not for `SRCU <Sleepable RCU_>`__ or `Tasks
1480RCU`_. Even for RCU, callback-invocation forward
1481progress for ``rcu_nocbs`` CPUs is much less well-developed, in part
1482because workloads benefiting from ``rcu_nocbs`` CPUs tend to invoke
1483call_rcu() relatively infrequently. If workloads emerge that need
1484both ``rcu_nocbs`` CPUs and high call_rcu() invocation rates, then
1485additional forward-progress work will be required.
1486
1487Composability
1488~~~~~~~~~~~~~
1489
1490Composability has received much attention in recent years, perhaps in
1491part due to the collision of multicore hardware with object-oriented
1492techniques designed in single-threaded environments for single-threaded
1493use. And in theory, RCU read-side critical sections may be composed, and
1494in fact may be nested arbitrarily deeply. In practice, as with all
1495real-world implementations of composable constructs, there are
1496limitations.
1497
1498Implementations of RCU for which rcu_read_lock() and
1499rcu_read_unlock() generate no code, such as Linux-kernel RCU when
1500``CONFIG_PREEMPTION=n``, can be nested arbitrarily deeply. After all, there
1501is no overhead. Except that if all these instances of
1502rcu_read_lock() and rcu_read_unlock() are visible to the
1503compiler, compilation will eventually fail due to exhausting memory,
1504mass storage, or user patience, whichever comes first. If the nesting is
1505not visible to the compiler, as is the case with mutually recursive
1506functions each in its own translation unit, stack overflow will result.
1507If the nesting takes the form of loops, perhaps in the guise of tail
1508recursion, either the control variable will overflow or (in the Linux
1509kernel) you will get an RCU CPU stall warning. Nevertheless, this class
1510of RCU implementations is one of the most composable constructs in
1511existence.
1512
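For example, the following sketch is perfectly legal, with RCU
treating the nested pair of markers as a single large read-side
critical section:

   ::

       rcu_read_lock();
       rcu_read_lock();   /* Nesting is legal and essentially free. */
       p = rcu_dereference(gp);
       do_something_with(p);
       rcu_read_unlock();
       rcu_read_unlock(); /* Critical section ends here. */
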
1513RCU implementations that explicitly track nesting depth are limited by
1514the nesting-depth counter. For example, the Linux kernel's preemptible
1515RCU limits nesting to ``INT_MAX``. This should suffice for almost all
1516practical purposes. That said, a consecutive pair of RCU read-side
1517critical sections between which there is an operation that waits for a
1518grace period cannot be enclosed in another RCU read-side critical
1519section. This is because it is not legal to wait for a grace period
1520within an RCU read-side critical section: To do so would result either
1521in deadlock or in RCU implicitly splitting the enclosing RCU read-side
1522critical section, neither of which is conducive to a long-lived and
1523prosperous kernel.
1524
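A sketch of the forbidden composition, in which an enclosing reader
attempts to span a grace-period wait:

   ::

       rcu_read_lock();     /* Enclosing read-side critical section. */
       rcu_read_lock();
       do_something();
       rcu_read_unlock();
       synchronize_rcu();   /* BUG: Grace-period wait within a reader! */
       rcu_read_lock();
       do_something_else();
       rcu_read_unlock();
       rcu_read_unlock();
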
1525It is worth noting that RCU is not alone in limiting composability. For
1526example, many transactional-memory implementations prohibit composing a
1527pair of transactions separated by an irrevocable operation (for example,
1528a network receive operation). For another example, lock-based critical
1529sections can be composed surprisingly freely, but only if deadlock is
1530avoided.
1531
1532In short, although RCU read-side critical sections are highly
1533composable, care is required in some situations, just as is the case for
1534any other composable synchronization mechanism.
1535
1536Corner Cases
1537~~~~~~~~~~~~
1538
1539A given RCU workload might have an endless and intense stream of RCU
1540read-side critical sections, perhaps even so intense that there was
1541never a point in time during which there was not at least one RCU
1542read-side critical section in flight. RCU cannot allow this situation to
1543block grace periods: As long as all the RCU read-side critical sections
1544are finite, grace periods must also be finite.
1545
1546That said, preemptible RCU implementations could potentially result in
1547RCU read-side critical sections being preempted for long durations,
1548which has the effect of creating a long-duration RCU read-side critical
1549section. This situation can arise only in heavily loaded systems, but
1550systems using real-time priorities are of course more vulnerable.
1551Therefore, RCU priority boosting is provided to help deal with this
1552case. That said, the exact requirements on RCU priority boosting will
1553likely evolve as more experience accumulates.
1554
1555Other workloads might have very high update rates. Although one can
1556argue that such workloads should instead use something other than RCU,
1557the fact remains that RCU must handle such workloads gracefully. This
1558requirement is another factor driving batching of grace periods, but it
1559is also the driving force behind the checks for large numbers of queued
1560RCU callbacks in the call_rcu() code path. Finally, high update
1561rates should not delay RCU read-side critical sections, although some
1562small read-side delays can occur when using
1563synchronize_rcu_expedited(), courtesy of this function's use of
1564smp_call_function_single().
1565
1566Although all three of these corner cases were understood in the early
15671990s, a simple user-level test consisting of ``close(open(path))`` in a
1568tight loop in the early 2000s suddenly provided a much deeper
1569appreciation of the high-update-rate corner case. This test also
1570motivated addition of some RCU code to react to high update rates, for
1571example, if a given CPU finds itself with more than 10,000 RCU callbacks
queued, RCU will take evasive action by more aggressively
1573starting grace periods and more aggressively forcing completion of
1574grace-period processing. This evasive action causes the grace period to
1575complete more quickly, but at the cost of restricting RCU's batching
1576optimizations, thus increasing the CPU overhead incurred by that grace
1577period.
1578
1579Software-Engineering Requirements
1580---------------------------------
1581
1582Between Murphy's Law and “To err is human”, it is necessary to guard
1583against mishaps and misuse:
1584
1585#. It is all too easy to forget to use rcu_read_lock() everywhere
1586   that it is needed, so kernels built with ``CONFIG_PROVE_RCU=y`` will
1587   splat if rcu_dereference() is used outside of an RCU read-side
1588   critical section. Update-side code can use
1589   rcu_dereference_protected(), which takes a `lockdep
1590   expression <https://lwn.net/Articles/371986/>`__ to indicate what is
1591   providing the protection. If the indicated protection is not
1592   provided, a lockdep splat is emitted.
1593   Code shared between readers and updaters can use
1594   rcu_dereference_check(), which also takes a lockdep expression,
1595   and emits a lockdep splat if neither rcu_read_lock() nor the
1596   indicated protection is in place. In addition,
1597   rcu_dereference_raw() is used in those (hopefully rare) cases
1598   where the required protection cannot be easily described. Finally,
   rcu_read_lock_held() is provided to allow a function to verify
   that it has been invoked within an RCU read-side critical section.
   (A sketch showing several of these APIs working together appears
   after this list.) I was made aware of this set of requirements
   shortly after Thomas Gleixner audited a number of RCU uses.
1603#. A given function might wish to check for RCU-related preconditions
1604   upon entry, before using any other RCU API. The
1605   rcu_lockdep_assert() does this job, asserting the expression in
1606   kernels having lockdep enabled and doing nothing otherwise.
1607#. It is also easy to forget to use rcu_assign_pointer() and
1608   rcu_dereference(), perhaps (incorrectly) substituting a simple
1609   assignment. To catch this sort of error, a given RCU-protected
1610   pointer may be tagged with ``__rcu``, after which sparse will
1611   complain about simple-assignment accesses to that pointer. Arnd
1612   Bergmann made me aware of this requirement, and also supplied the
1613   needed `patch series <https://lwn.net/Articles/376011/>`__.
1614#. Kernels built with ``CONFIG_DEBUG_OBJECTS_RCU_HEAD=y`` will splat if
1615   a data element is passed to call_rcu() twice in a row, without a
1616   grace period in between. (This error is similar to a double free.)
1617   The corresponding ``rcu_head`` structures that are dynamically
1618   allocated are automatically tracked, but ``rcu_head`` structures
1619   allocated on the stack must be initialized with
1620   init_rcu_head_on_stack() and cleaned up with
1621   destroy_rcu_head_on_stack(). Similarly, statically allocated
1622   non-stack ``rcu_head`` structures must be initialized with
1623   init_rcu_head() and cleaned up with destroy_rcu_head().
1624   Mathieu Desnoyers made me aware of this requirement, and also
1625   supplied the needed
1626   `patch <https://lore.kernel.org/r/20100319013024.GA28456@Krystal>`__.
1627#. An infinite loop in an RCU read-side critical section will eventually
1628   trigger an RCU CPU stall warning splat, with the duration of
1629   “eventually” being controlled by the ``RCU_CPU_STALL_TIMEOUT``
1630   ``Kconfig`` option, or, alternatively, by the
1631   ``rcupdate.rcu_cpu_stall_timeout`` boot/sysfs parameter. However, RCU
1632   is not obligated to produce this splat unless there is a grace period
1633   waiting on that particular RCU read-side critical section.
1634
1635   Some extreme workloads might intentionally delay RCU grace periods,
1636   and systems running those workloads can be booted with
1637   ``rcupdate.rcu_cpu_stall_suppress`` to suppress the splats. This
1638   kernel parameter may also be set via ``sysfs``. Furthermore, RCU CPU
1639   stall warnings are counter-productive during sysrq dumps and during
1640   panics. RCU therefore supplies the rcu_sysrq_start() and
1641   rcu_sysrq_end() API members to be called before and after long
1642   sysrq dumps. RCU also supplies the rcu_panic() notifier that is
1643   automatically invoked at the beginning of a panic to suppress further
1644   RCU CPU stall warnings.
1645
1646   This requirement made itself known in the early 1990s, pretty much
1647   the first time that it was necessary to debug a CPU stall. That said,
1648   the initial implementation in DYNIX/ptx was quite generic in
1649   comparison with that of Linux.
1650
1651#. Although it would be very good to detect pointers leaking out of RCU
1652   read-side critical sections, there is currently no good way of doing
1653   this. One complication is the need to distinguish between pointers
1654   leaking and pointers that have been handed off from RCU to some other
1655   synchronization mechanism, for example, reference counting.
1656#. In kernels built with ``CONFIG_RCU_TRACE=y``, RCU-related information
1657   is provided via event tracing.
1658#. Open-coded use of rcu_assign_pointer() and rcu_dereference()
1659   to create typical linked data structures can be surprisingly
1660   error-prone. Therefore, RCU-protected `linked
1661   lists <https://lwn.net/Articles/609973/#RCU%20List%20APIs>`__ and,
1662   more recently, RCU-protected `hash
1663   tables <https://lwn.net/Articles/612100/>`__ are available. Many
1664   other special-purpose RCU-protected data structures are available in
1665   the Linux kernel and the userspace RCU library.
1666#. Some linked structures are created at compile time, but still require
1667   ``__rcu`` checking. The RCU_POINTER_INITIALIZER() macro serves
1668   this purpose.
1669#. It is not necessary to use rcu_assign_pointer() when creating
1670   linked structures that are to be published via a single external
1671   pointer. The RCU_INIT_POINTER() macro is provided for this task.
1672
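For example, the lockdep and sparse facilities described in the first
and third items above might be combined as in the following sketch, in
which ``foo_head``, ``foo_lock``, and both functions are hypothetical:

   ::

       struct foo __rcu *foo_head;  /* sparse checks __rcu accesses. */
       DEFINE_SPINLOCK(foo_lock);   /* Protects updates to foo_head. */

       int foo_get_a(void)          /* Reader. */
       {
         int ret;

         rcu_read_lock();           /* Else CONFIG_PROVE_RCU=y splats. */
         ret = rcu_dereference(foo_head)->a; /* Assumes non-NULL. */
         rcu_read_unlock();
         return ret;
       }

       void foo_replace(struct foo *newp)  /* Updater. */
       {
         struct foo *oldp;

         spin_lock(&foo_lock);
         oldp = rcu_dereference_protected(foo_head,
                                          lockdep_is_held(&foo_lock));
         rcu_assign_pointer(foo_head, newp);
         spin_unlock(&foo_lock);
         kfree_rcu(oldp, rh); /* Assumes struct foo has rcu_head rh. */
       }
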
This is not a hard-and-fast list: RCU's diagnostic capabilities will
1674continue to be guided by the number and type of usage bugs found in
1675real-world RCU usage.
1676
1677Linux Kernel Complications
1678--------------------------
1679
1680The Linux kernel provides an interesting environment for all kinds of
1681software, including RCU. Some of the relevant points of interest are as
1682follows:
1683
1684#. `Configuration`_
1685#. `Firmware Interface`_
1686#. `Early Boot`_
1687#. `Interrupts and NMIs`_
1688#. `Loadable Modules`_
1689#. `Hotplug CPU`_
1690#. `Scheduler and RCU`_
1691#. `Tracing and RCU`_
1692#. `Accesses to User Memory and RCU`_
1693#. `Energy Efficiency`_
1694#. `Scheduling-Clock Interrupts and RCU`_
1695#. `Memory Efficiency`_
1696#. `Performance, Scalability, Response Time, and Reliability`_
1697
1698This list is probably incomplete, but it does give a feel for the most
1699notable Linux-kernel complications. Each of the following sections
1700covers one of the above topics.
1701
1702Configuration
1703~~~~~~~~~~~~~
1704
1705RCU's goal is automatic configuration, so that almost nobody needs to
1706worry about RCU's ``Kconfig`` options. And for almost all users, RCU
1707does in fact work well “out of the box.”
1708
1709However, there are specialized use cases that are handled by kernel boot
1710parameters and ``Kconfig`` options. Unfortunately, the ``Kconfig``
1711system will explicitly ask users about new ``Kconfig`` options, which
requires that almost all of them be hidden behind a ``CONFIG_RCU_EXPERT``
1713``Kconfig`` option.
1714
1715This all should be quite obvious, but the fact remains that Linus
1716Torvalds recently had to
1717`remind <https://lore.kernel.org/r/CA+55aFy4wcCwaL4okTs8wXhGZ5h-ibecy_Meg9C4MNQrUnwMcg@mail.gmail.com>`__
1718me of this requirement.
1719
1720Firmware Interface
1721~~~~~~~~~~~~~~~~~~
1722
In many cases, the kernel obtains information about the system from the
1724firmware, and sometimes things are lost in translation. Or the
1725translation is accurate, but the original message is bogus.
1726
1727For example, some systems' firmware overreports the number of CPUs,
1728sometimes by a large factor. If RCU naively believed the firmware, as it
1729used to do, it would create too many per-CPU kthreads. Although the
1730resulting system will still run correctly, the extra kthreads needlessly
1731consume memory and can cause confusion when they show up in ``ps``
1732listings.
1733
1734RCU must therefore wait for a given CPU to actually come online before
1735it can allow itself to believe that the CPU actually exists. The
1736resulting “ghost CPUs” (which are never going to come online) cause a
1737number of `interesting
1738complications <https://paulmck.livejournal.com/37494.html>`__.
1739
1740Early Boot
1741~~~~~~~~~~
1742
1743The Linux kernel's boot sequence is an interesting process, and RCU is
1744used early, even before rcu_init() is invoked. In fact, a number of
1745RCU's primitives can be used as soon as the initial task's
1746``task_struct`` is available and the boot CPU's per-CPU variables are
1747set up. The read-side primitives (rcu_read_lock(),
1748rcu_read_unlock(), rcu_dereference(), and
1749rcu_access_pointer()) will operate normally very early on, as will
1750rcu_assign_pointer().
1751
1752Although call_rcu() may be invoked at any time during boot,
1753callbacks are not guaranteed to be invoked until after all of RCU's
1754kthreads have been spawned, which occurs at early_initcall() time.
1755This delay in callback invocation is due to the fact that RCU does not
1756invoke callbacks until it is fully initialized, and this full
1757initialization cannot occur until after the scheduler has initialized
1758itself to the point where RCU can spawn and run its kthreads. In theory,
it would be possible to invoke callbacks earlier; however, this is not a
1760panacea because there would be severe restrictions on what operations
1761those callbacks could invoke.
1762
Perhaps surprisingly, synchronize_rcu() and
synchronize_rcu_expedited() will operate normally during very early
boot, the reason being that there is only one CPU and preemption is
disabled. This means that the call to synchronize_rcu() (or friends)
is itself a quiescent state and thus a grace period, so the early-boot
implementation can be a no-op.
1769
1770However, once the scheduler has spawned its first kthread, this early
1771boot trick fails for synchronize_rcu() (as well as for
1772synchronize_rcu_expedited()) in ``CONFIG_PREEMPTION=y`` kernels. The
1773reason is that an RCU read-side critical section might be preempted,
1774which means that a subsequent synchronize_rcu() really does have to
1775wait for something, as opposed to simply returning immediately.
1776Unfortunately, synchronize_rcu() can't do this until all of its
1777kthreads are spawned, which doesn't happen until some time during
early_initcall() time. But this is no excuse: RCU is nevertheless
1779required to correctly handle synchronous grace periods during this time
1780period. Once all of its kthreads are up and running, RCU starts running
1781normally.
1782
1783+-----------------------------------------------------------------------+
1784| **Quick Quiz**:                                                       |
1785+-----------------------------------------------------------------------+
1786| How can RCU possibly handle grace periods before all of its kthreads  |
1787| have been spawned???                                                  |
1788+-----------------------------------------------------------------------+
1789| **Answer**:                                                           |
1790+-----------------------------------------------------------------------+
1791| Very carefully!                                                       |
1792| During the “dead zone” between the time that the scheduler spawns the |
1793| first task and the time that all of RCU's kthreads have been spawned, |
1794| all synchronous grace periods are handled by the expedited            |
1795| grace-period mechanism. At runtime, this expedited mechanism relies   |
1796| on workqueues, but during the dead zone the requesting task itself    |
1797| drives the desired expedited grace period. Because dead-zone          |
1798| execution takes place within task context, everything works. Once the |
1799| dead zone ends, expedited grace periods go back to using workqueues,  |
1800| as is required to avoid problems that would otherwise occur when a    |
1801| user task received a POSIX signal while driving an expedited grace    |
1802| period.                                                               |
1803|                                                                       |
1804| And yes, this does mean that it is unhelpful to send POSIX signals to |
1805| random tasks between the time that the scheduler spawns its first     |
1806| kthread and the time that RCU's kthreads have all been spawned. If    |
1807| there ever turns out to be a good reason for sending POSIX signals    |
1808| during that time, appropriate adjustments will be made. (If it turns  |
1809| out that POSIX signals are sent during this time for no good reason,  |
1810| other adjustments will be made, appropriate or otherwise.)            |
1811+-----------------------------------------------------------------------+
1812
1813I learned of these boot-time requirements as a result of a series of
1814system hangs.
1815
1816Interrupts and NMIs
1817~~~~~~~~~~~~~~~~~~~
1818
1819The Linux kernel has interrupts, and RCU read-side critical sections are
1820legal within interrupt handlers and within interrupt-disabled regions of
1821code, as are invocations of call_rcu().
1822
1823Some Linux-kernel architectures can enter an interrupt handler from
1824non-idle process context, and then just never leave it, instead
1825stealthily transitioning back to process context. This trick is
1826sometimes used to invoke system calls from inside the kernel. These
1827“half-interrupts” mean that RCU has to be very careful about how it
1828counts interrupt nesting levels. I learned of this requirement the hard
1829way during a rewrite of RCU's dyntick-idle code.
1830
1831The Linux kernel has non-maskable interrupts (NMIs), and RCU read-side
1832critical sections are legal within NMI handlers. Thankfully, RCU
1833update-side primitives, including call_rcu(), are prohibited within
1834NMI handlers.
1835
1836The name notwithstanding, some Linux-kernel architectures can have
1837nested NMIs, which RCU must handle correctly. Andy Lutomirski `surprised
1838me <https://lore.kernel.org/r/CALCETrXLq1y7e_dKFPgou-FKHB6Pu-r8+t-6Ds+8=va7anBWDA@mail.gmail.com>`__
1839with this requirement; he also kindly surprised me with `an
1840algorithm <https://lore.kernel.org/r/CALCETrXSY9JpW3uE6H8WYk81sg56qasA2aqmjMPsq5dOtzso=g@mail.gmail.com>`__
1841that meets this requirement.
1842
1843Furthermore, NMI handlers can be interrupted by what appear to RCU to be
1844normal interrupts. One way that this can happen is for code that
1845directly invokes rcu_irq_enter() and rcu_irq_exit() to be called
1846from an NMI handler. This astonishing fact of life prompted the current
1847code structure, which has rcu_irq_enter() invoking
1848rcu_nmi_enter() and rcu_irq_exit() invoking rcu_nmi_exit().
1849And yes, I also learned of this requirement the hard way.
1850
1851Loadable Modules
1852~~~~~~~~~~~~~~~~
1853
1854The Linux kernel has loadable modules, and these modules can also be
1855unloaded. After a given module has been unloaded, any attempt to call
1856one of its functions results in a segmentation fault. The module-unload
1857functions must therefore cancel any delayed calls to loadable-module
1858functions, for example, any outstanding mod_timer() must be dealt
1859with via del_timer_sync() or similar.
1860
1861Unfortunately, there is no way to cancel an RCU callback; once you
1862invoke call_rcu(), the callback function is eventually going to be
1863invoked, unless the system goes down first. Because it is normally
1864considered socially irresponsible to crash the system in response to a
1865module unload request, we need some other way to deal with in-flight RCU
1866callbacks.
1867
1868RCU therefore provides rcu_barrier(), which waits until all
1869in-flight RCU callbacks have been invoked. If a module uses
1870call_rcu(), its exit function should therefore prevent any future
1871invocation of call_rcu(), then invoke rcu_barrier(). In theory,
1872the underlying module-unload code could invoke rcu_barrier()
1873unconditionally, but in practice this would incur unacceptable
1874latencies.
1875
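A sketch of a module-exit function following this procedure, in which
mymod_exit() and mymod_stop_new_callbacks() are hypothetical:

   ::

       static void __exit mymod_exit(void)
       {
         mymod_stop_new_callbacks(); /* No new call_rcu() invocations... */
         rcu_barrier();              /* ...then wait out posted callbacks. */
         /* Only now is it safe to unload the module's code and data. */
       }
       module_exit(mymod_exit);
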
1876Nikita Danilov noted this requirement for an analogous
1877filesystem-unmount situation, and Dipankar Sarma incorporated
1878rcu_barrier() into RCU. The need for rcu_barrier() for module
1879unloading became apparent later.
1880
1881.. important::
1882
1883   The rcu_barrier() function is not, repeat,
1884   *not*, obligated to wait for a grace period. It is instead only required
1885   to wait for RCU callbacks that have already been posted. Therefore, if
1886   there are no RCU callbacks posted anywhere in the system,
1887   rcu_barrier() is within its rights to return immediately. Even if
1888   there are callbacks posted, rcu_barrier() does not necessarily need
1889   to wait for a grace period.
1890
1891+-----------------------------------------------------------------------+
1892| **Quick Quiz**:                                                       |
1893+-----------------------------------------------------------------------+
| Wait a minute! Each RCU callback must wait for a grace period to      |
1895| complete, and rcu_barrier() must wait for each pre-existing           |
1896| callback to be invoked. Doesn't rcu_barrier() therefore need to       |
1897| wait for a full grace period if there is even one callback posted     |
1898| anywhere in the system?                                               |
1899+-----------------------------------------------------------------------+
1900| **Answer**:                                                           |
1901+-----------------------------------------------------------------------+
1902| Absolutely not!!!                                                     |
| Yes, each RCU callback must wait for a grace period to complete, but  |
1904| it might well be partly (or even completely) finished waiting by the  |
1905| time rcu_barrier() is invoked. In that case, rcu_barrier()            |
1906| need only wait for the remaining portion of the grace period to       |
1907| elapse. So even if there are quite a few callbacks posted,            |
1908| rcu_barrier() might well return quite quickly.                        |
1909|                                                                       |
1910| So if you need to wait for a grace period as well as for all          |
1911| pre-existing callbacks, you will need to invoke both                  |
1912| synchronize_rcu() and rcu_barrier(). If latency is a concern,         |
1913| you can always use workqueues to invoke them concurrently.            |
1914+-----------------------------------------------------------------------+
1915
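A sketch of the concurrent approach suggested by this quiz's answer,
with wait_for_gp_and_callbacks() being a hypothetical name:

   ::

       static void sync_rcu_workfn(struct work_struct *unused)
       {
         synchronize_rcu();
       }
       static DECLARE_WORK(sync_rcu_work, sync_rcu_workfn);

       static void wait_for_gp_and_callbacks(void)
       {
         schedule_work(&sync_rcu_work); /* Grace period in parallel... */
         rcu_barrier();                 /* ...with the callback flush. */
         flush_work(&sync_rcu_work);    /* Both have now completed. */
       }
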
1916Hotplug CPU
1917~~~~~~~~~~~
1918
1919The Linux kernel supports CPU hotplug, which means that CPUs can come
1920and go. It is of course illegal to use any RCU API member from an
1921offline CPU, with the exception of `SRCU <Sleepable RCU_>`__ read-side
1922critical sections. This requirement was present from day one in
1923DYNIX/ptx, but on the other hand, the Linux kernel's CPU-hotplug
1924implementation is “interesting.”
1925
1926The Linux-kernel CPU-hotplug implementation has notifiers that are used
1927to allow the various kernel subsystems (including RCU) to respond
1928appropriately to a given CPU-hotplug operation. Most RCU operations may
1929be invoked from CPU-hotplug notifiers, including even synchronous
grace-period operations such as synchronize_rcu() and
synchronize_rcu_expedited().  However, these synchronous operations
1932do block and therefore cannot be invoked from notifiers that execute via
1933stop_machine(), specifically those between the ``CPUHP_AP_OFFLINE``
1934and ``CPUHP_AP_ONLINE`` states.
1935
1936In addition, all-callback-wait operations such as rcu_barrier() may
1937not be invoked from any CPU-hotplug notifier.  This restriction is due
1938to the fact that there are phases of CPU-hotplug operations where the
1939outgoing CPU's callbacks will not be invoked until after the CPU-hotplug
1940operation ends, which could also result in deadlock. Furthermore,
1941rcu_barrier() blocks CPU-hotplug operations during its execution,
1942which results in another type of deadlock when invoked from a CPU-hotplug
1943notifier.
1944
1945Finally, RCU must avoid deadlocks due to interaction between hotplug,
1946timers and grace period processing. It does so by maintaining its own set
1947of books that duplicate the centrally maintained ``cpu_online_mask``,
1948and also by reporting quiescent states explicitly when a CPU goes
1949offline.  This explicit reporting of quiescent states avoids any need
1950for the force-quiescent-state loop (FQS) to report quiescent states for
1951offline CPUs.  However, as a debugging measure, the FQS loop does splat
1952if offline CPUs block an RCU grace period for too long.
1953
1954An offline CPU's quiescent state will be reported either:
1955
19561.  As the CPU goes offline using RCU's hotplug notifier (rcu_report_dead()).
19572.  When grace period initialization (rcu_gp_init()) detects a
1958    race either with CPU offlining or with a task unblocking on a leaf
1959    ``rcu_node`` structure whose CPUs are all offline.
1960
1961The CPU-online path (rcu_cpu_starting()) should never need to report
1962a quiescent state for an offline CPU.  However, as a debugging measure,
1963it does emit a warning if a quiescent state was not already reported
1964for that CPU.
1965
1966During the checking/modification of RCU's hotplug bookkeeping, the
1967corresponding CPU's leaf node lock is held. This avoids race conditions
1968between RCU's hotplug notifier hooks, the grace period initialization
1969code, and the FQS loop, all of which refer to or modify this bookkeeping.
1970
1971Scheduler and RCU
1972~~~~~~~~~~~~~~~~~
1973
1974RCU makes use of kthreads, and it is necessary to avoid excessive CPU-time
1975accumulation by these kthreads. This requirement was no surprise, but
1976RCU's violation of it when running context-switch-heavy workloads when
1977built with ``CONFIG_NO_HZ_FULL=y`` `did come as a surprise
1978[PDF] <http://www.rdrop.com/users/paulmck/scalability/paper/BareMetal.2015.01.15b.pdf>`__.
1979RCU has made good progress towards meeting this requirement, even for
1980context-switch-heavy ``CONFIG_NO_HZ_FULL=y`` workloads, but there is
1981room for further improvement.
1982
There is no longer any prohibition against holding any of the
1984scheduler's runqueue or priority-inheritance spinlocks across an
1985rcu_read_unlock(), even if interrupts and preemption were enabled
1986somewhere within the corresponding RCU read-side critical section.
1987Therefore, it is now perfectly legal to execute rcu_read_lock()
1988with preemption enabled, acquire one of the scheduler locks, and hold
1989that lock across the matching rcu_read_unlock().
1990
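In other words, the following sketch is now legal, with some_rq_lock
standing in for a runqueue or priority-inheritance lock:

   ::

       rcu_read_lock();              /* Preemption may be enabled here. */
       do_something_rcu_protected();
       raw_spin_lock(&some_rq_lock); /* A scheduler-related lock. */
       rcu_read_unlock();            /* Legal while holding that lock. */
       raw_spin_unlock(&some_rq_lock);
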
1991Similarly, the RCU flavor consolidation has removed the need for negative
1992nesting.  The fact that interrupt-disabled regions of code act as RCU
1993read-side critical sections implicitly avoids earlier issues that used
to result in destructive recursion via interrupt handlers' use of RCU.
1995
1996Tracing and RCU
1997~~~~~~~~~~~~~~~
1998
1999It is possible to use tracing on RCU code, but tracing itself uses RCU.
2000For this reason, rcu_dereference_raw_check() is provided for use
2001by tracing, which avoids the destructive recursion that could otherwise
2002ensue. This API is also used by virtualization in some architectures,
2003where RCU readers execute in environments in which tracing cannot be
2004used. The tracing folks both located the requirement and provided the
2005needed fix, so this surprise requirement was relatively painless.
2006
2007Accesses to User Memory and RCU
2008~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2009
2010The kernel needs to access user-space memory, for example, to access data
2011referenced by system-call parameters.  The get_user() macro does this job.
2012
2013However, user-space memory might well be paged out, which means that
2014get_user() might well page-fault and thus block while waiting for the
2015resulting I/O to complete.  It would be a very bad thing for the compiler to
2016reorder a get_user() invocation into an RCU read-side critical section.
2017
2018For example, suppose that the source code looked like this:
2019
2020  ::
2021
2022       1 rcu_read_lock();
2023       2 p = rcu_dereference(gp);
2024       3 v = p->value;
2025       4 rcu_read_unlock();
2026       5 get_user(user_v, user_p);
2027       6 do_something_with(v, user_v);
2028
2029The compiler must not be permitted to transform this source code into
2030the following:
2031
2032  ::
2033
2034       1 rcu_read_lock();
2035       2 p = rcu_dereference(gp);
2036       3 get_user(user_v, user_p); // BUG: POSSIBLE PAGE FAULT!!!
2037       4 v = p->value;
2038       5 rcu_read_unlock();
2039       6 do_something_with(v, user_v);
2040
2041If the compiler did make this transformation in a ``CONFIG_PREEMPTION=n`` kernel
2042build, and if get_user() did page fault, the result would be a quiescent
2043state in the middle of an RCU read-side critical section.  This misplaced
2044quiescent state could result in line 4 being a use-after-free access,
2045which could be bad for your kernel's actuarial statistics.  Similar examples
2046can be constructed with the call to get_user() preceding the
2047rcu_read_lock().
2048
2049Unfortunately, get_user() doesn't have any particular ordering properties,
2050and in some architectures the underlying ``asm`` isn't even marked
``volatile``.  And even if it were marked ``volatile``, the above access to
2052``p->value`` is not volatile, so the compiler would not have any reason to keep
2053those two accesses in order.
2054
2055Therefore, the Linux-kernel definitions of rcu_read_lock() and
2056rcu_read_unlock() must act as compiler barriers, at least for outermost
2057instances of rcu_read_lock() and rcu_read_unlock() within a nested set
2058of RCU read-side critical sections.
2059
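As a minimal sketch, and most definitely not the kernel's actual
definitions, markers that emit no instructions yet still constrain the
compiler might look like this:

   ::

       static inline void sketch_rcu_read_lock(void)
       {
         barrier(); /* No instructions, but no compiler reordering. */
       }

       static inline void sketch_rcu_read_unlock(void)
       {
         barrier(); /* So get_user() cannot migrate into the reader. */
       }
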
2060Energy Efficiency
2061~~~~~~~~~~~~~~~~~
2062
2063Interrupting idle CPUs is considered socially unacceptable, especially
2064by people with battery-powered embedded systems. RCU therefore conserves
2065energy by detecting which CPUs are idle, including tracking CPUs that
2066have been interrupted from idle. This is a large part of the
2067energy-efficiency requirement, so I learned of this via an irate phone
2068call.
2069
2070Because RCU avoids interrupting idle CPUs, it is illegal to execute an
2071RCU read-side critical section on an idle CPU. (Kernels built with
2072``CONFIG_PROVE_RCU=y`` will splat if you try it.) The RCU_NONIDLE()
macro and ``_rcuidle`` event tracing are provided to work around this
2074restriction. In addition, rcu_is_watching() may be used to test
2075whether or not it is currently legal to run RCU read-side critical
2076sections on this CPU. I learned of the need for diagnostics on the one
2077hand and RCU_NONIDLE() on the other while inspecting idle-loop code.
2078Steven Rostedt supplied ``_rcuidle`` event tracing, which is used quite
2079heavily in the idle loop. However, there are some restrictions on the
2080code placed within RCU_NONIDLE():
2081
2082#. Blocking is prohibited. In practice, this is not a serious
2083   restriction given that idle tasks are prohibited from blocking to
2084   begin with.
#. Although RCU_NONIDLE() invocations may be nested, they cannot nest
2086   indefinitely deeply. However, given that they can be nested on the
2087   order of a million deep, even on 32-bit systems, this should not be a
2088   serious restriction. This nesting limit would probably be reached
2089   long after the compiler OOMed or the stack overflowed.
2090#. Any code path that enters RCU_NONIDLE() must sequence out of that
2091   same RCU_NONIDLE(). For example, the following is grossly
2092   illegal:
2093
2094      ::
2095
2096	  1     RCU_NONIDLE({
2097	  2       do_something();
2098	  3       goto bad_idea;  /* BUG!!! */
2099	  4       do_something_else();});
2100	  5   bad_idea:
2101
2102
2103   It is just as illegal to transfer control into the middle of
2104   RCU_NONIDLE()'s argument. Yes, in theory, you could transfer in
2105   as long as you also transferred out, but in practice you could also
2106   expect to get sharply worded review comments.
2107
2108It is similarly socially unacceptable to interrupt an ``nohz_full`` CPU
2109running in userspace. RCU must therefore track ``nohz_full`` userspace
execution. More precisely, RCU must be able to sample state at two
points in time, and to determine whether or not some other CPU spent
any intervening time idle and/or executing in userspace.
2113
2114These energy-efficiency requirements have proven quite difficult to
2115understand and to meet, for example, there have been more than five
2116clean-sheet rewrites of RCU's energy-efficiency code, the last of which
2117was finally able to demonstrate `real energy savings running on real
2118hardware
2119[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/AMPenergy.2013.04.19a.pdf>`__.
2120As noted earlier, I learned of many of these requirements via angry
2121phone calls: Flaming me on the Linux-kernel mailing list was apparently
2122not sufficient to fully vent their ire at RCU's energy-efficiency bugs!
2123
2124Scheduling-Clock Interrupts and RCU
2125~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2126
2127The kernel transitions between in-kernel non-idle execution, userspace
2128execution, and the idle loop. Depending on kernel configuration, RCU
2129handles these states differently:
2130
2131+-----------------+------------------+------------------+-----------------+
2132| ``HZ`` Kconfig  | In-Kernel        | Usermode         | Idle            |
2133+=================+==================+==================+=================+
2134| ``HZ_PERIODIC`` | Can rely on      | Can rely on      | Can rely on     |
2135|                 | scheduling-clock | scheduling-clock | RCU's           |
2136|                 | interrupt.       | interrupt and    | dyntick-idle    |
2137|                 |                  | its detection    | detection.      |
2138|                 |                  | of interrupt     |                 |
2139|                 |                  | from usermode.   |                 |
2140+-----------------+------------------+------------------+-----------------+
2141| ``NO_HZ_IDLE``  | Can rely on      | Can rely on      | Can rely on     |
2142|                 | scheduling-clock | scheduling-clock | RCU's           |
2143|                 | interrupt.       | interrupt and    | dyntick-idle    |
2144|                 |                  | its detection    | detection.      |
2145|                 |                  | of interrupt     |                 |
2146|                 |                  | from usermode.   |                 |
2147+-----------------+------------------+------------------+-----------------+
2148| ``NO_HZ_FULL``  | Can only         | Can rely on      | Can rely on     |
2149|                 | sometimes rely   | RCU's            | RCU's           |
2150|                 | on               | dyntick-idle     | dyntick-idle    |
2151|                 | scheduling-clock | detection.       | detection.      |
2152|                 | interrupt. In    |                  |                 |
2153|                 | other cases, it  |                  |                 |
2154|                 | is necessary to  |                  |                 |
2155|                 | bound kernel     |                  |                 |
2156|                 | execution times  |                  |                 |
2157|                 | and/or use       |                  |                 |
2158|                 | IPIs.            |                  |                 |
2159+-----------------+------------------+------------------+-----------------+
2160
2161+-----------------------------------------------------------------------+
2162| **Quick Quiz**:                                                       |
2163+-----------------------------------------------------------------------+
2164| Why can't ``NO_HZ_FULL`` in-kernel execution rely on the              |
2165| scheduling-clock interrupt, just like ``HZ_PERIODIC`` and             |
2166| ``NO_HZ_IDLE`` do?                                                    |
2167+-----------------------------------------------------------------------+
2168| **Answer**:                                                           |
2169+-----------------------------------------------------------------------+
2170| Because, as a performance optimization, ``NO_HZ_FULL`` does not       |
2171| necessarily re-enable the scheduling-clock interrupt on entry to each |
2172| and every system call.                                                |
2173+-----------------------------------------------------------------------+
2174
2175However, RCU must be reliably informed as to whether any given CPU is
2176currently in the idle loop, and, for ``NO_HZ_FULL``, also whether that
2177CPU is executing in usermode, as discussed
2178`earlier <Energy Efficiency_>`__. It also requires that the
2179scheduling-clock interrupt be enabled when RCU needs it to be:
2180
2181#. If a CPU is either idle or executing in usermode, and RCU believes it
2182   is non-idle, the scheduling-clock tick had better be running.
2183   Otherwise, you will get RCU CPU stall warnings. Or at best, very long
2184   (11-second) grace periods, with a pointless IPI waking the CPU from
2185   time to time.
2186#. If a CPU is in a portion of the kernel that executes RCU read-side
2187   critical sections, and RCU believes this CPU to be idle, you will get
2188   random memory corruption. **DON'T DO THIS!!!**
2189   This is one reason to test with lockdep, which will complain about
2190   this sort of thing.
2191#. If a CPU is in a portion of the kernel that is absolutely positively
2192   no-joking guaranteed to never execute any RCU read-side critical
2193   sections, and RCU believes this CPU to be idle, no problem. This
2194   sort of thing is used by some architectures for light-weight
2195   exception handlers, which can then avoid the overhead of
2196   rcu_irq_enter() and rcu_irq_exit() at exception entry and
2197   exit, respectively. Some go further and avoid the entireties of
2198   irq_enter() and irq_exit().
2199   Just make very sure you are running some of your tests with
2200   ``CONFIG_PROVE_RCU=y``, just in case one of your code paths was in
2201   fact joking about not doing RCU read-side critical sections.
2202#. If a CPU is executing in the kernel with the scheduling-clock
2203   interrupt disabled and RCU believes this CPU to be non-idle, and if
2204   the CPU goes idle (from an RCU perspective) every few jiffies, no
2205   problem. It is usually OK for there to be the occasional gap between
2206   idle periods of up to a second or so.
2207   If the gap grows too long, you get RCU CPU stall warnings.
2208#. If a CPU is either idle or executing in usermode, and RCU believes it
2209   to be idle, of course no problem.
2210#. If a CPU is executing in the kernel, the kernel code path is passing
2211   through quiescent states at a reasonable frequency (preferably about
2212   once per few jiffies, but the occasional excursion to a second or so
2213   is usually OK) and the scheduling-clock interrupt is enabled, of
2214   course no problem.
2215   If the gap between a successive pair of quiescent states grows too
2216   long, you get RCU CPU stall warnings.
2217
2218+-----------------------------------------------------------------------+
2219| **Quick Quiz**:                                                       |
2220+-----------------------------------------------------------------------+
2221| But what if my driver has a hardware interrupt handler that can run   |
| for many seconds? I cannot invoke schedule() from a hardware          |
2223| interrupt handler, after all!                                         |
2224+-----------------------------------------------------------------------+
2225| **Answer**:                                                           |
2226+-----------------------------------------------------------------------+
| One approach is to do ``rcu_irq_exit(); rcu_irq_enter();`` every so   |
2228| often. But given that long-running interrupt handlers can cause other |
2229| problems, not least for response time, shouldn't you work to keep     |
2230| your interrupt handler's runtime within reasonable bounds?            |
2231+-----------------------------------------------------------------------+
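
For illustration, here is a minimal sketch of the approach suggested in
the answer, where ``more_work()`` and ``do_chunk_of_work()`` are
hypothetical device-specific helpers:

   ::

       static irqreturn_t my_long_running_isr(int irq, void *dev)
       {
         while (more_work(dev)) {
           do_chunk_of_work(dev);
           /* Momentarily tell RCU that this CPU is idle so that
            * this handler does not stall grace periods. */
           rcu_irq_exit();
           rcu_irq_enter();
         }
         return IRQ_HANDLED;
       }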
2232
2233But as long as RCU is properly informed of kernel state transitions
2234between in-kernel execution, usermode execution, and idle, and as long
2235as the scheduling-clock interrupt is enabled when RCU needs it to be,
2236you can rest assured that the bugs you encounter will be in some other
2237part of RCU or some other part of the kernel!
2238
2239Memory Efficiency
2240~~~~~~~~~~~~~~~~~
2241
2242Although small-memory non-realtime systems can simply use Tiny RCU, code
2243size is only one aspect of memory efficiency. Another aspect is the size
2244of the ``rcu_head`` structure used by call_rcu() and
2245kfree_rcu(). Although this structure contains nothing more than a
2246pair of pointers, it does appear in many RCU-protected data structures,
2247including some that are size critical. The ``page`` structure is a case
2248in point, as evidenced by the many occurrences of the ``union`` keyword
2249within that structure.
2250
2251This need for memory efficiency is one reason that RCU uses hand-crafted
2252singly linked lists to track the ``rcu_head`` structures that are
2253waiting for a grace period to elapse. It is also the reason why
2254``rcu_head`` structures do not contain debug information, such as fields
2255tracking the file and line of the call_rcu() or kfree_rcu() that
2256posted them. Although this information might appear in debug-only kernel
2257builds at some point, in the meantime, the ``->func`` field will often
2258provide the needed debug information.
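
For reference, here is a minimal sketch showing how an ``rcu_head``
structure is typically embedded and posted, with ``struct foo`` and its
pointer ``fp`` being hypothetical:

   ::

       struct foo {
         int data;
         struct rcu_head rh;  /* Just a pair of pointers: ->next and ->func. */
       };

       static void foo_reclaim(struct rcu_head *rhp)
       {
         kfree(container_of(rhp, struct foo, rh));
       }

       /* Defer freeing of fp until a grace period has elapsed.  The
        * ->func field, here foo_reclaim(), doubles as debug information. */
       call_rcu(&fp->rh, foo_reclaim);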
2259
2260However, in some cases, the need for memory efficiency leads to even
2261more extreme measures. Returning to the ``page`` structure, the
2262``rcu_head`` field shares storage with a great many other structures
2263that are used at various points in the corresponding page's lifetime. In
2264order to correctly resolve certain `race
2265conditions <https://lore.kernel.org/r/1439976106-137226-1-git-send-email-kirill.shutemov@linux.intel.com>`__,
2266the Linux kernel's memory-management subsystem needs a particular bit to
2267remain zero during all phases of grace-period processing, and that bit
2268happens to map to the bottom bit of the ``rcu_head`` structure's
2269``->next`` field. RCU makes this guarantee as long as call_rcu() is
2270used to post the callback, as opposed to kfree_rcu() or some future
2271“lazy” variant of call_rcu() that might one day be created for
2272energy-efficiency purposes.
2273
2274That said, there are limits. RCU requires that the ``rcu_head``
2275structure be aligned to a two-byte boundary, and passing a misaligned
2276``rcu_head`` structure to one of the call_rcu() family of functions
2277will result in a splat. It is therefore necessary to exercise caution
2278when packing structures containing fields of type ``rcu_head``. Why not
2279a four-byte or even eight-byte alignment requirement? Because the m68k
2280architecture provides only two-byte alignment, and thus acts as
2281alignment's least common denominator.
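
As a sketch of the sort of packing that runs afoul of this requirement,
consider the following, in which the hypothetical ``struct risky`` uses
the ``__packed`` attribute:

   ::

       struct okay {
         struct rcu_head rh;  /* Naturally aligned: no problem. */
         char c;
       };

       struct risky {
         char c;
         struct rcu_head rh;  /* May land on an odd boundary: expect a splat. */
       } __packed;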
2282
2283The reason for reserving the bottom bit of pointers to ``rcu_head``
2284structures is to leave the door open to “lazy” callbacks whose
2285invocations can safely be deferred. Deferring invocation could
2286potentially have energy-efficiency benefits, but only if the rate of
2287non-lazy callbacks decreases significantly for some important workload.
2288In the meantime, reserving the bottom bit keeps this option open in case
2289it one day becomes useful.
2290
2291Performance, Scalability, Response Time, and Reliability
2292~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2293
2294Expanding on the `earlier
2295discussion <Performance and Scalability_>`__, RCU is used heavily by
2296hot code paths in performance-critical portions of the Linux kernel's
2297networking, security, virtualization, and scheduling code paths. RCU
2298must therefore use efficient implementations, especially in its
read-side primitives. To that end, it would be good if preemptible RCU's
implementation of rcu_read_lock() could be inlined; however, doing
so requires resolving ``#include`` issues with the ``task_struct``
structure.
2303
2304The Linux kernel supports hardware configurations with up to 4096 CPUs,
2305which means that RCU must be extremely scalable. Algorithms that involve
2306frequent acquisitions of global locks or frequent atomic operations on
2307global variables simply cannot be tolerated within the RCU
2308implementation. RCU therefore makes heavy use of a combining tree based
2309on the ``rcu_node`` structure. RCU is required to tolerate all CPUs
2310continuously invoking any combination of RCU's runtime primitives with
2311minimal per-operation overhead. In fact, in many cases, increasing load
2312must *decrease* the per-operation overhead, witness the batching
2313optimizations for synchronize_rcu(), call_rcu(),
2314synchronize_rcu_expedited(), and rcu_barrier(). As a general
2315rule, RCU must cheerfully accept whatever the rest of the Linux kernel
2316decides to throw at it.
2317
2318The Linux kernel is used for real-time workloads, especially in
2319conjunction with the `-rt
2320patchset <https://wiki.linuxfoundation.org/realtime/>`__. The
2321real-time-latency response requirements are such that the traditional
2322approach of disabling preemption across RCU read-side critical sections
2323is inappropriate. Kernels built with ``CONFIG_PREEMPTION=y`` therefore use
2324an RCU implementation that allows RCU read-side critical sections to be
2325preempted. This requirement made its presence known after users made it
2326clear that an earlier `real-time
2327patch <https://lwn.net/Articles/107930/>`__ did not meet their needs, in
2328conjunction with some `RCU
2329issues <https://lore.kernel.org/r/20050318002026.GA2693@us.ibm.com>`__
2330encountered by a very early version of the -rt patchset.
2331
2332In addition, RCU must make do with a sub-100-microsecond real-time
2333latency budget. In fact, on smaller systems with the -rt patchset, the
2334Linux kernel provides sub-20-microsecond real-time latencies for the
2335whole kernel, including RCU. RCU's scalability and latency must
2336therefore be sufficient for these sorts of configurations. To my
2337surprise, the sub-100-microsecond real-time latency budget `applies to
2338even the largest systems
2339[PDF] <http://www.rdrop.com/users/paulmck/realtime/paper/bigrt.2013.01.31a.LCA.pdf>`__,
2340up to and including systems with 4096 CPUs. This real-time requirement
2341motivated the grace-period kthread, which also simplified handling of a
2342number of race conditions.
2343
2344RCU must avoid degrading real-time response for CPU-bound threads,
2345whether executing in usermode (which is one use case for
2346``CONFIG_NO_HZ_FULL=y``) or in the kernel. That said, CPU-bound loops in
2347the kernel must execute cond_resched() at least once per few tens of
2348milliseconds in order to avoid receiving an IPI from RCU.
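
For example, a CPU-bound in-kernel loop might be structured as follows,
where ``process_item()`` stands in for the hypothetical per-item work:

   ::

       for (i = 0; i < nr_items; i++) {
         process_item(i);
         /* A voluntary quiescent state every so often keeps RCU
          * happy and avoids the IPI. */
         if (!(i % 1024))
           cond_resched();
       }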
2349
2350Finally, RCU's status as a synchronization primitive means that any RCU
2351failure can result in arbitrary memory corruption that can be extremely
2352difficult to debug. This means that RCU must be extremely reliable,
2353which in practice also means that RCU must have an aggressive
2354stress-test suite. This stress-test suite is called ``rcutorture``.
2355
2356Although the need for ``rcutorture`` was no surprise, the current
2357immense popularity of the Linux kernel is posing interesting—and perhaps
2358unprecedented—validation challenges. To see this, keep in mind that
2359there are well over one billion instances of the Linux kernel running
2360today, given Android smartphones, Linux-powered televisions, and
2361servers. This number can be expected to increase sharply with the advent
2362of the celebrated Internet of Things.
2363
Suppose that RCU contains a race condition that manifests on average
once per million years of runtime. Given the more than one billion
instances noted above, one failure per million instance-years works out
to roughly 1,000 failures per year, so this bug will be occurring about
three times per *day* across the installed base. RCU could simply hide
2367behind hardware error rates, given that no one should really expect
2368their smartphone to last for a million years. However, anyone taking too
2369much comfort from this thought should consider the fact that in most
2370jurisdictions, a successful multi-year test of a given mechanism, which
2371might include a Linux kernel, suffices for a number of types of
2372safety-critical certifications. In fact, rumor has it that the Linux
2373kernel is already being used in production for safety-critical
2374applications. I don't know about you, but I would feel quite bad if a
2375bug in RCU killed someone. Which might explain my recent focus on
2376validation and verification.
2377
2378Other RCU Flavors
2379-----------------
2380
2381One of the more surprising things about RCU is that there are now no
2382fewer than five *flavors*, or API families. In addition, the primary
2383flavor that has been the sole focus up to this point has two different
2384implementations, non-preemptible and preemptible. The other four flavors
2385are listed below, with requirements for each described in a separate
2386section.
2387
2388#. `Bottom-Half Flavor (Historical)`_
2389#. `Sched Flavor (Historical)`_
2390#. `Sleepable RCU`_
2391#. `Tasks RCU`_
2392
2393Bottom-Half Flavor (Historical)
2394~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2395
2396The RCU-bh flavor of RCU has since been expressed in terms of the other
2397RCU flavors as part of a consolidation of the three flavors into a
2398single flavor. The read-side API remains, and continues to disable
2399softirq and to be accounted for by lockdep. Much of the material in this
2400section is therefore strictly historical in nature.
2401
2402The softirq-disable (AKA “bottom-half”, hence the “_bh” abbreviations)
2403flavor of RCU, or *RCU-bh*, was developed by Dipankar Sarma to provide a
2404flavor of RCU that could withstand the network-based denial-of-service
2405attacks researched by Robert Olsson. These attacks placed so much
2406networking load on the system that some of the CPUs never exited softirq
2407execution, which in turn prevented those CPUs from ever executing a
2408context switch, which, in the RCU implementation of that time, prevented
2409grace periods from ever ending. The result was an out-of-memory
2410condition and a system hang.
2411
2412The solution was the creation of RCU-bh, which does
2413local_bh_disable() across its read-side critical sections, and which
2414uses the transition from one type of softirq processing to another as a
2415quiescent state in addition to context switch, idle, user mode, and
2416offline. This means that RCU-bh grace periods can complete even when
2417some of the CPUs execute in softirq indefinitely, thus allowing
2418algorithms based on RCU-bh to withstand network-based denial-of-service
2419attacks.
2420
2421Because rcu_read_lock_bh() and rcu_read_unlock_bh() disable and
re-enable softirq handlers, any attempt to start a softirq handler
2423during the RCU-bh read-side critical section will be deferred. In this
2424case, rcu_read_unlock_bh() will invoke softirq processing, which can
2425take considerable time. One can of course argue that this softirq
2426overhead should be associated with the code following the RCU-bh
2427read-side critical section rather than rcu_read_unlock_bh(), but the
2428fact is that most profiling tools cannot be expected to make this sort
2429of fine distinction. For example, suppose that a three-millisecond-long
2430RCU-bh read-side critical section executes during a time of heavy
2431networking load. There will very likely be an attempt to invoke at least
2432one softirq handler during that three milliseconds, but any such
2433invocation will be delayed until the time of the
2434rcu_read_unlock_bh(). This can of course make it appear at first
2435glance as if rcu_read_unlock_bh() was executing very slowly.
2436
2437The `RCU-bh
2438API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
2439includes rcu_read_lock_bh(), rcu_read_unlock_bh(), rcu_dereference_bh(),
2440rcu_dereference_bh_check(), and rcu_read_lock_bh_held(). However, the
2441old RCU-bh update-side APIs are now gone, replaced by synchronize_rcu(),
2442synchronize_rcu_expedited(), call_rcu(), and rcu_barrier().  In addition,
2443anything that disables bottom halves also marks an RCU-bh read-side
2444critical section, including local_bh_disable() and local_bh_enable(),
2445local_irq_save() and local_irq_restore(), and so on.
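
For reference, a sketch of the RCU-bh read-side API in use, where
``gp`` is a hypothetical RCU-protected global pointer:

   ::

       rcu_read_lock_bh();            /* Disables softirq handlers. */
       p = rcu_dereference_bh(gp);
       if (p)
         do_something_with(p);
       rcu_read_unlock_bh();          /* May run deferred softirq handlers. */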
2446
2447Sched Flavor (Historical)
2448~~~~~~~~~~~~~~~~~~~~~~~~~
2449
2450The RCU-sched flavor of RCU has since been expressed in terms of the
2451other RCU flavors as part of a consolidation of the three flavors into a
2452single flavor. The read-side API remains, and continues to disable
2453preemption and to be accounted for by lockdep. Much of the material in
2454this section is therefore strictly historical in nature.
2455
2456Before preemptible RCU, waiting for an RCU grace period had the side
2457effect of also waiting for all pre-existing interrupt and NMI handlers.
2458However, there are legitimate preemptible-RCU implementations that do
2459not have this property, given that any point in the code outside of an
2460RCU read-side critical section can be a quiescent state. Therefore,
2461*RCU-sched* was created, which follows “classic” RCU in that an
2462RCU-sched grace period waits for pre-existing interrupt and NMI
2463handlers. In kernels built with ``CONFIG_PREEMPTION=n``, the RCU and
2464RCU-sched APIs have identical implementations, while kernels built with
2465``CONFIG_PREEMPTION=y`` provide a separate implementation for each.
2466
2467Note well that in ``CONFIG_PREEMPTION=y`` kernels,
2468rcu_read_lock_sched() and rcu_read_unlock_sched() disable and
2469re-enable preemption, respectively. This means that if there was a
2470preemption attempt during the RCU-sched read-side critical section,
2471rcu_read_unlock_sched() will enter the scheduler, with all the
2472latency and overhead entailed. Just as with rcu_read_unlock_bh(),
2473this can make it look as if rcu_read_unlock_sched() was executing
2474very slowly. However, the highest-priority task won't be preempted, so
2475that task will enjoy low-overhead rcu_read_unlock_sched()
2476invocations.
2477
2478The `RCU-sched
2479API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
2480includes rcu_read_lock_sched(), rcu_read_unlock_sched(),
2481rcu_read_lock_sched_notrace(), rcu_read_unlock_sched_notrace(),
2482rcu_dereference_sched(), rcu_dereference_sched_check(), and
2483rcu_read_lock_sched_held().  However, the old RCU-sched update-side APIs
2484are now gone, replaced by synchronize_rcu(), synchronize_rcu_expedited(),
2485call_rcu(), and rcu_barrier().  In addition, anything that disables
2486preemption also marks an RCU-sched read-side critical section,
2487including preempt_disable() and preempt_enable(), local_irq_save()
2488and local_irq_restore(), and so on.
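
For reference, a sketch of the RCU-sched read-side API in use, again
with ``gp`` being a hypothetical RCU-protected global pointer:

   ::

       rcu_read_lock_sched();         /* Disables preemption. */
       p = rcu_dereference_sched(gp);
       if (p)
         do_something_with(p);
       rcu_read_unlock_sched();       /* May enter the scheduler. */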
2489
2490Sleepable RCU
2491~~~~~~~~~~~~~
2492
2493For well over a decade, someone saying “I need to block within an RCU
2494read-side critical section” was a reliable indication that this someone
2495did not understand RCU. After all, if you are always blocking in an RCU
2496read-side critical section, you can probably afford to use a
2497higher-overhead synchronization mechanism. However, that changed with
2498the advent of the Linux kernel's notifiers, whose RCU read-side critical
2499sections almost never sleep, but sometimes need to. This resulted in the
2500introduction of `sleepable RCU <https://lwn.net/Articles/202847/>`__, or
2501*SRCU*.
2502
2503SRCU allows different domains to be defined, with each such domain
2504defined by an instance of an ``srcu_struct`` structure. A pointer to
2505this structure must be passed in to each SRCU function, for example,
2506``synchronize_srcu(&ss)``, where ``ss`` is the ``srcu_struct``
2507structure. The key benefit of these domains is that a slow SRCU reader
2508in one domain does not delay an SRCU grace period in some other domain.
2509That said, one consequence of these domains is that read-side code must
2510pass a “cookie” from srcu_read_lock() to srcu_read_unlock(), for
2511example, as follows:
2512
2513   ::
2514
2515       1 int idx;
2516       2
2517       3 idx = srcu_read_lock(&ss);
2518       4 do_something();
2519       5 srcu_read_unlock(&ss, idx);
2520
As noted above, it is legal to block within SRCU read-side critical
sections; however, with great power comes great responsibility. If you
2523block forever in one of a given domain's SRCU read-side critical
2524sections, then that domain's grace periods will also be blocked forever.
2525Of course, one good way to block forever is to deadlock, which can
2526happen if any operation in a given domain's SRCU read-side critical
2527section can wait, either directly or indirectly, for that domain's grace
2528period to elapse. For example, this results in a self-deadlock:
2529
2530   ::
2531
2532       1 int idx;
2533       2
2534       3 idx = srcu_read_lock(&ss);
2535       4 do_something();
2536       5 synchronize_srcu(&ss);
2537       6 srcu_read_unlock(&ss, idx);
2538
2539However, if line 5 acquired a mutex that was held across a
2540synchronize_srcu() for domain ``ss``, deadlock would still be
2541possible. Furthermore, if line 5 acquired a mutex that was held across a
2542synchronize_srcu() for some other domain ``ss1``, and if an
2543``ss1``-domain SRCU read-side critical section acquired another mutex
that was held across an ``ss``-domain synchronize_srcu(), deadlock
2545would again be possible. Such a deadlock cycle could extend across an
2546arbitrarily large number of different SRCU domains. Again, with great
2547power comes great responsibility.
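
A sketch of the simplest mutex-mediated case, with ``my_mutex`` being
hypothetical:

   ::

       /* Updater, elsewhere: holds my_mutex across a grace period. */
       mutex_lock(&my_mutex);
       synchronize_srcu(&ss);
       mutex_unlock(&my_mutex);

       /* Reader: can deadlock without invoking synchronize_srcu() itself. */
       idx = srcu_read_lock(&ss);
       mutex_lock(&my_mutex);  /* Waits on the updater, who waits on us. */
       do_something();
       mutex_unlock(&my_mutex);
       srcu_read_unlock(&ss, idx);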
2548
2549Unlike the other RCU flavors, SRCU read-side critical sections can run
2550on idle and even offline CPUs. This ability requires that
2551srcu_read_lock() and srcu_read_unlock() contain memory barriers,
2552which means that SRCU readers will run a bit slower than would RCU
2553readers. It also motivates the smp_mb__after_srcu_read_unlock() API,
2554which, in combination with srcu_read_unlock(), guarantees a full
2555memory barrier.
2556
Also unlike other RCU flavors, synchronize_srcu() may **not** be
invoked from CPU-hotplug notifiers, because SRCU grace periods make
use of timers, and because those timers can be temporarily “stranded”
on the outgoing CPU. This stranding of timers
2561means that timers posted to the outgoing CPU will not fire until late in
2562the CPU-hotplug process. The problem is that if a notifier is waiting on
2563an SRCU grace period, that grace period is waiting on a timer, and that
timer is stranded on the outgoing CPU, then the notifier will never be
awakened; in other words, deadlock has occurred. This same situation of
2566course also prohibits srcu_barrier() from being invoked from
2567CPU-hotplug notifiers.
2568
2569SRCU also differs from other RCU flavors in that SRCU's expedited and
2570non-expedited grace periods are implemented by the same mechanism. This
2571means that in the current SRCU implementation, expediting a future grace
2572period has the side effect of expediting all prior grace periods that
2573have not yet completed. (But please note that this is a property of the
2574current implementation, not necessarily of future implementations.) In
2575addition, if SRCU has been idle for longer than the interval specified
2576by the ``srcutree.exp_holdoff`` kernel boot parameter (25 microseconds
2577by default), and if a synchronize_srcu() invocation ends this idle
2578period, that invocation will be automatically expedited.
2579
2580As of v4.12, SRCU's callbacks are maintained per-CPU, eliminating a
2581locking bottleneck present in prior kernel versions. Although this will
2582allow users to put much heavier stress on call_srcu(), it is
2583important to note that SRCU does not yet take any special steps to deal
2584with callback flooding. So if you are posting (say) 10,000 SRCU
2585callbacks per second per CPU, you are probably totally OK, but if you
2586intend to post (say) 1,000,000 SRCU callbacks per second per CPU, please
run some tests first. SRCU just might need a few adjustments to deal with
2588that sort of load. Of course, your mileage may vary based on the speed
2589of your CPUs and the size of your memory.
2590
2591The `SRCU
2592API <https://lwn.net/Articles/609973/#RCU%20Per-Flavor%20API%20Table>`__
2593includes srcu_read_lock(), srcu_read_unlock(),
2594srcu_dereference(), srcu_dereference_check(),
2595synchronize_srcu(), synchronize_srcu_expedited(),
2596call_srcu(), srcu_barrier(), and srcu_read_lock_held(). It
2597also includes DEFINE_SRCU(), DEFINE_STATIC_SRCU(), and
2598init_srcu_struct() APIs for defining and initializing
2599``srcu_struct`` structures.
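
For example, a statically allocated domain might be defined and used as
follows (a sketch, with ``gp`` being a hypothetical SRCU-protected
pointer):

   ::

       DEFINE_SRCU(my_srcu);

       int idx = srcu_read_lock(&my_srcu);
       struct foo *p = srcu_dereference(gp, &my_srcu);
       if (p)
         do_something_with(p);
       srcu_read_unlock(&my_srcu, idx);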
2600
2601More recently, the SRCU API has added polling interfaces:
2602
2603#. start_poll_synchronize_srcu() returns a cookie identifying
2604   the completion of a future SRCU grace period and ensures
2605   that this grace period will be started.
2606#. poll_state_synchronize_srcu() returns ``true`` iff the
2607   specified cookie corresponds to an already-completed
2608   SRCU grace period.
2609#. get_state_synchronize_srcu() returns a cookie just like
2610   start_poll_synchronize_srcu() does, but differs in that
2611   it does nothing to ensure that any future SRCU grace period
2612   will be started.
2613
2614These functions are used to avoid unnecessary SRCU grace periods in
2615certain types of buffer-cache algorithms having multi-stage age-out
2616mechanisms.  The idea is that by the time the block has aged completely
2617from the cache, an SRCU grace period will be very likely to have elapsed.
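
A sketch of such use, where ``free_the_block()`` is a hypothetical
final-stage reclaimer:

   ::

       unsigned long cookie;

       cookie = start_poll_synchronize_srcu(&ss);  /* Start a grace period. */
       /* ... the block ages through the cache's stages ... */
       if (!poll_state_synchronize_srcu(&ss, cookie))
         synchronize_srcu(&ss);  /* Rare slow path: grace period not done. */
       free_the_block(block);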
2618
2619Tasks RCU
2620~~~~~~~~~
2621
2622Some forms of tracing use “trampolines” to handle the binary rewriting
2623required to install different types of probes. It would be good to be
2624able to free old trampolines, which sounds like a job for some form of
2625RCU. However, because it is necessary to be able to install a trace
2626anywhere in the code, it is not possible to use read-side markers such
2627as rcu_read_lock() and rcu_read_unlock(). In addition, it does
2628not work to have these markers in the trampoline itself, because there
2629would need to be instructions following rcu_read_unlock(). Although
2630synchronize_rcu() would guarantee that execution reached the
2631rcu_read_unlock(), it would not be able to guarantee that execution
2632had completely left the trampoline. Worse yet, in some situations
2633the trampoline's protection must extend a few instructions *prior* to
2634execution reaching the trampoline.  For example, these few instructions
2635might calculate the address of the trampoline, so that entering the
2636trampoline would be pre-ordained a surprisingly long time before execution
2637actually reached the trampoline itself.
2638
2639The solution, in the form of `Tasks
2640RCU <https://lwn.net/Articles/607117/>`__, is to have implicit read-side
2641critical sections that are delimited by voluntary context switches, that
2642is, calls to schedule(), cond_resched(), and
2643synchronize_rcu_tasks(). In addition, transitions to and from
2644userspace execution also delimit tasks-RCU read-side critical sections.
2645
2646The tasks-RCU API is quite compact, consisting only of
2647call_rcu_tasks(), synchronize_rcu_tasks(), and
2648rcu_barrier_tasks(). In ``CONFIG_PREEMPTION=n`` kernels, trampolines
2649cannot be preempted, so these APIs map to call_rcu(),
2650synchronize_rcu(), and rcu_barrier(), respectively. In
2651``CONFIG_PREEMPTION=y`` kernels, trampolines can be preempted, and these
2652three APIs are therefore implemented by separate functions that check
2653for voluntary context switches.
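
For example, tracing code might free an old trampoline as follows,
where ``unhook_trampoline()`` and ``free_trampoline()`` are
hypothetical:

   ::

       /* Prevent new tasks from entering the old trampoline. */
       unhook_trampoline(old_tramp);
       /* Wait for each task to pass through a voluntary context switch
        * or usermode execution, so that none remain in the trampoline. */
       synchronize_rcu_tasks();
       free_trampoline(old_tramp);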
2654
2655Possible Future Changes
2656-----------------------
2657
2658One of the tricks that RCU uses to attain update-side scalability is to
2659increase grace-period latency with increasing numbers of CPUs. If this
2660becomes a serious problem, it will be necessary to rework the
2661grace-period state machine so as to avoid the need for the additional
2662latency.
2663
2664RCU disables CPU hotplug in a few places, perhaps most notably in the
2665rcu_barrier() operations. If there is a strong reason to use
2666rcu_barrier() in CPU-hotplug notifiers, it will be necessary to
2667avoid disabling CPU hotplug. This would introduce some complexity, so
2668there had better be a *very* good reason.
2669
2670The tradeoff between grace-period latency on the one hand and
2671interruptions of other CPUs on the other hand may need to be
2672re-examined. The desire is of course for zero grace-period latency as
2673well as zero interprocessor interrupts undertaken during an expedited
2674grace period operation. While this ideal is unlikely to be achievable,
2675it is quite possible that further improvements can be made.
2676
2677The multiprocessor implementations of RCU use a combining tree that
2678groups CPUs so as to reduce lock contention and increase cache locality.
2679However, this combining tree does not spread its memory across NUMA
2680nodes nor does it align the CPU groups with hardware features such as
2681sockets or cores. Such spreading and alignment is currently believed to
2682be unnecessary because the hotpath read-side primitives do not access
2683the combining tree, nor does call_rcu() in the common case. If you
2684believe that your architecture needs such spreading and alignment, then
2685your architecture should also benefit from the
2686``rcutree.rcu_fanout_leaf`` boot parameter, which can be set to the
2687number of CPUs in a socket, NUMA node, or whatever. If the number of
2688CPUs is too large, use a fraction of the number of CPUs. If the number
2689of CPUs is a large prime number, well, that certainly is an
2690“interesting” architectural choice! More flexible arrangements might be
2691considered, but only if ``rcutree.rcu_fanout_leaf`` has proven
2692inadequate, and only if the inadequacy has been demonstrated by a
2693carefully run and realistic system-level workload.
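
For example, on a hypothetical system with 16 CPUs per socket, the
kernel might be booted with:

   ::

       rcutree.rcu_fanout_leaf=16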
2694
2695Please note that arrangements that require RCU to remap CPU numbers will
2696require extremely good demonstration of need and full exploration of
2697alternatives.
2698
2699RCU's various kthreads are reasonably recent additions. It is quite
2700likely that adjustments will be required to more gracefully handle
2701extreme loads. It might also be necessary to be able to relate CPU
2702utilization by RCU's kthreads and softirq handlers to the code that
2703instigated this CPU utilization. For example, RCU callback overhead
2704might be charged back to the originating call_rcu() instance, though
2705probably not in production kernels.
2706
2707Additional work may be required to provide reasonable forward-progress
2708guarantees under heavy load for grace periods and for callback
2709invocation.
2710
2711Summary
2712-------
2713
This document has presented more than two decades' worth of RCU
2715requirements. Given that the requirements keep changing, this will not
2716be the last word on this subject, but at least it serves to get an
2717important subset of the requirements set forth.
2718
2719Acknowledgments
2720---------------
2721
2722I am grateful to Steven Rostedt, Lai Jiangshan, Ingo Molnar, Oleg
2723Nesterov, Borislav Petkov, Peter Zijlstra, Boqun Feng, and Andy
2724Lutomirski for their help in rendering this article human readable, and
2725to Michelle Rankin for her support of this effort. Other contributions
2726are acknowledged in the Linux kernel's git archive.
2727