===========================
Unreliable Guide To Locking
===========================

:Author: Rusty Russell

Introduction
============

Welcome to Rusty's Remarkably Unreliable Guide to Kernel Locking
issues. This document describes the locking systems in the Linux Kernel
in 2.6.

With the wide availability of HyperThreading, and preemption in the
Linux Kernel, everyone hacking on the kernel needs to know the
fundamentals of concurrency and locking for SMP.

The Problem With Concurrency
============================

(Skip this if you know what a Race Condition is).

In a normal program, you can increment a counter like so:

::

    very_important_count++;


This is what you would expect to happen:

+------------------------------------+------------------------------------+
| Instance 1                         | Instance 2                         |
+====================================+====================================+
| read very_important_count (5)      |                                    |
+------------------------------------+------------------------------------+
| add 1 (6)                          |                                    |
+------------------------------------+------------------------------------+
| write very_important_count (6)     |                                    |
+------------------------------------+------------------------------------+
|                                    | read very_important_count (6)      |
+------------------------------------+------------------------------------+
|                                    | add 1 (7)                          |
+------------------------------------+------------------------------------+
|                                    | write very_important_count (7)     |
+------------------------------------+------------------------------------+

Table: Expected Results

This is what might happen:

+------------------------------------+------------------------------------+
| Instance 1                         | Instance 2                         |
+====================================+====================================+
| read very_important_count (5)      |                                    |
+------------------------------------+------------------------------------+
|                                    | read very_important_count (5)      |
+------------------------------------+------------------------------------+
| add 1 (6)                          |                                    |
+------------------------------------+------------------------------------+
|                                    | add 1 (6)                          |
+------------------------------------+------------------------------------+
| write very_important_count (6)     |                                    |
+------------------------------------+------------------------------------+
|                                    | write very_important_count (6)     |
+------------------------------------+------------------------------------+

Table: Possible Results

Race Conditions and Critical Regions
------------------------------------

This overlap, where the result depends on the relative timing of
multiple tasks, is called a race condition. The piece of code containing
the concurrency issue is called a critical region. And especially since
Linux started running on SMP machines, race conditions became one of the
major issues in kernel design and implementation.

Preemption can have the same effect, even if there is only one CPU: by
preempting one task during the critical region, we have exactly the same
race condition. In this case the thread which preempts might run the
critical region itself.

The solution is to recognize when these simultaneous accesses occur, and
use locks to make sure that only one instance can enter the critical
region at any time. There are many friendly primitives in the Linux
kernel to help you do this. And then there are the unfriendly
primitives, but I'll pretend they don't exist.

Locking in the Linux Kernel
===========================

If I could give you one piece of advice: never sleep with anyone crazier
than yourself. But if I had to give you advice on locking: *keep it
simple*.

Be reluctant to introduce new locks.

Strangely enough, this last one is the exact reverse of my advice when
you *have* slept with someone crazier than yourself. And you should
think about getting a big dog.

Two Main Types of Kernel Locks: Spinlocks and Mutexes
-----------------------------------------------------

There are two main types of kernel locks. The fundamental type is the
spinlock (``include/asm/spinlock.h``), which is a very simple
single-holder lock: if you can't get the spinlock, you keep trying
(spinning) until you can. Spinlocks are very small and fast, and can be
used anywhere.

The second type is a mutex (``include/linux/mutex.h``): it is like a
spinlock, but you may block holding a mutex. If you can't lock a mutex,
your task will suspend itself, and be woken up when the mutex is
released. This means the CPU can do something else while you are
waiting. There are many cases when you simply can't sleep (see
`What Functions Are Safe To Call From Interrupts? <#sleeping-things>`__),
and so have to use a spinlock instead.

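To make the difference concrete, here is a minimal sketch of both in
use (the data and lock names are invented for illustration)::

    #include <linux/spinlock.h>
    #include <linux/mutex.h>
    #include <linux/string.h>

    static DEFINE_SPINLOCK(counter_lock);   /* never sleep while holding this */
    static DEFINE_MUTEX(buf_mutex);         /* may sleep while holding this */
    static int counter;
    static char buf[64];

    void counter_bump(void)
    {
            spin_lock(&counter_lock);
            counter++;
            spin_unlock(&counter_lock);
    }

    void buf_set(const char *src)
    {
            mutex_lock(&buf_mutex);         /* may block: user context only */
            strlcpy(buf, src, sizeof(buf));
            mutex_unlock(&buf_mutex);
    }
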
Neither type of lock is recursive: see
`Deadlock: Simple and Advanced <#deadlock>`__.

Locks and Uniprocessor Kernels
------------------------------

For kernels compiled without ``CONFIG_SMP``, and without
``CONFIG_PREEMPT``, spinlocks do not exist at all. This is an excellent
design decision: when no-one else can run at the same time, there is no
reason to have a lock.

If the kernel is compiled without ``CONFIG_SMP``, but ``CONFIG_PREEMPT``
is set, then spinlocks simply disable preemption, which is sufficient to
prevent any races. For most purposes, we can think of preemption as
equivalent to SMP, and not worry about it separately.

You should always test your locking code with ``CONFIG_SMP`` and
``CONFIG_PREEMPT`` enabled, even if you don't have an SMP test box,
because it will still catch some kinds of locking bugs.

Mutexes still exist, because they are required for synchronization
between user contexts, as we will see below.

Locking Only In User Context
----------------------------

If you have a data structure which is only ever accessed from user
context, then you can use a simple mutex (``include/linux/mutex.h``) to
protect it. This is the most trivial case: you initialize the mutex.
Then you can call :c:func:`mutex_lock_interruptible()` to grab the
mutex, and :c:func:`mutex_unlock()` to release it. There is also a
:c:func:`mutex_lock()`, which should be avoided, because it will
not return if a signal is received.

Example: ``net/netfilter/nf_sockopt.c`` allows registration of new
:c:func:`setsockopt()` and :c:func:`getsockopt()` calls, with
:c:func:`nf_register_sockopt()`. Registration and de-registration
are only done on module load and unload (and boot time, where there is
no concurrency), and the list of registrations is only consulted for an
unknown :c:func:`setsockopt()` or :c:func:`getsockopt()` system
call. The ``nf_sockopt_mutex`` is perfect to protect this, especially
since the setsockopt and getsockopt calls may well sleep.

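As a sketch, the usual pattern looks something like this (``reg_lock``,
``reg_list`` and ``struct thing`` are invented names; returning
``-ERESTARTSYS`` lets the interrupted system call be restarted)::

    static DEFINE_MUTEX(reg_lock);
    static LIST_HEAD(reg_list);

    int register_thing(struct thing *t)
    {
            if (mutex_lock_interruptible(&reg_lock))
                    return -ERESTARTSYS;    /* a signal arrived while waiting */
            list_add(&t->list, &reg_list);
            mutex_unlock(&reg_lock);
            return 0;
    }
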
Locking Between User Context and Softirqs
-----------------------------------------

If a softirq shares data with user context, you have two problems.
Firstly, the current user context can be interrupted by a softirq, and
secondly, the critical region could be entered from another CPU. This is
where :c:func:`spin_lock_bh()` (``include/linux/spinlock.h``) is
used. It disables softirqs on that CPU, then grabs the lock.
:c:func:`spin_unlock_bh()` does the reverse. (The '_bh' suffix is
a historical reference to "Bottom Halves", the old name for software
interrupts. It should really be called 'spin_lock_softirq()' in a
perfect world).

Note that you can also use :c:func:`spin_lock_irq()` or
:c:func:`spin_lock_irqsave()` here, which stop hardware interrupts
as well: see `Hard IRQ Context <#hardirq-context>`__.

This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes :c:func:`local_bh_disable()`
(``include/linux/interrupt.h``), which protects you from the softirq
being run.

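A sketch of the user-context side (``pending_lock``, ``pending`` and
``struct item`` are invented names); the softirq touching the same list
would take plain :c:func:`spin_lock()`, since no other softirq can run
on its CPU::

    static DEFINE_SPINLOCK(pending_lock);
    static LIST_HEAD(pending);

    /* User context: block softirqs on this CPU, then grab the lock. */
    void queue_item(struct item *it)
    {
            spin_lock_bh(&pending_lock);
            list_add_tail(&it->list, &pending);
            spin_unlock_bh(&pending_lock);
    }
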
Locking Between User Context and Tasklets
-----------------------------------------

This is exactly the same as above, because tasklets are actually run
from a softirq.

Locking Between User Context and Timers
---------------------------------------

This, too, is exactly the same as above, because timers are actually run
from a softirq. From a locking point of view, tasklets and timers are
identical.

Locking Between Tasklets/Timers
-------------------------------

Sometimes a tasklet or timer might want to share data with another
tasklet or timer.

The Same Tasklet/Timer
~~~~~~~~~~~~~~~~~~~~~~

Since a tasklet is never run on two CPUs at once, you don't need to
worry about your tasklet being reentrant (running twice at once), even
on SMP.

Different Tasklets/Timers
~~~~~~~~~~~~~~~~~~~~~~~~~

If another tasklet/timer wants to share data with your tasklet or
timer, you will both need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` calls. :c:func:`spin_lock_bh()` is
unnecessary here, as you are already in a tasklet, and none will be run
on the same CPU.

Locking Between Softirqs
------------------------

Often a softirq might want to share data with itself or a tasklet/timer.

The Same Softirq
~~~~~~~~~~~~~~~~

The same softirq can run on the other CPUs: you can use a per-CPU array
(see `Per-CPU Data <#per-cpu>`__) for better performance. If you're
going so far as to use a softirq, you probably care about scalable
performance enough to justify the extra complexity.

You'll need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` for shared data.

Different Softirqs
~~~~~~~~~~~~~~~~~~

You'll need to use :c:func:`spin_lock()` and
:c:func:`spin_unlock()` for shared data, whether it be a timer,
tasklet, different softirq or the same or another softirq: any of them
could be running on a different CPU.

Hard IRQ Context
================

Hardware interrupts usually communicate with a tasklet or softirq.
Frequently this involves putting work in a queue, which the softirq will
take out.

Locking Between Hard IRQ and Softirqs/Tasklets
----------------------------------------------

If a hardware irq handler shares data with a softirq, you have two
concerns. Firstly, the softirq processing can be interrupted by a
hardware interrupt, and secondly, the critical region could be entered
by a hardware interrupt on another CPU. This is where
:c:func:`spin_lock_irq()` is used. It is defined to disable
interrupts on that cpu, then grab the lock.
:c:func:`spin_unlock_irq()` does the reverse.

The irq handler does not need to use :c:func:`spin_lock_irq()`,
because the softirq cannot run while the irq handler is running: it can
use :c:func:`spin_lock()`, which is slightly faster. The only
exception would be if a different hardware irq handler uses the same
lock: :c:func:`spin_lock_irq()` will stop that from interrupting us.

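Sketched out, with an invented ``fifo_lock`` and invented handlers (not
taken from any real driver)::

    static DEFINE_SPINLOCK(fifo_lock);

    /* Hard irq handler: the softirq can't interrupt us here,
       so plain spin_lock() is enough. */
    static irqreturn_t my_irq_handler(int irq, void *dev)
    {
            spin_lock(&fifo_lock);
            /* ... queue the incoming data ... */
            spin_unlock(&fifo_lock);
            return IRQ_HANDLED;
    }

    /* Softirq side: must stop the hard irq from interrupting us. */
    static void my_softirq_func(struct softirq_action *h)
    {
            spin_lock_irq(&fifo_lock);
            /* ... drain the queued data ... */
            spin_unlock_irq(&fifo_lock);
    }
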
This works perfectly for UP as well: the spin lock vanishes, and this
macro simply becomes :c:func:`local_irq_disable()`
(``include/asm/smp.h``), which protects you from the softirq/tasklet/BH
being run.

:c:func:`spin_lock_irqsave()` (``include/linux/spinlock.h``) is a
variant which saves whether interrupts were on or off in a flags word,
which is passed to :c:func:`spin_unlock_irqrestore()`. This means
that the same code can be used inside a hard irq handler (where
interrupts are already off) and in softirqs (where the irq disabling is
required).

Note that softirqs (and hence tasklets and timers) are run on return
from hardware interrupts, so :c:func:`spin_lock_irq()` also stops
these. In that sense, :c:func:`spin_lock_irqsave()` is the most
general and powerful locking function.

Locking Between Two Hard IRQ Handlers
-------------------------------------

It is rare to have to share data between two IRQ handlers, but if you
do, :c:func:`spin_lock_irqsave()` should be used: it is
architecture-specific whether all interrupts are disabled inside irq
handlers themselves.

Cheat Sheet For Locking
=======================

Pete Zaitcev gives the following summary:

- If you are in a process context (any syscall) and want to lock other
  processes out, use a mutex. You can take a mutex and sleep
  (``copy_from_user*()`` or ``kmalloc(x,GFP_KERNEL)``).

- Otherwise (== data can be touched in an interrupt), use
  :c:func:`spin_lock_irqsave()` and
  :c:func:`spin_unlock_irqrestore()`.

- Avoid holding a spinlock for more than 5 lines of code and across any
  function call (except accessors like :c:func:`readb()`).

Table of Minimum Requirements
-----------------------------

The following table lists the *minimum* locking requirements between
various contexts. In some cases, the same context can only be running on
one CPU at a time, so no locking is required for that context (eg. a
particular thread can only run on one CPU at a time, but if it needs to
share data with another thread, locking is required).

Remember the advice above: you can always use
:c:func:`spin_lock_irqsave()`, which is a superset of all other
spinlock primitives.

+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
|                  | IRQ Handler A   | IRQ Handler B   | Softirq A   | Softirq B   | Tasklet A   | Tasklet B   | Timer A   | Timer B   | User Context A   | User Context B   |
+==================+=================+=================+=============+=============+=============+=============+===========+===========+==================+==================+
| IRQ Handler A    | None            |                 |             |             |             |             |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| IRQ Handler B    | SLIS            | None            |             |             |             |             |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Softirq A        | SLI             | SLI             | SL          |             |             |             |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Softirq B        | SLI             | SLI             | SL          | SL          |             |             |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Tasklet A        | SLI             | SLI             | SL          | SL          | None        |             |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Tasklet B        | SLI             | SLI             | SL          | SL          | SL          | None        |           |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Timer A          | SLI             | SLI             | SL          | SL          | SL          | SL          | None      |           |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| Timer B          | SLI             | SLI             | SL          | SL          | SL          | SL          | SL        | None      |                  |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| User Context A   | SLI             | SLI             | SLBH        | SLBH        | SLBH        | SLBH        | SLBH      | SLBH      | None             |                  |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+
| User Context B   | SLI             | SLI             | SLBH        | SLBH        | SLBH        | SLBH        | SLBH      | SLBH      | MLI              | None             |
+------------------+-----------------+-----------------+-------------+-------------+-------------+-------------+-----------+-----------+------------------+------------------+

Table: Table of Locking Requirements

+--------+----------------------------+
| SLIS   | spin_lock_irqsave          |
+--------+----------------------------+
| SLI    | spin_lock_irq              |
+--------+----------------------------+
| SL     | spin_lock                  |
+--------+----------------------------+
| SLBH   | spin_lock_bh               |
+--------+----------------------------+
| MLI    | mutex_lock_interruptible   |
+--------+----------------------------+

Table: Legend for Locking Requirements Table

The trylock Functions
=====================

There are functions that try to acquire a lock only once and immediately
return a value telling you whether they succeeded. They can be used if
you don't need to access the data protected by the lock while some other
thread is holding it; you can acquire the lock later if you then need
access to that data.

:c:func:`spin_trylock()` does not spin but returns non-zero if it
acquires the spinlock on the first try or 0 if not. This function can be
used in all contexts like :c:func:`spin_lock()`: you must have
disabled the contexts that might interrupt you and acquire the spin
lock.

:c:func:`mutex_trylock()` does not suspend your task but returns
non-zero if it could lock the mutex on the first try or 0 if not. This
function cannot be safely used in hardware or software interrupt
contexts despite not sleeping.

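For example (a sketch; ``stats_lock`` and the counter are invented), a
path that only updates an optional statistic can simply skip the update
when the lock is contended::

    static DEFINE_SPINLOCK(stats_lock);
    static unsigned long stats;

    void stats_maybe_count(void)
    {
            /* A non-zero return value means we got the lock. */
            if (spin_trylock(&stats_lock)) {
                    stats++;
                    spin_unlock(&stats_lock);
            }
            /* Otherwise someone else holds it: drop this sample. */
    }
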
Common Examples
===============

Let's step through a simple example: a cache of number to name mappings.
The cache keeps a count of how often each of the objects is used, and
when it gets full, throws out the least used one.

All In User Context
-------------------

For our first example, we assume that all operations are in user context
(ie. from system calls), so we can sleep. This means we can use a mutex
to protect the cache and all the objects within it. Here's the code::

    #include <linux/list.h>
    #include <linux/slab.h>
    #include <linux/string.h>
    #include <linux/mutex.h>
    #include <asm/errno.h>

    struct object
    {
            struct list_head list;
            int id;
            char name[32];
            int popularity;
    };

    /* Protects the cache, cache_num, and the objects within it */
    static DEFINE_MUTEX(cache_lock);
    static LIST_HEAD(cache);
    static unsigned int cache_num = 0;
    #define MAX_CACHE_SIZE 10

    /* Must be holding cache_lock */
    static struct object *__cache_find(int id)
    {
            struct object *i;

            list_for_each_entry(i, &cache, list)
                    if (i->id == id) {
                            i->popularity++;
                            return i;
                    }
            return NULL;
    }

    /* Must be holding cache_lock */
    static void __cache_delete(struct object *obj)
    {
            BUG_ON(!obj);
            list_del(&obj->list);
            kfree(obj);
            cache_num--;
    }

    /* Must be holding cache_lock */
    static void __cache_add(struct object *obj)
    {
            list_add(&obj->list, &cache);
            if (++cache_num > MAX_CACHE_SIZE) {
                    struct object *i, *outcast = NULL;
                    list_for_each_entry(i, &cache, list) {
                            if (!outcast || i->popularity < outcast->popularity)
                                    outcast = i;
                    }
                    __cache_delete(outcast);
            }
    }

    int cache_add(int id, const char *name)
    {
            struct object *obj;

            if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                    return -ENOMEM;

            strlcpy(obj->name, name, sizeof(obj->name));
            obj->id = id;
            obj->popularity = 0;

            mutex_lock(&cache_lock);
            __cache_add(obj);
            mutex_unlock(&cache_lock);
            return 0;
    }

    void cache_delete(int id)
    {
            mutex_lock(&cache_lock);
            __cache_delete(__cache_find(id));
            mutex_unlock(&cache_lock);
    }

    int cache_find(int id, char *name)
    {
            struct object *obj;
            int ret = -ENOENT;

            mutex_lock(&cache_lock);
            obj = __cache_find(id);
            if (obj) {
                    ret = 0;
                    strcpy(name, obj->name);
            }
            mutex_unlock(&cache_lock);
            return ret;
    }

Note that we always make sure we have the cache_lock when we add,
delete, or look up the cache: both the cache infrastructure itself and
the contents of the objects are protected by the lock. In this case it's
easy, since we copy the data for the user, and never let them access the
objects directly.

There is a slight (and common) optimization here: in
:c:func:`cache_add()` we set up the fields of the object before
grabbing the lock. This is safe, as no-one else can access it until we
put it in cache.

Accessing From Interrupt Context
--------------------------------

Now consider the case where :c:func:`cache_find()` can be called
from interrupt context: either a hardware interrupt or a softirq. An
example would be a timer which deletes objects from the cache.

The change is shown below, in standard patch format: the ``-`` are lines
which are taken away, and the ``+`` are lines which are added.

::

    --- cache.c.usercontext 2003-12-09 13:58:54.000000000 +1100
    +++ cache.c.interrupt   2003-12-09 14:07:49.000000000 +1100
    @@ -12,7 +12,7 @@
             int popularity;
     };

    -static DEFINE_MUTEX(cache_lock);
    +static DEFINE_SPINLOCK(cache_lock);
     static LIST_HEAD(cache);
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10
    @@ -55,6 +55,7 @@
     int cache_add(int id, const char *name)
     {
             struct object *obj;
    +        unsigned long flags;

             if ((obj = kmalloc(sizeof(*obj), GFP_KERNEL)) == NULL)
                     return -ENOMEM;
    @@ -63,30 +64,33 @@
             obj->id = id;
             obj->popularity = 0;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return 0;
     }

     void cache_delete(int id)
     {
    -        mutex_lock(&cache_lock);
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
             __cache_delete(__cache_find(id));
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
     }

     int cache_find(int id, char *name)
     {
             struct object *obj;
             int ret = -ENOENT;
    +        unsigned long flags;

    -        mutex_lock(&cache_lock);
    +        spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj) {
                     ret = 0;
                     strcpy(name, obj->name);
             }
    -        mutex_unlock(&cache_lock);
    +        spin_unlock_irqrestore(&cache_lock, flags);
             return ret;
     }

Note that :c:func:`spin_lock_irqsave()` will turn off interrupts
if they are on, and otherwise (if we are already in an interrupt
handler) does nothing, hence these functions are safe to call from any
context.

Unfortunately, :c:func:`cache_add()` calls :c:func:`kmalloc()`
with the ``GFP_KERNEL`` flag, which is only legal in user context. I
have assumed that :c:func:`cache_add()` is still only called in
user context, otherwise this should become a parameter to
:c:func:`cache_add()`.

Exposing Objects Outside This File
----------------------------------

If our objects contained more information, it might not be sufficient to
copy the information in and out: other parts of the code might want to
keep pointers to these objects, for example, rather than looking up the
id every time. This produces two problems.

The first problem is that we use the ``cache_lock`` to protect objects:
we'd need to make this non-static so the rest of the code can use it.
This makes locking trickier, as it is no longer all in one place.

The second problem is the lifetime problem: if another structure keeps a
pointer to an object, it presumably expects that pointer to remain
valid. Unfortunately, this is only guaranteed while you hold the lock,
otherwise someone might call :c:func:`cache_delete()` and even
worse, add another object, re-using the same address.

As there is only one lock, you can't hold it forever: no-one else would
get any work done.

The solution to this problem is to use a reference count: everyone who
has a pointer to the object increases it when they first get the object,
and drops the reference count when they're finished with it. Whoever
drops it to zero knows it is unused, and can actually delete it.

Here is the code::

    --- cache.c.interrupt   2003-12-09 14:25:43.000000000 +1100
    +++ cache.c.refcnt      2003-12-09 14:33:05.000000000 +1100
    @@ -7,6 +7,7 @@
     struct object
     {
             struct list_head list;
    +        unsigned int refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -17,6 +18,35 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    +static void __object_put(struct object *obj)
    +{
    +        if (--obj->refcnt == 0)
    +                kfree(obj);
    +}
    +
    +static void __object_get(struct object *obj)
    +{
    +        obj->refcnt++;
    +}
    +
    +void object_put(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_put(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
    +void object_get(struct object *obj)
    +{
    +        unsigned long flags;
    +
    +        spin_lock_irqsave(&cache_lock, flags);
    +        __object_get(obj);
    +        spin_unlock_irqrestore(&cache_lock, flags);
    +}
    +
     /* Must be holding cache_lock */
     static struct object *__cache_find(int id)
     {
    @@ -35,6 +65,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    +        __object_put(obj);
             cache_num--;
     }

    @@ -63,6 +94,7 @@
             strlcpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    +        obj->refcnt = 1; /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -79,18 +111,15 @@
             spin_unlock_irqrestore(&cache_lock, flags);
     }

    -int cache_find(int id, char *name)
    +struct object *cache_find(int id)
     {
             struct object *obj;
    -        int ret = -ENOENT;
             unsigned long flags;

             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
    -        if (obj) {
    -                ret = 0;
    -                strcpy(name, obj->name);
    -        }
    +        if (obj)
    +                __object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
    -        return ret;
    +        return obj;
     }

We encapsulate the reference counting in the standard 'get' and 'put'
functions. Now we can return the object itself from
:c:func:`cache_find()`, which has the advantage that the user can
now sleep holding the object (eg. to :c:func:`copy_to_user()` the
name to userspace).

The other point to note is that I said a reference should be held for
every pointer to the object: thus the reference count is 1 when first
inserted into the cache. In some versions the framework does not hold a
reference count, but they are more complicated.

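For illustration, a caller might use the new interface like this (a
sketch; ``show_name()`` is an invented function and the userspace
buffer handling is simplified)::

    int show_name(int id, char __user *user_buf)
    {
            struct object *obj = cache_find(id);
            int err = 0;

            if (!obj)
                    return -ENOENT;

            /* We hold a reference: obj cannot vanish, and we may sleep. */
            if (copy_to_user(user_buf, obj->name, strlen(obj->name) + 1))
                    err = -EFAULT;
            object_put(obj);        /* drop our reference */
            return err;
    }
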
Using Atomic Operations For The Reference Count
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In practice, ``atomic_t`` would usually be used for refcnt. There are a
number of atomic operations defined in ``include/asm/atomic.h``: these
are guaranteed to be seen atomically from all CPUs in the system, so no
lock is required. In this case, it is simpler than using spinlocks,
although for anything non-trivial using spinlocks is clearer. The
:c:func:`atomic_inc()` and :c:func:`atomic_dec_and_test()`
are used instead of the standard increment and decrement operators, and
the lock is no longer used to protect the reference count itself.

::

    --- cache.c.refcnt  2003-12-09 15:00:35.000000000 +1100
    +++ cache.c.refcnt-atomic   2003-12-11 15:49:42.000000000 +1100
    @@ -7,7 +7,7 @@
     struct object
     {
             struct list_head list;
    -        unsigned int refcnt;
    +        atomic_t refcnt;
             int id;
             char name[32];
             int popularity;
    @@ -18,33 +18,15 @@
     static unsigned int cache_num = 0;
     #define MAX_CACHE_SIZE 10

    -static void __object_put(struct object *obj)
    -{
    -        if (--obj->refcnt == 0)
    -                kfree(obj);
    -}
    -
    -static void __object_get(struct object *obj)
    -{
    -        obj->refcnt++;
    -}
    -
     void object_put(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_put(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        if (atomic_dec_and_test(&obj->refcnt))
    +                kfree(obj);
     }

     void object_get(struct object *obj)
     {
    -        unsigned long flags;
    -
    -        spin_lock_irqsave(&cache_lock, flags);
    -        __object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        atomic_inc(&obj->refcnt);
     }

     /* Must be holding cache_lock */
    @@ -65,7 +47,7 @@
     {
             BUG_ON(!obj);
             list_del(&obj->list);
    -        __object_put(obj);
    +        object_put(obj);
             cache_num--;
     }

    @@ -94,7 +76,7 @@
             strlcpy(obj->name, name, sizeof(obj->name));
             obj->id = id;
             obj->popularity = 0;
    -        obj->refcnt = 1; /* The cache holds a reference */
    +        atomic_set(&obj->refcnt, 1); /* The cache holds a reference */

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);
    @@ -119,7 +101,7 @@
             spin_lock_irqsave(&cache_lock, flags);
             obj = __cache_find(id);
             if (obj)
    -                __object_get(obj);
    +                object_get(obj);
             spin_unlock_irqrestore(&cache_lock, flags);
             return obj;
     }

Protecting The Objects Themselves
---------------------------------

In these examples, we assumed that the objects (except the reference
counts) never changed once they are created. If we wanted to allow the
name to change, there are three possibilities:

- You can make ``cache_lock`` non-static, and tell people to grab that
  lock before changing the name in any object.

- You can provide a :c:func:`cache_obj_rename()` which grabs this
  lock and changes the name for the caller, and tell everyone to use
  that function.

- You can make the ``cache_lock`` protect only the cache itself, and
  use another lock to protect the name.

Theoretically, you can make the locks as fine-grained as one lock for
every field, for every object. In practice, the most common variants
are:

- One lock which protects the infrastructure (the ``cache`` list in
  this example) and all the objects. This is what we have done so far.

- One lock which protects the infrastructure (including the list
  pointers inside the objects), and one lock inside the object which
  protects the rest of that object.

- Multiple locks to protect the infrastructure (eg. one lock per hash
  chain), possibly with a separate per-object lock.

Here is the "lock-per-object" implementation:

::

    --- cache.c.refcnt-atomic   2003-12-11 15:50:54.000000000 +1100
    +++ cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    @@ -6,11 +6,17 @@

     struct object
     {
    +        /* These two protected by cache_lock. */
             struct list_head list;
    +        int popularity;
    +
             atomic_t refcnt;
    +
    +        /* Doesn't change once created. */
             int id;
    +
    +        spinlock_t lock; /* Protects the name */
             char name[32];
    -        int popularity;
     };

     static DEFINE_SPINLOCK(cache_lock);
    @@ -77,6 +84,7 @@
             obj->id = id;
             obj->popularity = 0;
             atomic_set(&obj->refcnt, 1); /* The cache holds a reference */
    +        spin_lock_init(&obj->lock);

             spin_lock_irqsave(&cache_lock, flags);
             __cache_add(obj);

Note that I decided that the popularity count should be protected by the
``cache_lock`` rather than the per-object lock: this is because it (like
the :c:type:`struct list_head <list_head>` inside the object)
is logically part of the infrastructure. This way, I don't need to grab
the lock of every object in :c:func:`__cache_add()` when seeking
the least popular.

I also decided that the id member is unchangeable, so I don't need to
grab each object lock in :c:func:`__cache_find()` to examine the
id: the object lock is only used by a caller who wants to read or write
the name field.

Note also that I added a comment describing what data was protected by
which locks. This is extremely important, as it describes the runtime
behavior of the code, and can be hard to glean from just reading the
code. And as Alan Cox says, “Lock data, not code”.

Common Problems
===============

Deadlock: Simple and Advanced
-----------------------------

There is a coding bug where a piece of code tries to grab a spinlock
twice: it will spin forever, waiting for the lock to be released
(spinlocks, rwlocks and mutexes are not recursive in Linux). This is
trivial to diagnose: not a
stay-up-five-nights-talk-to-fluffy-code-bunnies kind of problem.

For a slightly more complex case, imagine you have a region shared by a
softirq and user context. If you use a :c:func:`spin_lock()` call
to protect it, it is possible that the user context will be interrupted
by the softirq while it holds the lock, and the softirq will then spin
forever trying to get the same lock.

Both of these are called deadlock, and as shown above, it can occur even
with a single CPU (although not on UP compiles, since spinlocks vanish
on kernel compiles with ``CONFIG_SMP``\ =n; you'll still get data
corruption in the second example).

This complete lockup is easy to diagnose: on SMP boxes the watchdog
timer or compiling with ``DEBUG_SPINLOCK`` set
(``include/linux/spinlock.h``) will show this up immediately when it
happens.

A more complex problem is the so-called 'deadly embrace', involving two
or more locks. Say you have a hash table: each entry in the table is a
spinlock, and a chain of hashed objects. Inside a softirq handler, you
sometimes want to alter an object from one place in the hash to another:
you grab the spinlock of the old hash chain and the spinlock of the new
hash chain, and delete the object from the old one, and insert it in the
new one.

There are two problems here. First, if your code ever tries to move the
object to the same chain, it will deadlock with itself as it tries to
lock it twice. Secondly, if the same softirq on another CPU is trying to
move another object in the reverse direction, the following could
happen:

+-----------------------+-----------------------+
| CPU 1                 | CPU 2                 |
+=======================+=======================+
| Grab lock A -> OK     | Grab lock B -> OK     |
+-----------------------+-----------------------+
| Grab lock B -> spin   | Grab lock A -> spin   |
+-----------------------+-----------------------+

Table: Consequences

The two CPUs will spin forever, waiting for the other to give up their
lock. It will look, smell, and feel like a crash.

Preventing Deadlock
-------------------

Textbooks will tell you that if you always lock in the same order, you
will never get this kind of deadlock. Practice will tell you that this
approach doesn't scale: when I create a new lock, I don't understand
enough of the kernel to figure out where in the 5000 lock hierarchy it
will fit.

The best locks are encapsulated: they never get exposed in headers, and
are never held around calls to non-trivial functions outside the same
file. You can read through this code and see that it will never
deadlock, because it never tries to grab another lock while it has that
one. People using your code don't even need to know you are using a
lock.

A classic problem here is when you provide callbacks or hooks: if you
call these with the lock held, you risk simple deadlock, or a deadly
embrace (who knows what the callback will do?). Remember, the other
programmers are out to get you, so don't do this.

Overzealous Prevention Of Deadlocks
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Deadlocks are problematic, but not as bad as data corruption. Code which
grabs a read lock, searches a list, fails to find what it wants, drops
the read lock, grabs a write lock and inserts the object has a race
condition.

If you don't see why, please stay the fuck away from my code.

Racing Timers: A Kernel Pastime
-------------------------------

Timers can produce their own special problems with races. Consider a
collection of objects (list, hash, etc) where each object has a timer
which is due to destroy it.

If you want to destroy the entire collection (say on module removal),
you might do the following::

    /* THIS CODE BAD BAD BAD BAD: IF IT WAS ANY WORSE IT WOULD USE
       HUNGARIAN NOTATION */
    spin_lock_bh(&list_lock);

    while (list) {
            struct foo *next = list->next;
            del_timer(&list->timer);
            kfree(list);
            list = next;
    }

    spin_unlock_bh(&list_lock);


Sooner or later, this will crash on SMP, because a timer can have just
gone off before the :c:func:`spin_lock_bh()`, and it will only get
the lock after we :c:func:`spin_unlock_bh()`, and then try to free
the element (which has already been freed!).

This can be avoided by checking the result of
:c:func:`del_timer()`: if it returns 1, the timer has been deleted.
If 0, it means (in this case) that it is currently running, so we can
do::

    retry:
            spin_lock_bh(&list_lock);

            while (list) {
                    struct foo *next = list->next;
                    if (!del_timer(&list->timer)) {
                            /* Give timer a chance to delete this */
                            spin_unlock_bh(&list_lock);
                            goto retry;
                    }
                    kfree(list);
                    list = next;
            }

            spin_unlock_bh(&list_lock);


Another common problem is deleting timers which restart themselves (by
calling :c:func:`add_timer()` at the end of their timer function).
Because this is a fairly common case which is prone to races, you should
use :c:func:`del_timer_sync()` (``include/linux/timer.h``) to
handle this case. It returns the number of times the timer had to be
deleted before we finally stopped it from adding itself back in.

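For instance, tearing down an object whose timer rearms itself might
look like this (a sketch assuming the old ``unsigned long data`` timer
callback convention of this document's era; ``struct foo`` and both
functions are invented)::

    /* Timer function which requeues itself. */
    static void foo_poll(unsigned long data)
    {
            struct foo *f = (struct foo *)data;

            /* ... do the periodic work ... */
            mod_timer(&f->timer, jiffies + HZ);
    }

    void foo_destroy(struct foo *f)
    {
            /* Waits for a running handler and stops the re-adding. */
            del_timer_sync(&f->timer);
            kfree(f);
    }
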
Locking Speed
=============

There are three main things to worry about when considering the speed of
some code which does locking. First is concurrency: how many things are
going to be waiting while someone else is holding a lock. Second is the
time taken to actually acquire and release an uncontended lock. Third is
using fewer, or smarter locks. I'm assuming that the lock is used fairly
often: otherwise, you wouldn't be concerned about efficiency.

Concurrency depends on how long the lock is usually held: you should
hold the lock for as long as needed, but no longer. In the cache
example, we always create the object without the lock held, and then
grab the lock only when we are ready to insert it in the list.

Acquisition times depend on how much damage the lock operations do to
the pipeline (pipeline stalls) and how likely it is that this CPU was
the last one to grab the lock (ie. is the lock cache-hot for this CPU):
on a machine with more CPUs, this likelihood drops fast. Consider a
700MHz Intel Pentium III: an instruction takes about 0.7ns, an atomic
increment takes about 58ns, a lock which is cache-hot on this CPU takes
160ns, and a cacheline transfer from another CPU takes an additional 170
to 360ns. (These figures are from Paul McKenney's `Linux Journal RCU
article <http://www.linuxjournal.com/article.php?sid=6993>`__).

These two aims conflict: holding a lock for a short time might be done
by splitting locks into parts (such as in our final per-object-lock
example), but this increases the number of lock acquisitions, and the
results are often slower than having a single lock. This is another
reason to advocate locking simplicity.

The third concern is addressed below: there are some methods to reduce
the amount of locking which needs to be done.

Read/Write Lock Variants
------------------------

Both spinlocks and mutexes have read/write variants: ``rwlock_t`` and
:c:type:`struct rw_semaphore <rw_semaphore>`. These divide
users into two classes: the readers and the writers. If you are only
reading the data, you can get a read lock, but to write to the data you
need the write lock. Many people can hold a read lock, but a writer must
be sole holder.

If your code divides neatly along reader/writer lines (as our cache code
does), and the lock is held by readers for significant lengths of time,
using these locks can help. They are slightly slower than the normal
locks though, so in practice ``rwlock_t`` is not usually worthwhile.

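A sketch of the reader/writer split with the sleeping variant
(``conf_rwsem`` and ``struct conf`` are invented)::

    static DECLARE_RWSEM(conf_rwsem);
    static struct conf current_conf;

    void conf_read(struct conf *out)
    {
            down_read(&conf_rwsem);         /* many readers may hold this */
            *out = current_conf;
            up_read(&conf_rwsem);
    }

    void conf_write(const struct conf *in)
    {
            down_write(&conf_rwsem);        /* excludes readers and writers */
            current_conf = *in;
            up_write(&conf_rwsem);
    }
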
Avoiding Locks: Read Copy Update
--------------------------------

There is a special method of read/write locking called Read Copy Update.
Using RCU, the readers can avoid taking a lock altogether: as we expect
our cache to be read more often than updated (otherwise the cache is a
waste of time), it is a candidate for this optimization.

How do we get rid of read locks? Getting rid of read locks means that
writers may be changing the list underneath the readers. That is
actually quite simple: we can read a linked list while an element is
being added if the writer adds the element very carefully. For example,
adding ``new`` to a single linked list called ``list``::

    new->next = list->next;
    wmb();
    list->next = new;


The :c:func:`wmb()` is a write memory barrier. It ensures that the
first operation (setting the new element's ``next`` pointer) is complete
and will be seen by all CPUs, before the second operation is (putting
the new element into the list). This is important, since modern
compilers and modern CPUs can both reorder instructions unless told
otherwise: we want a reader to either not see the new element at all, or
see the new element with the ``next`` pointer correctly pointing at the
rest of the list.

Fortunately, there is a function to do this for standard
:c:type:`struct list_head <list_head>` lists:
:c:func:`list_add_rcu()` (``include/linux/list.h``).

Removing an element from the list is even simpler: we replace the
pointer to the old element with a pointer to its successor, and readers
will either see it, or skip over it.

::

    list->next = old->next;


There is :c:func:`list_del_rcu()` (``include/linux/list.h``) which
does this (the normal version poisons the old object, which we don't
want).

The reader must also be careful: some CPUs can look through the ``next``
pointer to start reading the contents of the next element early, but
don't realize that the pre-fetched contents is wrong when the ``next``
pointer changes underneath them. Once again, there is a
:c:func:`list_for_each_entry_rcu()` (``include/linux/list.h``)
to help you. Of course, writers can just use
:c:func:`list_for_each_entry()`, since there cannot be two
simultaneous writers.

Our final dilemma is this: when can we actually destroy the removed
element? Remember, a reader might be stepping through this element in
the list right now: if we free this element and the ``next`` pointer
changes, the reader will jump off into garbage and crash. We need to
wait until we know that all the readers who were traversing the list
when we deleted the element are finished. We use
:c:func:`call_rcu()` to register a callback which will actually
destroy the object once all pre-existing readers are finished.
Alternatively, :c:func:`synchronize_rcu()` may be used to block
until all pre-existing readers are finished.

But how does Read Copy Update know when the readers are finished? The
method is this: firstly, the readers always traverse the list inside
:c:func:`rcu_read_lock()`/:c:func:`rcu_read_unlock()` pairs:
these simply disable preemption so the reader won't go to sleep while
reading the list.

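Putting the reader side together for our cache's list, a sketch of a
complete lockless lookup looks like this (matching the
``__cache_find()`` conversion shown in the patch below)::

    struct object *i;

    rcu_read_lock();
    list_for_each_entry_rcu(i, &cache, list) {
            if (i->id == id) {
                    /* use i here, but don't sleep, and don't keep
                       the pointer after rcu_read_unlock() */
                    break;
            }
    }
    rcu_read_unlock();
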
RCU then waits until every other CPU has slept at least once: since
readers cannot sleep, we know that any readers which were traversing the
list during the deletion are finished, and the callback is triggered.
The real Read Copy Update code is a little more optimized than this, but
this is the fundamental idea.

::

    --- cache.c.perobjectlock   2003-12-11 17:15:03.000000000 +1100
    +++ cache.c.rcupdate    2003-12-11 17:55:14.000000000 +1100
    @@ -1,15 +1,18 @@
     #include <linux/list.h>
     #include <linux/slab.h>
     #include <linux/string.h>
    +#include <linux/rcupdate.h>
     #include <linux/mutex.h>
     #include <asm/errno.h>

     struct object
     {
    -        /* These two protected by cache_lock. */
    +        /* This is protected by RCU */
             struct list_head list;
             int popularity;

    +        struct rcu_head rcu;
    +
             atomic_t refcnt;

             /* Doesn't change once created. */
    @@ -40,7 +43,7 @@
     {
             struct object *i;

    -        list_for_each_entry(i, &cache, list) {
    +        list_for_each_entry_rcu(i, &cache, list) {
                     if (i->id == id) {
                             i->popularity++;
                             return i;
    @@ -49,19 +52,25 @@
             return NULL;
     }

    +/* Final discard done once we know no readers are looking. */
    +static void cache_delete_rcu(void *arg)
    +{
    +        object_put(arg);
    +}
    +
     /* Must be holding cache_lock */
     static void __cache_delete(struct object *obj)
     {
             BUG_ON(!obj);
    -        list_del(&obj->list);
    -        object_put(obj);
    +        list_del_rcu(&obj->list);
             cache_num--;
    +        call_rcu(&obj->rcu, cache_delete_rcu);
     }

     /* Must be holding cache_lock */
     static void __cache_add(struct object *obj)
     {
    -        list_add(&obj->list, &cache);
    +        list_add_rcu(&obj->list, &cache);
             if (++cache_num > MAX_CACHE_SIZE) {
                     struct object *i, *outcast = NULL;
                     list_for_each_entry(i, &cache, list) {
    @@ -104,12 +114,11 @@
     struct object *cache_find(int id)
     {
             struct object *obj;
    -        unsigned long flags;

    -        spin_lock_irqsave(&cache_lock, flags);
    +        rcu_read_lock();
             obj = __cache_find(id);
             if (obj)
                     object_get(obj);
    -        spin_unlock_irqrestore(&cache_lock, flags);
    +        rcu_read_unlock();
             return obj;
     }

Note that the reader will alter the popularity member in
:c:func:`__cache_find()`, and now it doesn't hold a lock. One
solution would be to make it an ``atomic_t``, but for this usage, we
don't really care about races: an approximate result is good enough, so
I didn't change it.

The result is that :c:func:`cache_find()` requires no
synchronization with any other functions, so is almost as fast on SMP as
it would be on UP.

There is a further optimization possible here: remember our original
cache code, where there were no reference counts and the caller simply
held the lock whenever using the object? This is still possible: if you
hold the lock, no one can delete the object, so you don't need to get
and put the reference count.

Now, because the 'read lock' in RCU is simply disabling preemption, a
caller which always has preemption disabled between calling
:c:func:`cache_find()` and :c:func:`object_put()` does not
need to actually get and put the reference count: we could expose
:c:func:`__cache_find()` by making it non-static, and such
callers could simply call that.

The benefit here is that the reference count is not written to: the
object is not altered in any way, which is much faster on SMP machines
due to caching.

Per-CPU Data
------------

Another technique for avoiding locking which is used fairly widely is to
duplicate information for each CPU. For example, if you wanted to keep a
count of a common condition, you could use a spin lock and a single
counter. Nice and simple.

If that was too slow (it's usually not, but if you've got a really big
machine to test on and can show that it is), you could instead use a
counter for each CPU, then none of them need an exclusive lock. See
:c:func:`DEFINE_PER_CPU()`, :c:func:`get_cpu_var()` and
:c:func:`put_cpu_var()` (``include/linux/percpu.h``).

Of particular use for simple per-cpu counters is the ``local_t`` type,
and the :c:func:`cpu_local_inc()` and related functions, which are
more efficient than simple code on some architectures
(``include/asm/local.h``).

Note that there is no simple, reliable way of getting an exact value of
such a counter, without introducing more locks. This is not a problem
for some uses.

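A sketch of such a per-CPU counter (``hits`` is an invented name)::

    #include <linux/percpu.h>

    static DEFINE_PER_CPU(unsigned long, hits);

    void count_hit(void)
    {
            /* get_cpu_var() disables preemption while we touch our copy */
            get_cpu_var(hits)++;
            put_cpu_var(hits);
    }
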
Data Which Is Mostly Used By An IRQ Handler
-------------------------------------------

If data is always accessed from within the same IRQ handler, you don't
need a lock at all: the kernel already guarantees that the irq handler
will not run simultaneously on multiple CPUs.

Manfred Spraul points out that you can still do this, even if the data
is very occasionally accessed in user context or softirqs/tasklets. The
irq handler doesn't use a lock, and all other accesses are done like
so::

    spin_lock(&lock);
    disable_irq(irq);
    ...
    enable_irq(irq);
    spin_unlock(&lock);

The :c:func:`disable_irq()` prevents the irq handler from running
(and waits for it to finish if it's currently running on other CPUs).
The spinlock prevents any other accesses happening at the same time.
Naturally, this is slower than just a :c:func:`spin_lock_irq()`
call, so it only makes sense if this type of access happens extremely
rarely.

What Functions Are Safe To Call From Interrupts?
================================================

Many functions in the kernel sleep (ie. call schedule()) directly or
indirectly: you can never call them while holding a spinlock, or with
preemption disabled. This also means you need to be in user context:
calling them from an interrupt is illegal.

Some Functions Which Sleep
--------------------------

The most common ones are listed below, but you usually have to read the
code to find out if other calls are safe. If everyone else who calls it
can sleep, you probably need to be able to sleep, too. In particular,
registration and deregistration functions usually expect to be called
from user context, and can sleep.

- Accesses to userspace:

  - :c:func:`copy_from_user()`

  - :c:func:`copy_to_user()`

  - :c:func:`get_user()`

  - :c:func:`put_user()`

- ``kmalloc(GFP_KERNEL)``

- :c:func:`mutex_lock_interruptible()` and
  :c:func:`mutex_lock()`

  There is a :c:func:`mutex_trylock()` which does not sleep.
  Still, it must not be used inside interrupt context since its
  implementation is not safe for that. :c:func:`mutex_unlock()`
  will also never sleep. It cannot be used in interrupt context either
  since a mutex must be released by the same task that acquired it.

Some Functions Which Don't Sleep
--------------------------------

Some functions are safe to call from any context, or holding almost any
lock.

- :c:func:`printk()`

- :c:func:`kfree()`

- :c:func:`add_timer()` and :c:func:`del_timer()`

Mutex API reference
===================

.. kernel-doc:: include/linux/mutex.h
   :internal:

.. kernel-doc:: kernel/locking/mutex.c
   :export:

Futex API reference
===================

.. kernel-doc:: kernel/futex.c
   :internal:

Further reading
===============

- ``Documentation/locking/spinlocks.txt``: Linus Torvalds' spinlocking
  tutorial in the kernel sources.

- Unix Systems for Modern Architectures: Symmetric Multiprocessing and
  Caching for Kernel Programmers:

  Curt Schimmel's very good introduction to kernel level locking (not
  written for Linux, but nearly everything applies). The book is
  expensive, but really worth every penny to understand SMP locking.
  [ISBN: 0201633388]

Thanks
======

Thanks to Telsa Gwynne for DocBooking, neatening and adding style.

Thanks to Martin Pool, Philipp Rumpf, Stephen Rothwell, Paul Mackerras,
Ruedi Aschwanden, Alan Cox, Manfred Spraul, Tim Waugh, Pete Zaitcev,
James Morris, Robert Love, Paul McKenney, John Ashby for proofreading,
correcting, flaming, commenting.

Thanks to the cabal for having no influence on this document.

Glossary
========

preemption
  Prior to 2.5, or when ``CONFIG_PREEMPT`` is unset, processes in user
  context inside the kernel would not preempt each other (ie. you had that
  CPU until you gave it up, except for interrupts). With the addition of
  ``CONFIG_PREEMPT`` in 2.5.4, this changed: when in user context, higher
  priority tasks can "cut in": spinlocks were changed to disable
  preemption, even on UP.

bh
  Bottom Half: for historical reasons, functions with '_bh' in them often
  now refer to any software interrupt, e.g. :c:func:`spin_lock_bh()`
  blocks any software interrupt on the current CPU. Bottom halves are
  deprecated, and will eventually be replaced by tasklets. Only one bottom
  half will be running at any time.

Hardware Interrupt / Hardware IRQ
  Hardware interrupt request. :c:func:`in_irq()` returns true in a
  hardware interrupt handler.

Interrupt Context
  Not user context: processing a hardware irq or software irq. Indicated
  by the :c:func:`in_interrupt()` macro returning true.

SMP
  Symmetric Multi-Processor: kernels compiled for multiple-CPU machines
  (``CONFIG_SMP=y``).

Software Interrupt / softirq
  Software interrupt handler. :c:func:`in_irq()` returns false;
  :c:func:`in_softirq()` returns true. Tasklets and softirqs both
  fall into the category of 'software interrupts'.

  Strictly speaking a softirq is one of up to 32 enumerated software
  interrupts which can run on multiple CPUs at once. Sometimes used to
  refer to tasklets as well (ie. all software interrupts).

tasklet
  A dynamically-registrable software interrupt, which is guaranteed to
  only run on one CPU at a time.

timer
  A dynamically-registrable software interrupt, which is run at (or close
  to) a given time. When running, it is just like a tasklet (in fact, they
  are called from the TIMER_SOFTIRQ).

UP
  Uni-Processor: Non-SMP (``CONFIG_SMP=n``).

User Context
  The kernel executing on behalf of a particular process (ie. a system
  call or trap) or kernel thread. You can tell which process with the
  ``current`` macro. Not to be confused with userspace. Can be
  interrupted by software or hardware interrupts.

Userspace
  A process executing its own code outside the kernel.